[[ha-aa-db]]
=== Database

The first step is installing the database that sits at the heart of the cluster. When we’re talking about High Availability, however, we’re talking about not just one database, but several (for redundancy), and a means to keep them synchronised. In this case, we’re going to choose the MySQL database, along with Galera for synchronous multi-master replication.

The choice of database isn’t a foregone conclusion; you’re not required to use MySQL. It is, however, a fairly common choice in OpenStack installations, so we’ll cover it here.

[[ha-aa-db-mysql-galera]]
==== MySQL with Galera

Rather than starting with a vanilla version of MySQL and then adding Galera to it, you will want to install a version of MySQL patched for wsrep (Write Set REPlication) from https://launchpad.net/codership-mysql/0.7. Note that the installation requirements are a bit touchy; you will want to make sure to read the README file so you don’t miss any steps.

Next, download Galera itself from https://launchpad.net/galera/+download. Go ahead and install the +*.rpm+ or +*.deb+ packages, taking care of any dependencies that your system doesn’t already have installed.

Once you’ve completed the installation, you’ll need to make a few configuration changes.

In the system-wide +my.cnf+ file, make sure mysqld isn’t bound to 127.0.0.1, and that +/etc/mysql/conf.d/+ is included. Typically you can find this file at +/etc/my.cnf+:

----
[mysqld]
...
!includedir /etc/mysql/conf.d/
...
#bind-address = 127.0.0.1
----

When adding a new node, you must configure it with a MySQL account that can access the other nodes, so that it can request a state snapshot from one of those existing nodes. First, specify that account information in +/etc/mysql/conf.d/wsrep.cnf+:

----
wsrep_sst_auth=wsrep_sst:wspass
----

Next, connect as root and grant privileges to that user:

----
$ mysql -e "SET wsrep_on=OFF; GRANT ALL ON *.* TO wsrep_sst@'%' IDENTIFIED BY 'wspass';"
----

You’ll also need to remove user accounts with empty usernames, as they cause problems:

----
$ mysql -e "SET wsrep_on=OFF; DELETE FROM mysql.user WHERE user='';"
----

You'll also need to set certain mandatory configuration options within MySQL itself. These include:

----
query_cache_size=0
binlog_format=ROW
default_storage_engine=innodb
innodb_autoinc_lock_mode=2
innodb_doublewrite=1
----

Finally, make sure that the nodes can reach each other through the firewall. This might mean adjusting iptables, as in:

----
# iptables --insert RH-Firewall-1-INPUT 1 --proto tcp --source <my IP>/24 --destination <my IP>/32 --dport 3306 -j ACCEPT
# iptables --insert RH-Firewall-1-INPUT 1 --proto tcp --source <my IP>/24 --destination <my IP>/32 --dport 4567 -j ACCEPT
----

It might also mean configuring any NAT firewall between nodes to allow direct connections, or disabling SELinux, or configuring it to allow mysqld to listen on sockets at unprivileged ports.

Now you’re ready to actually create the cluster.

===== Creating the cluster

To create the cluster, first start a single instance; this bootstraps the cluster. The rest of the MySQL instances then connect to that cluster. For example, if you started on +10.0.0.10+ by executing the command:

----
# service mysql start wsrep_cluster_address=gcomm://
----

you could then connect to that cluster on the rest of the nodes by referencing the address of that node, as in:

----
# service mysql start wsrep_cluster_address=gcomm://10.0.0.10
----

You also have the option to set the +wsrep_cluster_address+ in the +/etc/mysql/conf.d/wsrep.cnf+ file, or within the client itself. (In fact, for some systems, such as MariaDB or Percona, this may be your only option.)
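If you prefer the file-based approach, a minimal sketch of the relevant +wsrep.cnf+ settings might look like the following. The provider path and the cluster name here are assumptions for illustration (the library location varies by distribution), and the address simply reuses the +10.0.0.10+ node from the example above; adjust all of these for your own environment:

----
# /etc/mysql/conf.d/wsrep.cnf  (illustrative sketch; values are assumptions)
[mysqld]
# Path to the Galera provider library; check where your packages installed it.
wsrep_provider=/usr/lib/galera/libgalera_smm.so
# Name shared by every node in the cluster.
wsrep_cluster_name="openstack"
# Address of an existing node to join; an empty gcomm:// bootstraps a new cluster.
wsrep_cluster_address="gcomm://10.0.0.10"
# Credentials for state snapshot transfers, matching the account created earlier.
wsrep_sst_auth=wsrep_sst:wspass
----

With settings along these lines in place, a plain +service mysql start+ should join the node to the cluster without any extra arguments on the command line.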
To set the +wsrep_cluster_address+ from within the client, and to check on the status of the cluster, open the MySQL client and examine the various +wsrep+ parameters:

----
mysql> SET GLOBAL wsrep_cluster_address='gcomm://10.0.0.10';
mysql> SHOW STATUS LIKE 'wsrep%';
----

You should see a status that looks something like this:

----
mysql> show status like 'wsrep%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_local_state_uuid     | 111fc28b-1b05-11e1-0800-e00ec5a7c930 |
| wsrep_protocol_version     | 1                                    |
| wsrep_last_committed       | 0                                    |
| wsrep_replicated           | 0                                    |
| wsrep_replicated_bytes     | 0                                    |
| wsrep_received             | 2                                    |
| wsrep_received_bytes       | 134                                  |
| wsrep_local_commits        | 0                                    |
| wsrep_local_cert_failures  | 0                                    |
| wsrep_local_bf_aborts      | 0                                    |
| wsrep_local_replays        | 0                                    |
| wsrep_local_send_queue     | 0                                    |
| wsrep_local_send_queue_avg | 0.000000                             |
| wsrep_local_recv_queue     | 0                                    |
| wsrep_local_recv_queue_avg | 0.000000                             |
| wsrep_flow_control_paused  | 0.000000                             |
| wsrep_flow_control_sent    | 0                                    |
| wsrep_flow_control_recv    | 0                                    |
| wsrep_cert_deps_distance   | 0.000000                             |
| wsrep_apply_oooe           | 0.000000                             |
| wsrep_apply_oool           | 0.000000                             |
| wsrep_apply_window         | 0.000000                             |
| wsrep_commit_oooe          | 0.000000                             |
| wsrep_commit_oool          | 0.000000                             |
| wsrep_commit_window        | 0.000000                             |
| wsrep_local_state          | 4                                    |
| wsrep_local_state_comment  | Synced (6)                           |
| wsrep_cert_index_size      | 0                                    |
| wsrep_cluster_conf_id      | 1                                    |
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_state_uuid   | 111fc28b-1b05-11e1-0800-e00ec5a7c930 |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
| wsrep_local_index          | 0                                    |
| wsrep_provider_name        | Galera                               |
| wsrep_provider_vendor      | Codership Oy                         |
| wsrep_provider_version     | 21.1.0(r86)                          |
| wsrep_ready                | ON                                   |
+----------------------------+--------------------------------------+
38 rows in set (0.01 sec)
----

[[ha-aa-db-galera-monitoring]]
==== Galera Monitoring Scripts

(Coming soon. In the meantime, a minimal health-check sketch appears at the end of this section.)

==== Other ways to provide a Highly Available database

MySQL with Galera is by no means the only way to achieve database HA. MariaDB (https://mariadb.org/) and Percona (http://www.percona.com/) also work with Galera. You also have the option to use Postgres, which has its own replication mechanisms, or some other database HA solution.
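Until those monitoring scripts are available, a minimal health check of the kind they might wrap can be sketched with the ordinary +mysql+ client, using the same +wsrep_ready+ status variable shown in the output above. This is only an illustrative sketch: it assumes the client can authenticate without prompting (for example, via a +.my.cnf+ file), and a fuller check would also look at +wsrep_cluster_status+ and +wsrep_cluster_size+:

----
#!/bin/sh
# Illustrative Galera health-check sketch (not the monitoring scripts above).
# Succeeds only if the local node reports wsrep_ready = ON.
ready=$(mysql -N -e "SHOW STATUS LIKE 'wsrep_ready';" | awk '{print $2}')
if [ "$ready" = "ON" ]; then
    echo "Galera node is synced and ready"
    exit 0
else
    echo "Galera node is NOT ready (wsrep_ready=${ready:-unknown})"
    exit 1
fi
----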