From 0cd5218f50be4697a43f393036f6c70af0a9a98b Mon Sep 17 00:00:00 2001
From: Doug Baer
Date: Thu, 14 Aug 2014 12:36:29 -0700
Subject: [PATCH] Corrects typos and provides clarification

Closes-Bug: #1352053

Minor formatting, capitalization, and verb tense corrections.
Corrections to IP addresses in some examples, and replacement of "VIP" with "virtual IP" for simplicity.
Calling out the split-brain possibility in a 2-node cluster with the required quorum override.

Amended with feedback from Andreas Jaeger and Christian Berendt on patch sets 1 and 2.

Change-Id: I1cc532d73483ad82d1ceaec92a8caf11071a7d0e
---
 .../api/section_api_pacemaker.xml | 2 +-
 doc/high-availability-guide/api/section_glance_api.xml | 8 ++++----
 .../api/section_neutron_server.xml | 10 +++++-----
 doc/high-availability-guide/ch_intro.xml | 2 +-
 doc/high-availability-guide/ch_network.xml | 2 +-
 .../controller/section_mysql.xml | 10 +++++-----
 .../ha_aa_controllers/section_memcached.xml | 2 +-
 .../section_run_openstack_api_and_schedulers.xml | 4 ++--
 .../ha_aa_db/section_ha_aa_db_mysql_galera.xml | 2 +-
 ...n_configure_openstack_services_to_user_rabbitmq.xml | 2 +-
 ...section_highly_available_neutron_metadata_agent.xml | 2 +-
 .../pacemaker/section_install_packages.xml | 4 ++--
 .../pacemaker/section_set_basic_cluster_properties.xml | 7 +++----
 .../pacemaker/section_start_pacemaker.xml | 2 +-
 .../pacemaker/section_starting_corosync.xml | 2 +-
 15 files changed, 30 insertions(+), 31 deletions(-)

diff --git a/doc/high-availability-guide/api/section_api_pacemaker.xml b/doc/high-availability-guide/api/section_api_pacemaker.xml
index 0e1874295e..7ae0173a8a 100644
--- a/doc/high-availability-guide/api/section_api_pacemaker.xml
+++ b/doc/high-availability-guide/api/section_api_pacemaker.xml
@@ -6,7 +6,7 @@ Configure Pacemaker group
- Finally, we need to create a service group to ensure that virtual IP is linked to the API services resources:
+ Finally, we need to create a service group to ensure that the virtual IP is linked to the API services resources:
 group g_services_api p_api-ip p_keystone p_glance-api p_cinder-api \
   p_neutron-server p_glance-registry p_ceilometer-agent-central
diff --git a/doc/high-availability-guide/api/section_glance_api.xml b/doc/high-availability-guide/api/section_glance_api.xml
index 9e98e4cc72..7af34bee49 100644
--- a/doc/high-availability-guide/api/section_glance_api.xml
+++ b/doc/high-availability-guide/api/section_glance_api.xml
@@ -26,7 +26,7 @@ Configure OpenStack services to use this IP address.
- Here is the documentation for installing OpenStack Image API service.
+ Here is the documentation for installing the OpenStack Image API service.
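The virtual IP that these Image API sections rely on is typically defined as an ocf:heartbeat:IPaddr2 resource. A minimal sketch is shown below, reusing the p_ip_glance-api name and the 192.168.42.103 address that appear in the hunks that follow; the netmask and monitor interval are illustrative example values, not taken from this guide:

# Example virtual IP for the OpenStack Image API; adjust ip and
# cidr_netmask to match your own management network.
primitive p_ip_glance-api ocf:heartbeat:IPaddr2 \
  params ip="192.168.42.103" cidr_netmask="24" \
  op monitor interval="30s"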
@@ -45,12 +45,12 @@ configure, and add the following cluster resources:
 This configuration creates
- p_glance-api, a resource for manage OpenStack Image API service
+ p_glance-api, a resource for managing the OpenStack Image API service
 crm configure supports batch input, so you may copy and paste the
-above into your live pacemaker configuration, and then make changes as
+above into your live Pacemaker configuration, and then make changes as
 required. For example, you may enter edit p_ip_glance-api from the crm configure menu and edit the resource to match your preferred virtual IP address.
@@ -84,7 +84,7 @@ rabbit_host = 192.168.42.102
 the highly available, virtual cluster IP address — rather than an OpenStack Image API server’s physical IP address as you normally would. For OpenStack Compute, for example, if your OpenStack Image API service IP address is
-192.168.42.104 as in the configuration explained here, you would use
+192.168.42.103 as in the configuration explained here, you would use
 the following line in your nova.conf file:
 glance_api_servers = 192.168.42.103
 You must also create the OpenStack Image API endpoint with this IP.
diff --git a/doc/high-availability-guide/api/section_neutron_server.xml b/doc/high-availability-guide/api/section_neutron_server.xml
index aeb0389f59..fa20c13e2f 100644
--- a/doc/high-availability-guide/api/section_neutron_server.xml
+++ b/doc/high-availability-guide/api/section_neutron_server.xml
@@ -7,21 +7,21 @@ Highly available OpenStack Networking server
 OpenStack Networking is the network connectivity service in OpenStack.
-Making the OpenStack Networking Server service highly available in active / passive mode involves
+Making the OpenStack Networking Server service highly available in active / passive mode involves
 the following tasks:
-Configure OpenStack Networking to listen on the VIP address,
+Configure OpenStack Networking to listen on the virtual IP address,
-managing OpenStack Networking API Server daemon with the Pacemaker cluster manager,
+Manage the OpenStack Networking API Server daemon with the Pacemaker cluster manager,
-Configure OpenStack services to use this IP address.
+Configure OpenStack services to use the virtual IP address.
@@ -40,7 +40,7 @@ Configure OpenStack services to use this IP address.
 OpenStack Networking Server resource. Connect to the Pacemaker cluster with crm configure, and add the following cluster resources:
 primitive p_neutron-server ocf:openstack:neutron-server \
- params os_password="secrete" os_username="admin" os_tenant_name="admin" \
+ params os_password="secret" os_username="admin" os_tenant_name="admin" \
 keystone_get_token_url="http://192.168.42.103:5000/v2.0/tokens" \
 op monitor interval="30s" timeout="30s"
 This configuration creates p_neutron-server, a resource for manage OpenStack Networking Server service
diff --git a/doc/high-availability-guide/ch_intro.xml b/doc/high-availability-guide/ch_intro.xml
index f7938b088e..7e6f65c636 100644
--- a/doc/high-availability-guide/ch_intro.xml
+++ b/doc/high-availability-guide/ch_intro.xml
@@ -59,7 +59,7 @@ Facility services such as power, air conditioning, and fire protection
 Active/Passive
 In an active/passive configuration, systems are set up to bring additional resources online to replace those that have failed. For example, OpenStack would write to the main database while maintaining a disaster recovery database that can be brought online in the event that the main database fails.
- Typically, an active/passive installation for a stateless service would maintain a redundant instance that can be brought online when required. Requests are load balanced using a virtual IP address and a load balancer such as HAProxy.
+ Typically, an active/passive installation for a stateless service would maintain a redundant instance that can be brought online when required. Requests may be handled using a virtual IP address to facilitate return to service with minimal reconfiguration.
 A typical active/passive installation for a stateful service maintains a replacement resource that can be brought online when required. A separate application (such as Pacemaker or Corosync) monitors these services, bringing the backup online as necessary.
diff --git a/doc/high-availability-guide/ch_network.xml b/doc/high-availability-guide/ch_network.xml
index 53a6a09ba9..1d43c34de0 100644
--- a/doc/high-availability-guide/ch_network.xml
+++ b/doc/high-availability-guide/ch_network.xml
@@ -7,7 +7,7 @@ Network controller cluster stack
- The network controller sits on the management and data network, and needs to be connected to the Internet if a VM needs access to it.
+ The network controller sits on the management and data network, and needs to be connected to the Internet if an instance will need Internet access.
 Both nodes should have the same hostname since the Networking scheduler will be aware of one node, for example a virtual router attached to a single L3 node.
diff --git a/doc/high-availability-guide/controller/section_mysql.xml b/doc/high-availability-guide/controller/section_mysql.xml
index 9468734928..6b57df38c9 100644
--- a/doc/high-availability-guide/controller/section_mysql.xml
+++ b/doc/high-availability-guide/controller/section_mysql.xml
@@ -27,7 +27,7 @@ Configure MySQL to use a data directory residing on that DRBD
-selecting and assigning a virtual IP address (VIP) that can freely
+Select and assign a virtual IP address (VIP) that can freely
 float between cluster nodes,
@@ -38,14 +38,14 @@ Configure MySQL to listen on that IP address,
-managing all resources, including the MySQL daemon itself, with
+Manage all resources, including the MySQL daemon itself, with
 the Pacemaker cluster manager.
 MySQL/Galera is an
-alternative method of Configure MySQL for high availability. It is
+alternative method of configuring MySQL for high availability. It is
 likely to become the preferred method of achieving MySQL high availability once it has sufficiently matured. At the time of writing, however, the Pacemaker/DRBD based approach remains the recommended one
@@ -125,7 +125,7 @@ creating your filesystem.
 Once the DRBD resource is running and in the primary role (and potentially still in the process of running the initial device synchronization), you may proceed with creating the filesystem for
-MySQL data. XFS is the generally recommended filesystem:
+MySQL data. XFS is the generally recommended filesystem due to its journaling, efficient allocation, and performance:
 # mkfs -t xfs /dev/drbd0
 You may also use the alternate device path for the DRBD device, which may be easier to remember as it includes the self-explanatory resource
@@ -187,7 +187,7 @@ primitive p_fs_mysql ocf:heartbeat:Filesystem \
 op stop timeout="180s" \
 op monitor interval="60s" timeout="60s"
 primitive p_mysql ocf:heartbeat:mysql \
- params additional_parameters="--bind-address=50.56.179.138"
+ params additional_parameters="--bind-address=192.168.42.101"
 config="/etc/mysql/my.cnf" \
 pid="/var/run/mysqld/mysqld.pid" \
 socket="/var/run/mysqld/mysqld.sock" \
diff --git a/doc/high-availability-guide/ha_aa_controllers/section_memcached.xml b/doc/high-availability-guide/ha_aa_controllers/section_memcached.xml
index da5f46b6d7..e73d4bf00a 100644
--- a/doc/high-availability-guide/ha_aa_controllers/section_memcached.xml
+++ b/doc/high-availability-guide/ha_aa_controllers/section_memcached.xml
@@ -7,7 +7,7 @@ Memcached
 Most of OpenStack services use an application to offer persistence and store ephemeral data (like tokens).
-Memcached is one of them and can scale-out easily without specific trick.
+Memcached is one of them and can scale out easily without any specific tricks.
 To install and configure it, read the official documentation.
Memory caching is managed by oslo-incubator so the way to use multiple memcached servers is the same for all projects. Example with two hosts:
diff --git a/doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml b/doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml
index 0708288bea..d02a745b45 100644
--- a/doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml
+++ b/doc/high-availability-guide/ha_aa_controllers/section_run_openstack_api_and_schedulers.xml
@@ -20,12 +20,12 @@ and use load balancing and virtual IP (with HAproxy & Keepalived in this set
-You use Virtual IP when configuring OpenStack Identity endpoints.
+You use virtual IPs when configuring OpenStack Identity endpoints.
-All OpenStack configuration files should refer to Virtual IP.
+All OpenStack configuration files should refer to virtual IPs.
diff --git a/doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml b/doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml
index bcddb73636..f681af2627 100644
--- a/doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml
+++ b/doc/high-availability-guide/ha_aa_db/section_ha_aa_db_mysql_galera.xml
@@ -100,7 +100,7 @@ that cluster:
-Start on 10.0.0.10 by executing the command:
+Start on the first node (IP address 10.0.0.10) by executing the command:
diff --git a/doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml b/doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml
index 0da51fc5cd..6f2c41b04c 100644
--- a/doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml
+++ b/doc/high-availability-guide/ha_aa_rabbitmq/section_configure_openstack_services_to_user_rabbitmq.xml
@@ -18,7 +18,7 @@ rabbit_max_retries=0
 Use durable queues in RabbitMQ:
 rabbit_durable_queues=false
- Use H/A queues in RabbitMQ (x-ha-policy: all):
+ Use HA queues in RabbitMQ (x-ha-policy: all):
 rabbit_ha_queues=true
 If you change the configuration from an old setup which did not use HA queues, you should interrupt the service:
 # rabbitmqctl stop_app
diff --git a/doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml b/doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml
index 84d5ed0672..caff0b2ddd 100644
--- a/doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml
+++ b/doc/high-availability-guide/network/section_highly_available_neutron_metadata_agent.xml
@@ -32,7 +32,7 @@ service
 crm configure supports batch input, so you may copy and paste the
-above into your live pacemaker configuration, and then make changes as
+above into your live Pacemaker configuration, and then make changes as
 required. Once completed, commit your configuration changes by entering commit from the crm configure menu.
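For instance, the interactive flow can look roughly like the following; the resource name p_neutron-metadata-agent is only an assumed example, so substitute whatever name you used in your own listing:

# crm configure
crm(live)configure# edit p_neutron-metadata-agent
crm(live)configure# verify
crm(live)configure# commit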
Pacemaker will then start the neutron metadata
diff --git a/doc/high-availability-guide/pacemaker/section_install_packages.xml b/doc/high-availability-guide/pacemaker/section_install_packages.xml
index 5c3301bb77..f3a7d286e6 100644
--- a/doc/high-availability-guide/pacemaker/section_install_packages.xml
+++ b/doc/high-availability-guide/pacemaker/section_install_packages.xml
@@ -6,12 +6,12 @@ Install packages
 On any host that is meant to be part of a Pacemaker cluster, you must first establish cluster communications through the Corosync messaging
-layer. This involves Install the following packages (and their
+layer. This involves installing the following packages (and their
 dependencies, which your package manager will normally install automatically):
- pacemaker Note that the crm shell should be downloaded separately.
+ pacemaker (Note that the crm shell should be downloaded separately.)
diff --git a/doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml b/doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml
index 30a2ee8ce3..19dd9b945a 100644
--- a/doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml
+++ b/doc/high-availability-guide/pacemaker/section_set_basic_cluster_properties.xml
@@ -25,10 +25,9 @@ Setting no-quorum-policy="ignore" is required in 2-node Pacem
 clusters for the following reason: if quorum enforcement is enabled, and one of the two nodes fails, then the remaining node can not establish a majority of quorum votes necessary to run services, and
-thus it is unable to take over any resources. The appropriate
-workaround is to ignore loss of quorum in the cluster. This is safe
-and necessary only in 2-node clusters. Do not set this property in
-Pacemaker clusters with more than two nodes.
+thus it is unable to take over any resources. In this case, the appropriate
+workaround is to ignore loss of quorum in the cluster. This should only be done in 2-node clusters: do not set this property in
+Pacemaker clusters with more than two nodes. Note that a two-node cluster with this setting exposes a risk of split-brain because either half of the cluster, or both, are able to become active in the event that both nodes remain online but lose communication with one another. The preferred configuration is 3 or more nodes per cluster.
diff --git a/doc/high-availability-guide/pacemaker/section_start_pacemaker.xml b/doc/high-availability-guide/pacemaker/section_start_pacemaker.xml
index c094fbd8d2..b6c603e4b2 100644
--- a/doc/high-availability-guide/pacemaker/section_start_pacemaker.xml
+++ b/doc/high-availability-guide/pacemaker/section_start_pacemaker.xml
@@ -4,7 +4,7 @@ version="5.0" xml:id="_start_pacemaker">
 Start Pacemaker
- Once the Corosync services have been started, and you have established
+ Once the Corosync services have been started and you have established
 that the cluster is communicating properly, it is safe to start pacemakerd, the Pacemaker master control process:
diff --git a/doc/high-availability-guide/pacemaker/section_starting_corosync.xml b/doc/high-availability-guide/pacemaker/section_starting_corosync.xml
index 962c6a90fa..3edb131b8a 100644
--- a/doc/high-availability-guide/pacemaker/section_starting_corosync.xml
+++ b/doc/high-availability-guide/pacemaker/section_starting_corosync.xml
@@ -7,7 +7,7 @@ Starting Corosync
 Corosync is started as a regular system service.
Depending on your -distribution, it may ship with a LSB (System V style) init script, an +distribution, it may ship with an LSB init script, an upstart job, or a systemd unit file. Either way, the service is usually named corosync:
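For example, depending on which init system your distribution uses, the service is typically started with one of the following commands (these are the common defaults; check the exact service name and init system on your platform):

# /etc/init.d/corosync start     (LSB init script)
# service corosync start         (LSB init script, via the service wrapper)
# start corosync                 (upstart job)
# systemctl start corosync       (systemd unit file)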