Merge "Corrects typos and provide clarification"
This commit is contained in:
commit
e8ea4fdf01
@@ -6,7 +6,7 @@
 
 <title>Configure Pacemaker group</title>
 
-<para>Finally, we need to create a service <literal>group</literal> to ensure that virtual IP is linked to the API services resources:</para>
+<para>Finally, we need to create a service <literal>group</literal> to ensure that the virtual IP is linked to the API services resources:</para>
 <screen>group g_services_api p_api-ip p_keystone p_glance-api p_cinder-api \
 p_neutron-server p_glance-registry p_ceilometer-agent-central</screen>
 </section>
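For reference, a minimal sketch of how this hunk's group definition could be applied interactively through the crm shell; the resource names come from the hunk itself, and the `commit` step is the usual way to activate pending changes:

    # crm configure
    crm(live)configure# group g_services_api p_api-ip p_keystone p_glance-api p_cinder-api \
        p_neutron-server p_glance-registry p_ceilometer-agent-central
    crm(live)configure# commit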
@@ -26,7 +26,7 @@ Configure OpenStack services to use this IP address.
 </listitem>
 </itemizedlist>
 <note>
-<para>Here is the <link xlink:href="http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_glance.html">documentation</link> for installing OpenStack Image API service.</para>
+<para>Here is the <link xlink:href="http://docs.openstack.org/trunk/install-guide/install/apt/content/ch_glance.html">documentation</link> for installing the OpenStack Image API service.</para>
 </note>
 <section xml:id="_add_openstack_image_api_resource_to_pacemaker">
 
@@ -45,12 +45,12 @@ configure</literal>, and add the following cluster resources:</para>
 <para>This configuration creates</para>
 <itemizedlist>
 <listitem>
-<para><literal>p_glance-api</literal>, a resource for manage OpenStack Image API service
+<para><literal>p_glance-api</literal>, a resource for managing OpenStack Image API service
 </para>
 </listitem>
 </itemizedlist>
 <para><literal>crm configure</literal> supports batch input, so you may copy and paste the
-above into your live pacemaker configuration, and then make changes as
+above into your live Pacemaker configuration, and then make changes as
 required. For example, you may enter <literal>edit p_ip_glance-api</literal> from the
 <literal>crm configure</literal> menu and edit the resource to match your preferred
 virtual IP address.</para>
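The VIP resource that `edit p_ip_glance-api` refers to is not shown in this hunk. A hedged sketch of what such a primitive typically looks like, assuming the guide's 192.168.42.103 address and a /24 netmask (both would need adjusting to the local environment):

    primitive p_ip_glance-api ocf:heartbeat:IPaddr2 \
        params ip="192.168.42.103" cidr_netmask="24" \
        op monitor interval="30s"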
@@ -84,7 +84,7 @@ rabbit_host = 192.168.42.102</programlisting>
 the highly available, virtual cluster IP address — rather than an
 OpenStack Image API server’s physical IP address as you normally would.</para>
 <para>For OpenStack Compute, for example, if your OpenStack Image API service IP address is
-192.168.42.104 as in the configuration explained here, you would use
+192.168.42.103 as in the configuration explained here, you would use
 the following line in your <filename>nova.conf</filename> file:</para>
 <programlisting language="ini">glance_api_servers = 192.168.42.103</programlisting>
 <para>You must also create the OpenStack Image API endpoint with this IP.</para>
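The endpoint mentioned in the last line would be registered against the VIP rather than a node address. A sketch using the era's keystone CLI, with the service ID left as a placeholder; 9292 is the standard OpenStack Image API port:

    # keystone endpoint-create --region RegionOne --service-id $GLANCE_SERVICE_ID \
        --publicurl "http://192.168.42.103:9292" \
        --adminurl "http://192.168.42.103:9292" \
        --internalurl "http://192.168.42.103:9292"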
@@ -7,21 +7,21 @@
 <title>Highly available OpenStack Networking server</title>
 
 <para>OpenStack Networking is the network connectivity service in OpenStack.
-Making the OpenStack Networking Server service highly available in active / passive mode involves</para>
+Making the OpenStack Networking Server service highly available in active / passive mode involves the following tasks:</para>
 <itemizedlist>
 <listitem>
 <para>
-Configure OpenStack Networking to listen on the VIP address,
+Configure OpenStack Networking to listen on the virtual IP address,
 </para>
 </listitem>
 <listitem>
 <para>
-managing OpenStack Networking API Server daemon with the Pacemaker cluster manager,
+Manage the OpenStack Networking API Server daemon with the Pacemaker cluster manager,
 </para>
 </listitem>
 <listitem>
 <para>
-Configure OpenStack services to use this IP address.
+Configure OpenStack services to use the virtual IP address.
 </para>
 </listitem>
 </itemizedlist>
@@ -40,7 +40,7 @@ Configure OpenStack services to use this IP address.
 OpenStack Networking Server resource. Connect to the Pacemaker cluster with <literal>crm
 configure</literal>, and add the following cluster resources:</para>
 <programlisting>primitive p_neutron-server ocf:openstack:neutron-server \
-params os_password="secrete" os_username="admin" os_tenant_name="admin" \
+params os_password="secret" os_username="admin" os_tenant_name="admin" \
 keystone_get_token_url="http://192.168.42.103:5000/v2.0/tokens" \
 op monitor interval="30s" timeout="30s"</programlisting>
 <para>This configuration creates <literal>p_neutron-server</literal>, a resource for manage OpenStack Networking Server service</para>
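For the server to answer on the VIP, OpenStack Networking itself must listen on it, as the earlier task list says. A hedged sketch of the relevant options (the file path is assumed; 9696 is the standard OpenStack Networking API port):

    # /etc/neutron/neutron.conf
    bind_host = 192.168.42.103
    bind_port = 9696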
@@ -59,7 +59,7 @@ Facility services such as power, air conditioning, and fire protection
 <title>Active/Passive</title>
 
 <para>In an active/passive configuration, systems are set up to bring additional resources online to replace those that have failed. For example, OpenStack would write to the main database while maintaining a disaster recovery database that can be brought online in the event that the main database fails.</para>
-<para>Typically, an active/passive installation for a stateless service would maintain a redundant instance that can be brought online when required. Requests are load balanced using a virtual IP address and a load balancer such as HAProxy.</para>
+<para>Typically, an active/passive installation for a stateless service would maintain a redundant instance that can be brought online when required. Requests may be handled using a virtual IP address to facilitate return to service with minimal reconfiguration required.</para>
 <para>A typical active/passive installation for a stateful service maintains a replacement resource that can be brought online when required. A separate application (such as Pacemaker or Corosync) monitors these services, bringing the backup online as necessary.</para>
 </section>
 <section xml:id="aa-intro">
@@ -7,7 +7,7 @@
 
 <title>Network controller cluster stack</title>
 
-<para>The network controller sits on the management and data network, and needs to be connected to the Internet if a VM needs access to it.</para>
+<para>The network controller sits on the management and data network, and needs to be connected to the Internet if an instance will need access to the Internet.</para>
 <note>
 <para>Both nodes should have the same hostname since the Networking scheduler will be
 aware of one node, for example a virtual router attached to a single L3 node.</para>
@@ -27,7 +27,7 @@ Configure MySQL to use a data directory residing on that DRBD
 </listitem>
 <listitem>
 <para>
-selecting and assigning a virtual IP address (VIP) that can freely
+Select and assign a virtual IP address (VIP) that can freely
 float between cluster nodes,
 </para>
 </listitem>
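A sketch of what the floating VIP from this list might look like as a Pacemaker primitive; the resource name and netmask are assumptions, and the address matches the `--bind-address` used later in this commit:

    primitive p_ip_mysql ocf:heartbeat:IPaddr2 \
        params ip="192.168.42.101" cidr_netmask="24" \
        op monitor interval="30s"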
@@ -38,14 +38,14 @@ Configure MySQL to listen on that IP address,
 </listitem>
 <listitem>
 <para>
-managing all resources, including the MySQL daemon itself, with
+Manage all resources, including the MySQL daemon itself, with
 the Pacemaker cluster manager.
 </para>
 </listitem>
 </itemizedlist>
 <note>
 <para><link xlink:href="http://galeracluster.com/">MySQL/Galera</link> is an
-alternative method of Configure MySQL for high availability. It is
+alternative method of configuring MySQL for high availability. It is
 likely to become the preferred method of achieving MySQL high
 availability once it has sufficiently matured. At the time of writing,
 however, the Pacemaker/DRBD based approach remains the recommended one
@@ -125,7 +125,7 @@ creating your filesystem.
 <para>Once the DRBD resource is running and in the primary role (and
 potentially still in the process of running the initial device
 synchronization), you may proceed with creating the filesystem for
-MySQL data. XFS is the generally recommended filesystem:</para>
+MySQL data. XFS is the generally recommended filesystem due to its journaling, efficient allocation, and performance:</para>
 <screen><prompt>#</prompt> <userinput>mkfs -t xfs /dev/drbd0</userinput></screen>
 <para>You may also use the alternate device path for the DRBD device, which
 may be easier to remember as it includes the self-explanatory resource
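The alternate device path mentioned here follows DRBD's by-resource naming. Assuming the DRBD resource is named `mysql`, the equivalent command would be:

    # mkfs -t xfs /dev/drbd/by-res/mysql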
@@ -187,7 +187,7 @@ primitive p_fs_mysql ocf:heartbeat:Filesystem \
 op stop timeout="180s" \
 op monitor interval="60s" timeout="60s"
 primitive p_mysql ocf:heartbeat:mysql \
-params additional_parameters="--bind-address=50.56.179.138"
+params additional_parameters="--bind-address=192.168.42.101"
 config="/etc/mysql/my.cnf" \
 pid="/var/run/mysqld/mysqld.pid" \
 socket="/var/run/mysqld/mysqld.sock" \
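These primitives are normally tied together so the filesystem, VIP, and daemon move as one and only run where DRBD is primary. A hedged sketch with assumed resource names (`p_ip_mysql`, `ms_drbd_mysql`):

    group g_mysql p_ip_mysql p_fs_mysql p_mysql
    colocation c_mysql_on_drbd inf: g_mysql ms_drbd_mysql:Master
    order o_drbd_before_mysql inf: ms_drbd_mysql:promote g_mysql:start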
@@ -7,7 +7,7 @@
 <title>Memcached</title>
 
 <para>Most of OpenStack services use an application to offer persistence and store ephemeral data (like tokens).
-Memcached is one of them and can scale-out easily without specific trick.</para>
+Memcached is one of them and can scale-out easily without any specific tricks required.</para>
 <para>To install and configure it, read the <link xlink:href="http://code.google.com/p/memcached/wiki/NewStart">official documentation</link>.</para>
 <para>Memory caching is managed by oslo-incubator so the way to use multiple memcached servers is the same for all projects.</para>
 <para>Example with two hosts:</para>
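The two-host example itself falls outside this hunk. For illustration only, an oslo-style option listing two memcached servers might look like the following, reusing the guide's host addresses and memcached's default port 11211:

    memcached_servers = 192.168.42.101:11211,192.168.42.102:11211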
@@ -20,12 +20,12 @@ and use load balancing and virtual IP (with HAProxy & Keepalived in this set
 <itemizedlist>
 <listitem>
 <para>
-You use Virtual IP when configuring OpenStack Identity endpoints.
+You use virtual IPs when configuring OpenStack Identity endpoints.
 </para>
 </listitem>
 <listitem>
 <para>
-All OpenStack configuration files should refer to Virtual IP.
+All OpenStack configuration files should refer to virtual IPs.
 </para>
 </listitem>
 </itemizedlist>
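A sketch of registering an OpenStack Identity endpoint against a VIP, assuming the 192.168.42.103 address used elsewhere in this commit and the standard 5000/35357 Identity ports; the service ID is a placeholder:

    # keystone endpoint-create --region RegionOne --service-id $IDENTITY_SERVICE_ID \
        --publicurl "http://192.168.42.103:5000/v2.0" \
        --adminurl "http://192.168.42.103:35357/v2.0" \
        --internalurl "http://192.168.42.103:5000/v2.0"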
@@ -100,7 +100,7 @@ that cluster:</para>
 
 <listitem>
 <para>
-Start on <literal>10.0.0.10</literal> by executing the command:
+Start on the first node having IP address <literal>10.0.0.10</literal> by executing the command:
 </para>
 </listitem>
 </orderedlist>
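The command itself falls outside this hunk. Bootstrapping the first Galera node conventionally uses an empty cluster address; a hedged sketch, assuming the wsrep settings live in the node's MySQL configuration:

    # on 10.0.0.10 only: an empty gcomm:// bootstraps a new cluster
    wsrep_cluster_address="gcomm://"
    # service mysql start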
@@ -18,7 +18,7 @@
 <programlisting>rabbit_max_retries=0</programlisting>
 <para>Use durable queues in RabbitMQ:</para>
 <programlisting>rabbit_durable_queues=false</programlisting>
-<para>Use H/A queues in RabbitMQ (x-ha-policy: all):</para>
+<para>Use HA queues in RabbitMQ (x-ha-policy: all):</para>
 <programlisting>rabbit_ha_queues=true</programlisting>
 <para>If you change the configuration from an old setup which did not use HA queues, you should interrupt the service:</para>
 <screen><prompt>#</prompt> <userinput>rabbitmqctl stop_app</userinput>
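The `stop_app` shown above is usually followed by a reset and restart so the node rejoins with the new queue policy. A sketch of the full sequence; note that `reset` wipes the node's state, so it is only appropriate when re-declaring queues is acceptable:

    # rabbitmqctl stop_app
    # rabbitmqctl reset
    # rabbitmqctl start_app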
@@ -32,7 +32,7 @@ service
 </listitem>
 </itemizedlist>
 <para><literal>crm configure</literal> supports batch input, so you may copy and paste the
-above into your live pacemaker configuration, and then make changes as
+above into your live Pacemaker configuration, and then make changes as
 required.</para>
 <para>Once completed, commit your configuration changes by entering <literal>commit</literal>
 from the <literal>crm configure</literal> menu. Pacemaker will then start the neutron metadata
@@ -6,12 +6,12 @@
 <title>Install packages</title>
 <para>On any host that is meant to be part of a Pacemaker cluster, you must
 first establish cluster communications through the Corosync messaging
-layer. This involves Install the following packages (and their
+layer. This involves installing the following packages (and their
 dependencies, which your package manager will normally install
 automatically):</para>
 <itemizedlist>
 <listitem>
-<para><literal>pacemaker</literal> Note that the crm shell should be downloaded separately.
+<para><literal>pacemaker</literal> (Note that the crm shell should be downloaded separately.)
 </para>
 </listitem>
 <listitem>
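On a Debian/Ubuntu style system the installation step might look like the following; the `crmsh` package name for the separately shipped crm shell is an assumption that varies by distribution:

    # apt-get install pacemaker corosync
    # apt-get install crmsh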
@@ -25,10 +25,9 @@ Setting <literal>no-quorum-policy="ignore"</literal> is required in 2-node Pacem
 clusters for the following reason: if quorum enforcement is enabled,
 and one of the two nodes fails, then the remaining node can not
 establish a <emphasis>majority</emphasis> of quorum votes necessary to run services, and
-thus it is unable to take over any resources. The appropriate
-workaround is to ignore loss of quorum in the cluster. This is safe
-and necessary <emphasis>only</emphasis> in 2-node clusters. Do not set this property in
-Pacemaker clusters with more than two nodes.
+thus it is unable to take over any resources. In this case, the appropriate
+workaround is to ignore loss of quorum in the cluster. This should <emphasis>only</emphasis> be done in 2-node clusters: do not set this property in
+Pacemaker clusters with more than two nodes. Note that a two-node cluster with this setting exposes a risk of split-brain because either half of the cluster, or both, are able to become active in the event that both nodes remain online but lose communication with one another. The preferred configuration is 3 or more nodes per cluster.
 </para>
 </callout>
 <callout arearefs="CO2-2">
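Setting this property is a one-liner from the crm shell; a sketch:

    crm(live)configure# property no-quorum-policy="ignore"
    crm(live)configure# commit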
@@ -4,7 +4,7 @@
 version="5.0"
 xml:id="_start_pacemaker">
 <title>Start Pacemaker</title>
-<para>Once the Corosync services have been started, and you have established
+<para>Once the Corosync services have been started and you have established
 that the cluster is communicating properly, it is safe to start
 <literal>pacemakerd</literal>, the Pacemaker master control process:</para>
 <itemizedlist>
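Depending on the init system, starting the master control process might look like one of the following (service name assumed to be `pacemaker`):

    # /etc/init.d/pacemaker start
    # service pacemaker start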
@@ -7,7 +7,7 @@
 <title>Starting Corosync</title>
 
 <para>Corosync is started as a regular system service. Depending on your
-distribution, it may ship with a LSB (System V style) init script, an
+distribution, it may ship with an LSB init script, an
 upstart job, or a systemd unit file. Either way, the service is
 usually named <literal>corosync</literal>:</para>
 <itemizedlist>
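Matching the three mechanisms named above, the start command would typically be one of:

    # /etc/init.d/corosync start      (LSB init script)
    # start corosync                  (upstart)
    # systemctl start corosync        (systemd)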