Removes passive voice from chap 6, arch guide.

Also includes some other minor changes in chap 6.
Closes-Bug: #1427938

Change-Id: Ic608390287faa4b4dd81c54f4a3fe9b403f3c1a7
Deepti Navale 2015-03-20 15:52:39 +11:00
parent 1d81a08f27
commit e28e458750
5 changed files with 67 additions and 67 deletions

View File

@@ -6,7 +6,7 @@
xml:id="arch-design-architecture-multiple-site">
<?dbhtml stop-chunking?>
<title>Architecture</title>
<para>This graphic is a high level diagram of a multiple site OpenStack
<para>This graphic is a high level diagram of a multi-site OpenStack
architecture. Each site is an OpenStack cloud but it may be necessary to
architect the sites on different versions. For example, if the second
site is intended to be a replacement for the first site, they would be
@@ -104,15 +104,15 @@
dependent on a number of factors. One major dependency to consider
is storage. When designing the storage system, the storage mechanism
needs to be determined. Once the storage type is determined, how it
will be accessed is critical. For example, we recommend that
is accessed is critical. For example, we recommend that
storage should use a dedicated network. Another concern is how
the storage is configured to protect the data. For example, the
recovery point objective (RPO) and the recovery time objective
(RTO). How quickly can the recovery from a fault be completed, will
determine how often the replication of data be required. Ensure that
(RTO). How quickly the recovery from a fault can be completed
determines how often the replication of data is required. Ensure that
enough storage is allocated to support the data protection
strategy.</para>
<para>Networking decisions include the encapsulation mechanism that will
<para>Networking decisions include the encapsulation mechanism that can
be used for the tenant networks, how large the broadcast domains
should be, and the contracted SLAs for the interconnects.</para>
</section>
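The relationship between the RPO and replication frequency described above can be sketched with simple arithmetic. This is an illustrative calculation, not OpenStack code; the function name and the figures in the usage example are assumptions made for the sketch.

```python
# Illustrative sketch: relate the recovery point objective (RPO) to
# how often data must be replicated between sites. Worst-case data
# loss is roughly the time since the last completed replication run,
# so the gap between runs plus a run's duration must fit in the RPO.
def max_replication_interval(rpo_minutes, replication_run_minutes):
    interval = rpo_minutes - replication_run_minutes
    if interval <= 0:
        raise ValueError("replication cannot complete within the RPO")
    return interval
```

For example, a 60-minute RPO with replication runs that take 10 minutes to complete leaves at most 50 minutes between runs, which in turn sizes the storage needed to stage the replicated data.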

View File

@@ -16,12 +16,12 @@
customization of the service catalog for their site either
manually or via customization of the deployment tools in
use.</para>
<para>Note that, as of the Icehouse release, documentation for
<note><para>As of the Icehouse release, documentation for
implementing this feature is in progress. See this bug for
more information:
<link
xlink:href="https://bugs.launchpad.net/openstack-manuals/+bug/1340509">https://bugs.launchpad.net/openstack-manuals/+bug/1340509</link>.
</para>
</para></note>
<section xml:id="licensing">
<title>Licensing</title>
<para>Multi-site OpenStack deployments present additional
@@ -175,17 +175,17 @@
that user documentation is accessible by users of the cloud
infrastructure to ensure they are given sufficient information
to help them leverage the cloud. As an example, by default
OpenStack will schedule instances on a compute node
OpenStack schedules instances on a compute node
automatically. However, when multiple regions are available,
it is left to the end user to decide in which region to
schedule the new instance. The dashboard will present the user with
schedule the new instance. The dashboard presents the user with
the first region in your configuration. The API and CLI tools
will not execute commands unless a valid region is specified.
do not execute commands unless a valid region is specified.
It is therefore important to provide documentation to your
users describing the region layout as well as calling out that
quotas are region-specific. If a user reaches his or her quota
in one region, OpenStack will not automatically build new
instances in another. Documenting specific examples will help
in one region, OpenStack does not automatically build new
instances in another. Documenting specific examples helps
users understand how to operate the cloud, thereby reducing
calls and tickets filed with the help desk.</para></section>
</section>
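The quota behavior described above can be made concrete with a short sketch. This is not OpenStack code: it is a hypothetical client-side helper (the function name, region names, and dict-based quota bookkeeping are all invented for illustration) showing why the end user must choose a fallback region themselves when a quota is reached.

```python
# Hypothetical helper: OpenStack does not automatically build instances
# in another region when a quota is reached, so a client wrapper like
# this must pick a region with remaining quota explicitly.
def pick_region(preferred, quota, used, instances_needed=1):
    # quota/used map region name -> instance counts (illustrative only).
    candidates = [preferred] + sorted(r for r in quota if r != preferred)
    for region in candidates:
        if quota[region] - used.get(region, 0) >= instances_needed:
            return region
    raise RuntimeError("quota exhausted in every region")
```

For example, a user whose quota is fully consumed in `RegionOne` would be steered to `RegionTwo` rather than receiving a quota error, which is exactly the decision the documentation must teach users to make by hand.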

View File

@@ -10,8 +10,8 @@
xml:id="prescriptive-example-multisite">
<?dbhtml stop-chunking?>
<title>Prescriptive examples</title>
<para>Based on the needs of the intended workloads, there are
multiple ways to build a multi-site OpenStack installation.
<para>There are multiple ways to build a multi-site OpenStack
installation, based on the needs of the intended workloads.
Below are example architectures based on different
requirements. These examples are meant as a reference, and not
a hard and fast rule for deployments. Use the previous
@@ -31,32 +31,31 @@
The intent is to scale by creating more copies of the
application in closer proximity to the users that need it
most, in order to ensure faster response time to user
requests. This provider will deploy two datacenters at each of
requests. This provider deploys two datacenters at each of
the four chosen regions. The implications of this design are
based around the method of placing copies of resources in each
of the remote regions. Swift objects, Glance images, and block
storage will need to be manually replicated into each region.
storage need to be manually replicated into each region.
This may be beneficial for some systems, such as a
content service, where only some of the content needs to exist
in some but not all regions. A centralized Keystone is
recommended to ensure authentication and that access to the
API endpoints is easily manageable.</para>
<para>Installation of an automated DNS system such as Designate is
highly recommended. Unless an external Dynamic DNS system is
available, application administrators will need a way to
<para>It is recommended that you install an automated DNS system such
as Designate. Application administrators need a way to
manage the mapping of which application copy exists in each
region and how to reach it. Designate will assist by making
the process automatic and by populating the records in the
each region's zone.</para>
region and how to reach it, unless an external Dynamic DNS system
is available. Designate assists by making the process automatic
and by populating the records in each region's zone.</para>
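The mapping Designate automates can be sketched in a few lines. This is illustrative only, not Designate's API: the function, the zone naming scheme, and the domain are all invented to show the kind of per-region record-keeping an application administrator would otherwise maintain by hand.

```python
# Hypothetical sketch of the record-keeping Designate automates: a map
# of which copy of the application lives in each region's DNS zone.
def region_records(app_name, regions, domain="example.com"):
    # Zone and record names here are invented for illustration.
    return {
        region: "{}.{}.{}.".format(app_name, region.lower(), domain)
        for region in regions
    }
```

Populating one such record per region, and updating it when a copy moves, is the manual work that an automated DNS system removes.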
<para>Telemetry for each region is also deployed, as each region
may grow differently or be used at a different rate.
Ceilometer will run to collect each region's metrics from each
Ceilometer collects each region's metrics from each
of the controllers and report them back to a central location.
This is useful both to the end user and the administrator of
the OpenStack environment. The end user will find this method
useful, in that it is possible to determine if certain
useful, as it makes it possible to determine if certain
locations are experiencing higher load than others, and take
appropriate action. Administrators will also benefit by
appropriate action. Administrators also benefit by
possibly being able to forecast growth per region, rather than
expanding the capacity of all regions simultaneously,
therefore maximizing the cost-effectiveness of the multi-site
@@ -64,18 +63,18 @@
<para>One of the key decisions of running this sort of
infrastructure is whether or not to provide a redundancy
model. Two types of redundancy and high availability models in
this configuration will be implemented. The first type
this configuration can be implemented. The first type
revolves around the availability of the central OpenStack
components. Keystone will be made highly available in three
central data centers that will host the centralized OpenStack
components. Keystone can be made highly available in three
central data centers that host the centralized OpenStack
components. This prevents the loss of any one region from
causing a service outage. It also has the added benefit of
being able to run a central storage repository as a primary
cache for distributing content to each of the regions.</para>
<para>The second redundancy topic is that of the edge data center
itself. A second data center in each of the edge regional
locations will house a second region near the first. This
ensures that the application will not suffer degraded
locations houses a second region near the first. This
ensures that the application does not suffer degraded
performance in terms of latency and availability.</para>
<para>This figure depicts the solution designed to have both a
centralized set of core data centers for OpenStack services
@@ -111,7 +110,7 @@
dashboard, Block Storage and Compute running locally in
each of the three regions. The other services,
Identity, Orchestration, Telemetry, Image Service and
Object Storage will be
Object Storage can be
installed centrally&mdash;with nodes in each of the regions
providing a redundant OpenStack Controller plane
throughout the globe.</para>
@@ -122,7 +121,7 @@
</listitem>
<listitem>
<para>OpenStack Object Storage for serving static objects
such as images will be used to ensure that all images
such as images can be used to ensure that all images
are standardized across all the regions, and
replicated on a regular basis.</para>
</listitem>
@@ -132,14 +131,13 @@
deployed instances.</para>
</listitem>
<listitem>
<para>A geo-redundant load balancing service will be used
<para>A geo-redundant load balancing service can be used
to service the requests from the customers based on
their origin.</para>
</listitem>
</itemizedlist>
<para>An autoscaling heat template will used to deploy the
application in the three regions. This template will
include:</para>
<para>An autoscaling heat template can be used to deploy the
application in the three regions. This template includes:</para>
<itemizedlist>
<listitem>
<para>Web Servers, running Apache.</para>
@@ -154,16 +152,19 @@
instance failure.</para>
</listitem>
</itemizedlist>
<para>Another autoscaling Heat template will be used to deploy a
<para>Another autoscaling Heat template can be used to deploy a
distributed MongoDB shard over the three locations&mdash;with the
option of storing required data on a globally available swift
container. according to the usage and load on the database
server&mdash;additional shards will be provisioned according to
container. Depending on the usage and load on the database
server, additional shards can be provisioned according to
the thresholds defined in Telemetry.</para>
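The scaling decision those Telemetry thresholds drive can be sketched as follows. This is a minimal illustration, not Heat or Ceilometer code: the function name, the notion of "load per shard", and the numbers in the usage example are all assumptions made for the sketch.

```python
import math

# Illustrative only: the decision a Telemetry threshold drives when an
# autoscaling template provisions additional MongoDB shards.
def shards_required(current_load, load_per_shard, current_shards):
    needed = math.ceil(current_load / load_per_shard)
    # Scale out only; removing shards is a separate, riskier operation.
    return max(current_shards, needed)
```

A load of 950 units against a threshold of 300 units per shard would call for four shards, while a load safely under the threshold leaves the current shard count untouched.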
<para>The reason that three regions were selected here was because of
<!-- <para>The reason that three regions were selected here was because of
the fear of having abnormal load on a single region in the
event of a failure. Two data center would have been sufficient
had the requirements been met.</para>
had the requirements been met.</para>-->
<para>Three regions are selected here to avoid abnormal load
on a single region in the event of a failure. Two data centers
would have been sufficient had the requirements been met.</para>
<para>Orchestration is used because of the built-in functionality of
autoscaling and auto healing in the event of increased load.
Additional configuration management tools, such as Puppet or
@@ -175,7 +176,7 @@
external tools were not needed.</para>
<para>
OpenStack Object Storage is used here to serve as a back end for
the Image Service since was the most suitable solution for a
the Image Service since it is the most suitable solution for a
globally distributed storage solution&mdash;with its own
replication mechanism. Home grown solutions could also have
been used including the handling of replication&mdash;but were not
@@ -193,13 +194,12 @@
<section xml:id="location-local-services"><title>Location-local service</title>
<para>A common use for a multi-site deployment of OpenStack is
for creating a Content Delivery Network. An application that
uses a location-local architecture will require low network
uses a location-local architecture requires low network
latency and proximity to the user, in order to provide an
optimal user experience, in addition to reducing the cost of
bandwidth and transit, since the content resides on sites
closer to the customer, instead of a centralized content store
that would require utilizing higher cost cross country
links.</para>
that requires utilizing higher cost cross-country links.</para>
<para>This architecture usually includes a geo-location component
that places user requests at the closest possible node. In
this scenario, 100% redundancy of content across every site is
@@ -212,7 +212,7 @@
<para>In this example, the application utilizing this multi-site
OpenStack install that is location aware would launch web
server or content serving instances on the compute cluster in
each site. Requests from clients will first be sent to a
each site. Requests from clients are first sent to a
global services load balancer that determines the location of
the client, then routes the request to the closest OpenStack
site where the application completes the request.</para>
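The routing step of a global services load balancer can be illustrated with a toy example. This is not a real load balancer: the function, the use of great-circle distance as the locality metric, and the coordinates in the usage example are all assumptions for the sketch.

```python
import math

# Toy global-services load balancer: route a client to the nearest
# site by great-circle (haversine) distance.
def nearest_site(client, sites):
    """client: (lat, lon) in degrees; sites: dict name -> (lat, lon)."""
    def haversine(a, b):
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2)
             * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))  # kilometres
    return min(sites, key=lambda name: haversine(client, sites[name]))
```

A client near Paris would be routed to a hypothetical Paris site rather than one in Sydney, after which the chosen OpenStack site serves the request locally.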

View File

@@ -10,8 +10,8 @@
with regard to designing a multi-site OpenStack
implementation. An OpenStack cloud can be designed in a
variety of ways to handle individual application needs. A
multi-site deployment will have additional challenges compared
to single site installations and will therefore be a more
multi-site deployment has additional challenges compared
to single site installations and therefore is a more
complex solution.</para>
<para>When determining capacity options be sure to take into
account not just the technical issues, but also the economic
@@ -22,7 +22,7 @@
includes parameters such as bandwidth, latency, whether or not
a link is dedicated, and any business policies applied to the
connection. The capability and number of the links between
sites will determine what kind of options may be available for
sites determine what kind of options are available for
deployment. For example, if two sites have a pair of
high-bandwidth links available between them, it may be wise to
configure a separate storage replication network between the
@@ -35,7 +35,7 @@
tenant private networks across the secondary link using
overlay networks with a third party mapping the site overlays
to each other.</para>
<para>The capacity requirements of the links between sites will be
<para>The capacity requirements of the links between sites are
driven by application behavior. If the latency of the links is
too high, certain applications that use a large number of
small packets, for example RPC calls, may encounter issues
@@ -54,7 +54,7 @@
the Icehouse release, OpenStack Networking was not capable of managing
tunnel IDs across installations. This means that if one site
runs out of IDs, but the other does not, that tenant's network
will be unable to reach the other site.</para>
is unable to reach the other site.</para>
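The exhaustion problem can be modeled in a few lines. This is a simplified sketch, not Neutron's allocator: the function and the 12-bit-style ID range are assumptions chosen only to show why independently allocating sites can diverge.

```python
# Simplified model of per-site tunnel ID allocation: each site hands
# out segmentation IDs independently, so one site can run out while
# another still has IDs free, stranding a cross-site tenant network.
def allocate_tunnel_id(in_use, id_range=range(1, 4095)):
    for tunnel_id in id_range:
        if tunnel_id not in in_use:
            in_use.add(tunnel_id)
            return tunnel_id
    raise RuntimeError("tunnel ID space exhausted at this site")
```

Because each site keeps its own `in_use` set in this model, nothing reconciles the two allocators: the same tenant can hold ID 7 at one site while that ID is unavailable at the other.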
<para>Capacity can take other forms as well. The ability for a
region to grow depends on scaling out the number of available
compute nodes. This topic is covered in greater detail in the
@@ -93,18 +93,18 @@
actions to an API endpoint or in the dashboard.</para>
<para>Load balancing is another common issue with multi-site
installations. While it is still possible to run HAproxy
instances with Load-Balancer-as-a-Service, these will be local
instances with Load-Balancer-as-a-Service, these are local
to a specific region. Some applications may be able to cope
with this via internal mechanisms. Others, however, may
require the implementation of an external system including
global services load balancers or anycast-advertised
DNS.</para>
<para>Depending on the storage model chosen during site design,
storage replication and availability will also be a concern
storage replication and availability are also a concern
for end-users. If an application is capable of understanding
regions, then it is possible to keep the object storage system
separated by region. In this case, users who want to have an
object available to more than one region will need to do the
object available to more than one region need to do the
cross-site replication themselves. With a centralized swift
proxy, however, the user may need to benchmark the replication
timing of the Object Storage back end. Benchmarking allows the
@@ -133,7 +133,7 @@
not created. Some applications may need to be tuned to account
for this effect. Block Storage does not currently have a
method for replicating data across multiple regions, so
applications that depend on available block storage will need
applications that depend on available block storage need
to manually cope with this limitation by creating duplicate
block storage entries in each region.</para></section>
<section xml:id="security-multi-site"><title>Security</title>
@@ -142,8 +142,8 @@
to be secure. In a multi-site installation the use of a
non-private connection between sites may be required. This may
mean that traffic would be visible to third parties and, in
cases where an application requires security, this issue will
require mitigation. Installing a VPN or encrypted connection
cases where an application requires security, this issue
requires mitigation. Installing a VPN or encrypted connection
between sites is recommended in such instances.</para>
<para>Another security consideration with regard to multi-site
deployments is Identity. Authentication in a multi-site

View File

@@ -76,13 +76,13 @@
into the infrastructure. If the OpenStack Object Storage is used as
a back end for the Image Service, it is possible to create repositories of
consistent images across multiple sites. Having central
endpoints with multiple storage nodes will allow for
consistent centralized storage for each and every site.</para>
<para>Not using a centralized object store will increase
operational overhead so that a consistent image library can be
maintained. This could include development of a replication
mechanism to handle the transport of images and the changes to
the images across multiple sites.</para></section>
endpoints with multiple storage nodes allows consistent centralized
storage for each and every site.</para>
<para>Not using a centralized object store increases the
operational overhead of maintaining a consistent image library. This
could include development of a replication mechanism to handle
the transport of images and the changes to the images across
multiple sites.</para></section>
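The replication mechanism such a home-grown approach would need can be sketched simply. This is illustrative pseudologic, not Glance or swift code: the dict-based stores, checksum comparison, and image name in the usage example are all invented for the sketch.

```python
# Sketch of the replication a home-grown (non-centralized) image
# library would need: push any image whose checksum differs from the
# central copy out to each site. Stores are plain dicts here.
def sync_images(central, site_stores):
    copied = 0
    for store in site_stores:
        for name, checksum in central.items():
            if store.get(name) != checksum:
                store[name] = checksum   # stand-in for re-uploading
                copied += 1
    return copied
```

Every image added or changed centrally must be re-pushed to every site on each pass, which is the ongoing operational cost a centralized object store avoids.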
<section xml:id="high-availability-multi-site"><title>High availability</title>
<para>If high availability is a requirement to provide continuous
infrastructure operations, a basic requirement of high
@@ -107,7 +107,7 @@
operational cost of maintaining the sites.</para>
<para>The ability to maintain object availability in both sites
has significant implications on the object storage design and
implementation. It will also have a significant impact on the
implementation. It also has a significant impact on the
WAN network design between the sites.</para>
<para>Connecting more than two sites increases the challenges and
adds more complexity to the design considerations. Multi-site
@@ -175,7 +175,7 @@
unavailable.</para>
</listitem>
<listitem>
<para>It is important to understand what will happen to the
<para>It is important to understand what happens to the
replication of objects and data between the sites when
a site goes down. If this causes queues to start
building up, consider how long these queues can
@@ -212,7 +212,7 @@
<title>Authentication between sites</title>
<para>Ideally it is best to have a single authentication domain
and not need a separate implementation for each and every
site. This will, of course, require an authentication
site. This, of course, requires an authentication
mechanism that is highly available and distributed to ensure
continuous operation. Authentication server locality is also
something that might be needed and should be planned