Remove unwanted unicode characters

This patch is necessary because of I53e999fc91336871e1c32c70745f7d7cf2e256cf.

The following unicode characters will be removed:

* “...”
* ‘...’
* ― and —

Change-Id: If11a2d4ebd98b53f9f0d077b319983735f2e4b6b
Christian Berendt 2015-01-07 11:33:00 +01:00
parent db078f7501
commit 97334b6859
23 changed files with 66 additions and 66 deletions

View File

@ -81,7 +81,7 @@
<section xml:id="section_manage-compute-users">
<title>Manage Compute users</title>
<para>Access to the Euca2ools (ec2) API is controlled by an access and secret key. The
user’s access key needs to be included in the request, and the request must be signed
user's access key needs to be included in the request, and the request must be signed
with the secret key. Upon receipt of API requests, Compute verifies the signature and
runs commands on behalf of the user.</para>
<para>To begin using Compute, you must create a user with the Identity Service.</para>
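The signing scheme described in the hunk above can be sketched roughly as follows. This is not part of the patch; the keys and endpoint are made up, and the canonicalization is simplified compared to the real EC2 signature rules:

    import base64, hashlib, hmac
    from urllib.parse import quote, urlencode

    # Hypothetical credentials; in a real deployment these come from the
    # Identity Service / euca2ools environment.
    access_key = "EC2_ACCESS_KEY"
    secret_key = b"EC2_SECRET_KEY"

    params = {
        "Action": "DescribeInstances",
        "AWSAccessKeyId": access_key,       # the user's access key travels with the request
        "SignatureMethod": "HmacSHA256",
        "SignatureVersion": "2",
        "Timestamp": "2015-01-07T11:33:00Z",
    }
    canonical_query = urlencode(sorted(params.items()), quote_via=quote)
    string_to_sign = "\n".join(
        ["GET", "ec2.example.com", "/services/Cloud/", canonical_query])

    # Only the holder of the secret key can produce this value; on receipt,
    # Compute recomputes it and compares before running the command.
    signature = base64.b64encode(
        hmac.new(secret_key, string_to_sign.encode(), hashlib.sha256).digest())
    params["Signature"] = signature.decode()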

View File

@ -700,7 +700,7 @@ physical_interface_mappings = physnet2:eth1</programlisting>
scale out on large overlay networks. This traffic is sent to the relevant agent via
encapsulation as a targeted unicast.</para>
<para>Current <emphasis>Open vSwitch</emphasis> and <emphasis>Linux Bridge</emphasis>
tunneling implementations broadcast to every agent, even if they don’t host the
tunneling implementations broadcast to every agent, even if they don't host the
corresponding network as illustrated below.</para>
<mediaobject>
<imageobject>

View File

@ -124,7 +124,7 @@
</section>
<section xml:id="monitoring-statsdlog">
<title>Statsdlog</title>
<para>Florian’s <link
<para>Florian's <link
xlink:href="https://github.com/pandemicsyn/statsdlog"
>Statsdlog</link> project increments StatsD counters
based on logged events. Like Swift-Informant, it is also

View File

@ -608,7 +608,7 @@ sinks:
from the sample values of the <literal>cpu</literal> counter, which represents
cumulative CPU time in nanoseconds. The transformer definition above defines a
scale factor (for nanoseconds, multiple CPUs, etc.), which is applied before the
transformation derives a sequence of gauge samples with unit ‘%’, from sequential
transformation derives a sequence of gauge samples with unit '%', from sequential
values of the <literal>cpu</literal> meter.</para>
<para>The definition for the disk I/O rate, which is also generated by the rate of change
transformer:</para>
@ -628,7 +628,7 @@ sinks:
<simplesect>
<title>Unit conversion transformer</title>
<para>Transformer to apply a unit conversion. It takes the volume of the meter and
multiplies it with the given ‘scale’ expression. Also supports <literal>map_from
multiplies it with the given 'scale' expression. Also supports <literal>map_from
</literal> and <literal>map_to</literal> like the rate of change transformer.</para>
<para>Sample configuration:</para>
<programlisting>transformers:
@ -664,7 +664,7 @@ sinks:
</parameter>, <parameter>user_id</parameter> and <parameter>resource_metadata</parameter>.
To aggregate by the chosen attributes, specify them in the configuration and set
which value of the attribute to take for the new sample (first to take the first
sample’s attribute, last to take the last sample’s attribute, and drop to discard
sample's attribute, last to take the last sample's attribute, and drop to discard
the attribute).</para>
<para>To aggregate 60s worth of samples by <parameter>resource_metadata</parameter>
and keep the <parameter>resource_metadata</parameter> of the latest received
@ -699,7 +699,7 @@ sinks:
meters and/or their metadata, for example:</para>
<programlisting>memory_util = 100 * memory.usage / memory</programlisting>
<para>A new sample is created with the properties described in the <literal>target
</literal> section of the transformer’s configuration. The sample’s volume is the
</literal> section of the transformer's configuration. The sample's volume is the
result of the provided expression. The calculation is performed on samples from
the same resource.</para>
<note>
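The transformer prose in the hunks above is easier to follow with a small worked example. This is illustrative only, not taken from the patch or from the Ceilometer code; the sample values are invented:

    # Rate-of-change transformer idea: derive a gauge '%' sample (cpu_util)
    # from two cumulative 'cpu' samples (CPU time in nanoseconds), using the
    # scale factor 100 / (10**9 * cpu_count) described in the text.
    def cpu_util(prev, curr, cpu_count):
        delta_ns = curr["volume"] - prev["volume"]
        delta_s = curr["timestamp"] - prev["timestamp"]   # seconds apart
        return delta_ns * 100.0 / (10 ** 9 * cpu_count) / delta_s

    prev = {"volume": 4 * 10 ** 9, "timestamp": 0}
    curr = {"volume": 8 * 10 ** 9, "timestamp": 10}
    print(cpu_util(prev, curr, cpu_count=2))    # 20.0, i.e. 20% of total CPU capacity

    # Arithmetic transformer idea, using the expression quoted in the text:
    memory, memory_usage = 2048, 512
    memory_util = 100 * memory_usage / memory   # 25.0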

View File

@ -64,7 +64,7 @@
</para>
<para>
<link xlink:href="http://www.opencompute.org/">Open Compute
Project</link>: The Open Compute Project Foundation’s mission is
Project</link>: The Open Compute Project Foundation's mission is
to design and enable the delivery of the most efficient server,
storage and data center hardware designs for scalable
computing.

View File

@ -23,7 +23,7 @@
consistently performant. This process is important because,
when a service becomes a critical part of a user's
infrastructure, the user's fate becomes wedded to the SLAs of
the cloud itself. In cloud computing, a service’s performance
the cloud itself. In cloud computing, a service's performance
will not be measured by its average speed but rather by the
consistency of its speed.</para>
<para>There are two aspects of capacity planning to consider:

View File

@ -441,7 +441,7 @@
are instances where the relationship between networking
hardware and networking software are not as tightly defined.
An example of this type of software is Cumulus Linux, which is
capable of running on a number of switch vendor’s hardware
capable of running on a number of switch vendor's hardware
solutions.</para>
<para>Some of the key considerations that should be included in
the selection of networking hardware include:</para>

View File

@ -172,7 +172,7 @@
encapsulating with VXLAN, and VLAN tags.</para>
<para>Initially, it is suggested to design at least three network
segments, the first of which will be used for access to the
cloud’s REST APIs by tenants and operators. This is generally
cloud's REST APIs by tenants and operators. This is generally
referred to as a public network. In most cases, the controller
nodes and swift proxies within the cloud will be the only
devices necessary to connect to this network segment. In some
@ -508,7 +508,7 @@
delays in operation functions such as spinning up and deleting
instances, provisioning new storage volumes and managing
network resources. Such delays could adversely affect an
application’s ability to react to certain conditions,
application's ability to react to certain conditions,
especially when using auto-scaling features. It is important
to properly design the hardware used to run the controller
infrastructure as outlined above in the Hardware Selection
@ -577,7 +577,7 @@
dedicated interfaces on the Controller and Compute
hosts.</para>
<para>When considering performance of OpenStack Object Storage, a
number of design choices will affect performance. A user’s
number of design choices will affect performance. A user's
access to the Object Storage is through the proxy services,
which typically sit behind hardware load balancers. By the
very nature of a highly resilient storage system, replication
@ -617,7 +617,7 @@
access maintained in the OpenStack Compute code, provides a
feature that removes a single point of failure when it comes
to routing, and this feature is currently missing in OpenStack
Networking. The effect of legacy networking’s multi-host
Networking. The effect of legacy networking's multi-host
functionality restricts failure domains to the host running
that instance.</para>
<para>On the other hand, when using OpenStack Networking, the

View File

@ -27,7 +27,7 @@
<para>Use case planning can seem counter-intuitive. After all, it takes
about five minutes to sign up for a server with Amazon. Amazon does not
know in advance what any given user is planning on doing with it, right?
Wrong. Amazon’s product management department spends plenty of time
Wrong. Amazon's product management department spends plenty of time
figuring out exactly what would be attractive to their typical customer
and honing the service to deliver it. For the enterprise, the planning
process is no different, but instead of planning for an external paying
@ -77,7 +77,7 @@
</listitem>
</itemizedlist>
<para>As an example of how this works, consider a business goal of using the
cloud for the company’s E-commerce website. This goal means planning for
cloud for the company's E-commerce website. This goal means planning for
applications that will support thousands of sessions per second,
variable workloads, and lots of complex and changing data. By
identifying the key metrics, such as number of concurrent transactions
@ -232,13 +232,13 @@
</listitem>
<listitem>
<para>But not too paranoid: Not every application needs the
platinum solution. Architect for different SLA’s, service
platinum solution. Architect for different SLA's, service
tiers and security levels.</para>
</listitem>
<listitem>
<para>Manage the data: Data is usually the most inflexible and
complex area of a cloud and cloud integration architecture.
Don’t short change the effort in analyzing and addressing
Don't short change the effort in analyzing and addressing
data needs.</para>
</listitem>
<listitem>
@ -269,7 +269,7 @@
</listitem>
<listitem>
<para>Keep it loose: Loose coupling, service interfaces,
separation of concerns, abstraction and well defined API’s
separation of concerns, abstraction and well defined API's
deliver flexibility.</para>
</listitem>
<listitem>

View File

@ -22,7 +22,7 @@
very sensitive to latency and needs a rapid response to
end-users. After reviewing the user, technical and operational
considerations, it is determined beneficial to build a number
of regions local to the customer’s edge. In this case rather
of regions local to the customer's edge. In this case rather
than build a few large, centralized data centers, the intent
of the architecture is to provide a pair of small data centers
in locations that are closer to the customer. In this use

View File

@ -254,7 +254,7 @@
<listitem>
<para>A requirement for vendor independence. To avoid
hardware or software vendor lock-in, the design should
not rely on specific features of a vendor’s router or
not rely on specific features of a vendor's router or
switch.</para>
</listitem>
<listitem>

View File

@ -37,7 +37,7 @@
management console, or other dashboards capable of visualizing
SNMP data, will be helpful in discovering and resolving issues
that might arise within the storage cluster. An example of
this is Ceph’s Calamari.</para>
this is Ceph's Calamari.</para>
<para>A storage-focused cloud design should include:</para>
<itemizedlist>
<listitem>
@ -273,7 +273,7 @@
nodes. In some cases, this replication can consist of
extremely large data sets. In these cases, it is recommended
to make use of back-end replication links which will not
contend with tenants’ access to data.</para>
contend with tenants' access to data.</para>
<para>As more tenants begin to access data within the cluster and
their data sets grow it will become necessary to add front-end
bandwidth to service data access requests. Adding front-end

View File

@ -90,7 +90,7 @@
<listitem>
<para>Data grids can be helpful in
deterministically answering questions around data
valuation. A fundamental challenge of today’s
valuation. A fundamental challenge of today's
information sciences is determining which data is
worth keeping, on what tier of access and performance
should it reside, and how long should it remain in a

View File

@ -214,7 +214,7 @@ No fixtures found.</computeroutput></screen>
<para>If you use Django 1.4 or later, the <command>signed_cookies</command>
back end avoids server load and scaling problems.</para>
<para>This back end stores session data in a cookie, which is
stored by the user’s browser. The back end uses a
stored by the user's browser. The back end uses a
cryptographic signing technique to ensure session data is
not tampered with during transport. This is not the same
as encryption; session data is still readable by an
@ -224,7 +224,7 @@ No fixtures found.</computeroutput></screen>
scales indefinitely as long as the quantity of session
data being stored fits into a normal cookie.</para>
<para>The biggest downside is that it places session data into
storage on the user’s machine and transports it over the
storage on the user's machine and transports it over the
wire. It also limits the quantity of session data that can
be stored.</para>
<para>See the Django <link

View File

@ -6,7 +6,7 @@
xml:id="section_objectstorage-account-reaper">
<title>Account reaper</title>
<para>In the background, the account reaper removes data from the deleted accounts.</para>
<para>A reseller marks an account for deletion by issuing a <code>DELETE</code> request on the account’s
<para>A reseller marks an account for deletion by issuing a <code>DELETE</code> request on the account's
storage URL. This action sets the <code>status</code> column of the account_stat table in the account
database and replicas to <code>DELETED</code>, marking the account's data for deletion.</para>
<para>Typically, a specific retention time or undelete are not provided. However, you can set a
@ -19,10 +19,10 @@
</para>
<para>The account reaper runs on each account server and scans the server occasionally for
account databases marked for deletion. It only fires up on the accounts for which the server
is the primary node, so that multiple account servers aren’t trying to do it simultaneously.
is the primary node, so that multiple account servers aren't trying to do it simultaneously.
Using multiple servers to delete one account might improve the deletion speed but requires
coordination to avoid duplication. Speed really is not a big concern with data deletion, and
large accounts aren’t deleted often.</para>
large accounts aren't deleted often.</para>
<para>Deleting an account is simple. For each account container, all objects are deleted and
then the container is deleted. Deletion requests that fail will not stop the overall process
but will cause the overall process to fail eventually (for example, if an object delete

View File

@ -18,7 +18,7 @@
</listitem>
<listitem>
<para><emphasis role="bold">Zones.</emphasis> Isolate data from other zones. A
failure in one zone doesn’t impact the rest of the cluster because data is
failure in one zone doesn't impact the rest of the cluster because data is
replicated across zones.</para>
</listitem>
<listitem>
@ -100,7 +100,7 @@
item separately or the entire cluster all at once.</para>
<para>Another configurable value is the replica count, which indicates how many of the
partition-device assignments make up a single ring. For a given partition number, each
replica’s device will not be in the same zone as any other replica's device. Zones can
replica's device will not be in the same zone as any other replica's device. Zones can
be used to group devices based on physical locations, power separations, network
separations, or any other attribute that would improve the availability of multiple
replicas at the same time.</para>
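The zone rule quoted above (no two replicas of a partition placed on devices in the same zone) can be illustrated with a toy assignment. This sketch is not from the patch or from Swift; the device list and assignment are invented:

    # Four devices spread over three zones.
    devices = [
        {"id": 0, "zone": 1}, {"id": 1, "zone": 1},
        {"id": 2, "zone": 2}, {"id": 3, "zone": 3},
    ]
    # Replica count of 3: one row of partition-to-device assignments per replica.
    replica2part2dev = [
        [0, 2, 3],   # replica 0, indexed by partition number
        [2, 3, 1],   # replica 1
        [3, 1, 2],   # replica 2
    ]

    for part in range(len(replica2part2dev[0])):
        zones = [devices[row[part]]["zone"] for row in replica2part2dev]
        # Every replica of this partition must sit in a distinct zone.
        assert len(set(zones)) == len(zones)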

View File

@ -37,12 +37,12 @@
</section>
<section xml:id="section_partition-assignment">
<title>Partition assignment list</title>
<para>This is a list of <literal>array(‘H’)</literal> of
<para>This is a list of <literal>array('H')</literal> of
devices ids. The outermost list contains an
<literal>array(‘H’)</literal> for each replica. Each
<literal>array(‘H’)</literal> has a length equal to
<literal>array('H')</literal> for each replica. Each
<literal>array('H')</literal> has a length equal to
the partition count for the ring. Each integer in the
<literal>array(‘H’)</literal> is an index into the
<literal>array('H')</literal> is an index into the
above list of devices. The partition list is known
internally to the Ring class as
<literal>_replica2part2dev_id</literal>.</para>
@ -54,7 +54,7 @@ part2dev_id in self._replica2part2dev_id]</programlisting></para>
account for the removal of duplicate devices. If a ring
has more replicas than devices, a partition will have more
than one replica on a device.</para>
<para><literal>array(‘H’)</literal> is used for memory
<para><literal>array('H')</literal> is used for memory
conservation as there may be millions of
partitions.</para>
</section>
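The structure described above (one array('H') per replica, indexed by partition number and holding indexes into the device list) can be sketched as follows. The sizes and values here are invented for illustration, not taken from the patch:

    from array import array

    devices = ["sda", "sdb", "sdc", "sdd"]          # the device list referred to above

    # One array('H') per replica; array('H') keeps millions of partition
    # entries compact, since each entry is a 2-byte unsigned short.
    _replica2part2dev_id = [
        array('H', [0, 1, 2, 3, 0, 1, 2, 3]),       # replica 0
        array('H', [1, 2, 3, 0, 1, 2, 3, 0]),       # replica 1
    ]

    def devices_for_partition(part):
        # Mirrors the lookup quoted in the hunk:
        # [dev_id[part] for dev_id in self._replica2part2dev_id]
        return [devices[part2dev_id[part]]
                for part2dev_id in _replica2part2dev_id]

    print(devices_for_partition(5))                 # ['sdb', 'sdc']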

View File

@ -14,7 +14,7 @@
unmounted. This will make it easier for Object Storage to work around the failure until
it has been resolved. If the drive is going to be replaced immediately, then it is just
best to replace the drive, format it, remount it, and let replication fill it up.</para>
<para>If the drive can’t be replaced immediately, then it is best to leave it
<para>If the drive can't be replaced immediately, then it is best to leave it
unmounted, and remove the drive from the ring. This will allow all the replicas
that were on that drive to be replicated elsewhere until the drive is replaced.
Once the drive is replaced, it can be re-added to the ring.</para>
@ -31,8 +31,8 @@
comes back online, replication will make sure that anything that is missing
during the downtime will get updated.</para>
<para>If the server has more serious issues, then it is probably best to remove all
of the server’s devices from the ring. Once the server has been repaired and is
back online, the server’s devices can be added back into the ring. It is
of the server's devices from the ring. Once the server has been repaired and is
back online, the server's devices can be added back into the ring. It is
important that the devices are reformatted before putting them back into the
ring as it is likely to be responsible for a different set of partitions than
before.</para>

View File

@ -346,7 +346,7 @@ coraid_repository_key = <replaceable>coraid_repository_key</replaceable></progra
</step>
<step>
<para>Create a volume.</para>
<screen><prompt>$</prompt> <userinput>cinder type-create ‘<replaceable>volume_type_name</replaceable>’</userinput></screen>
<screen><prompt>$</prompt> <userinput>cinder type-create '<replaceable>volume_type_name</replaceable>'</userinput></screen>
<para>where <replaceable>volume_type_name</replaceable> is the
name you assign the volume. You will see output similar to
the following:</para>
@ -362,7 +362,7 @@ coraid_repository_key = <replaceable>coraid_repository_key</replaceable></progra
<para>Associate the volume type with the Storage
Repository.</para>
<para>
<screen><prompt>#</prompt> <userinput>cinder type-key <replaceable>UUID</replaceable> set <replaceable>coraid_repository_key</replaceable>=‘<replaceable>FQRN</replaceable>’</userinput></screen>
<screen><prompt>#</prompt> <userinput>cinder type-key <replaceable>UUID</replaceable> set <replaceable>coraid_repository_key</replaceable>='<replaceable>FQRN</replaceable>'</userinput></screen>
</para>
<informaltable rules="all">
<thead>

View File

@ -36,7 +36,7 @@
<title>Installing using the OpenStack cinder volume installer</title>
<para>In case you want to avoid all the manual setup, you can use
Cloudbase Solutions’ installer. You can find it at <link
Cloudbase Solutions' installer. You can find it at <link
xlink:href="https://www.cloudbase.it/downloads/CinderVolumeSetup_Beta.msi">
https://www.cloudbase.it/downloads/CinderVolumeSetup_Beta.msi</link>. It
installs an independent Python environment, in order to avoid conflicts

View File

@ -348,8 +348,8 @@ pipeline = pipeline = healthcheck cache <emphasis role="bold">tempurl</emphasis>
instance, a common deployment has three replicas of each
object. The health of that object can be measured by
checking if each replica is in its proper place. If only 2
of the 3 is in place the object’s health can be said to be
at 66.66%, where 100% would be perfect. A single object’s
of the 3 is in place the object's health can be said to be
at 66.66%, where 100% would be perfect. A single object's
health, especially an older object, usually reflects the
health of that entire partition the object is in. If you
make enough objects on a distinct percentage of the
@ -583,7 +583,7 @@ X-Trans-Id: tx602634cf478546a39b1be-0051e6bc7a</computeroutput></screen>
The name of each file uploaded is appended to the specified
<literal>swift-url</literal>. So, you can upload directly to the root of container with
a URL like: <uri>https://swift-cluster.example.com/v1/AUTH_account/container/</uri>
Optionally, you can include an object prefix to better separate different users’
Optionally, you can include an object prefix to better separate different users'
uploads, such as:
<uri>https://swift-cluster.example.com/v1/AUTH_account/container/object_prefix</uri>
</para>

View File

@ -30,7 +30,7 @@
by the router, the DVR agent populates the ARP entry. By pre-populating ARP
entries across compute nodes, the distributed virtual router ensures traffic
goes to the correct destination. The integration bridge on a particular
compute node identifies the incoming frame’s source MAC address as a
compute node identifies the incoming frame's source MAC address as a
DVR-unique MAC address because every compute node l2 agent knows all
configured unique MAC addresses for DVR used in the cloud. The agent
replaces the DVR-unique MAC Address with the green subnet interface MAC

View File

@ -116,7 +116,7 @@
<a href="http://docs.openstack.org/developer/os-cloud-config/">
os-cloud-config
</a>
Provides a set of tools to perform up-front configuration for OpenStack
- Provides a set of tools to perform up-front configuration for OpenStack
clouds, currently used primarily by TripleO.
</dd>
</dl>
@ -134,118 +134,118 @@
<a href="http://docs.openstack.org/developer/oslo.concurrency/">
oslo.concurrency
</a>
Provides support for managing external processes and
- Provides support for managing external processes and
task synchronization.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/oslo.config/">
oslo.config
</a>
Parses config options from command line and config files.
- Parses config options from command line and config files.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/oslo.db/">
oslo.db
</a>
Provides database connectivity.
- Provides database connectivity.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/oslo.i18n/">
oslo.i18n
</a>
Internationalization and translation utilities.
- Internationalization and translation utilities.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/oslo.log/">
oslo.log
</a>
A logging configuration library.
- A logging configuration library.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/oslo.messaging/">
oslo.messaging
</a>
Provides inter-process communication.
- Provides inter-process communication.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/oslo.middleware/">
oslo.middleware
</a>
A collection of WSGI middleware for web service development.
- A collection of WSGI middleware for web service development.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/oslo.rootwrap/">
oslo.rootwrap
</a>
Provides fine filtering of shell commands to run as root.
- Provides fine filtering of shell commands to run as root.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/oslo.serialization/">
oslo.serialization
</a>
Provides serialization functionality with special handling
- Provides serialization functionality with special handling
for some common types.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/oslo.utils/">
oslo.utils
</a>
Provides library of various common low-level utility modules.
- Provides library of various common low-level utility modules.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/oslo.vmware/">
oslo.vmware
</a>
Provides common functionality required by VMware drivers in
- Provides common functionality required by VMware drivers in
several projects.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/oslosphinx/">
oslosphinx
</a>
Provides theme and extension support for Sphinx documentation.
- Provides theme and extension support for Sphinx documentation.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/oslotest/">
oslotest
</a>
Provides a unit test and fixture framework.
- Provides a unit test and fixture framework.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/cliff/">
cliff
</a>
Builds command-line programs in Python.
- Builds command-line programs in Python.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/pbr/">
pbr
</a>
Manages setuptools packaging needs in a consistent way.
- Manages setuptools packaging needs in a consistent way.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/pycadf/">
PyCADF
</a>
Creates CADF events to capture cloud-related events.
- Creates CADF events to capture cloud-related events.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/stevedore/">
stevedore
</a>
Manages dynamic plug-ins for Python applications.
- Manages dynamic plug-ins for Python applications.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/taskflow/">
TaskFlow
</a>
Makes task execution easy, consistent, and reliable.
- Makes task execution easy, consistent, and reliable.
</dd>
<dd>
<a href="http://docs.openstack.org/developer/tooz/">
Tooz
</a>
Distributed primitives like group membership protocol, lock service and leader elections.
- Distributed primitives like group membership protocol, lock service and leader elections.
</dd>
</dl>
</div>