Fix typos and spelling errors in config-ref, HA/image guides

Another round of typo and spelling fixes, in particular for the
config-reference, HA guide, and image guide.

Change-Id: Ibc3661e132a0ea4010a5a61f71ef5209ae6655b6
Closes-Bug: #1356970
Sandro Tosi 2014-08-14 17:14:36 +01:00
parent 8041a8d0cf
commit bcec02416e
27 changed files with 46 additions and 46 deletions

View File

@@ -79,7 +79,7 @@
</listitem>
<listitem>
<para><emphasis>RBD</emphasis>. Use as a block device.
-The Linux kernel RBD (rados block device) driver
+The Linux kernel RBD (RADOS block device) driver
allows striping a Linux block device over multiple
distributed object store data objects. It is
compatible with the KVM RBD image.</para>
@@ -101,7 +101,7 @@
<para><emphasis>librados</emphasis>, and its related C/C++ bindings.</para>
</listitem>
<listitem>
-<para><emphasis>rbd and QEMU-RBD</emphasis>. Linux
+<para><emphasis>RBD and QEMU-RBD</emphasis>. Linux
kernel and QEMU block devices that stripe data
across multiple objects.</para>
</listitem>

View File

@@ -87,7 +87,7 @@
</orderedlist>
<section xml:id="install-naviseccli">
<title>Install NaviSecCLI</title>
-<para>On Ubuntu x64, download the NaviSecCLI deb package from <link xlink:href="https://github.com/emc-openstack/naviseccli">EMC's OpenStack Github</link> web site.
+<para>On Ubuntu x64, download the NaviSecCLI deb package from <link xlink:href="https://github.com/emc-openstack/naviseccli">EMC's OpenStack GitHub</link> web site.
</para>
<para>For all the other variants of Linux, download the NaviSecCLI rpm package from EMC's support web site for <link xlink:href="https://support.emc.com/downloads/36656_VNX2-Series">VNX2 series</link> or <link xlink:href="https://support.emc.com/downloads/12781_VNX1-Series">VNX1 series</link>. Login is required.
</para>

View File

@@ -156,7 +156,7 @@
<title>Register the node</title>
<step><para>On the Compute node or Volume node <literal>1.1.1.1</literal>, do
the following (assume <literal>10.10.61.35</literal>
-is the iscsi target):</para>
+is the iSCSI target):</para>
<screen><prompt>#</prompt> <userinput>/etc/init.d/open-iscsi start</userinput>
<prompt>#</prompt> <userinput>iscsiadm -m discovery -t st -p 10.10.61.35</userinput>
<prompt>#</prompt> <userinput>cd /etc/iscsi</userinput>

View File

@@ -18,7 +18,7 @@
communicate with an HNAS array. This utility package is available in the
physical media distributed with the hardware or it can be copied from
the SMU (<filename>/usr/local/bin/ssc</filename>).</para>
-<para>Platform: Ubuntu 12.04LTS or newer.</para>
+<para>Platform: Ubuntu 12.04 LTS or newer.</para>
</section>
<section xml:id="hds-hnas-supported-operations">
<title>Supported operations</title>

View File

@@ -18,7 +18,7 @@
utility package from the HDS support site (<link
xlink:href="https://HDSSupport.hds.com"
>https://HDSSupport.hds.com</link>).</para>
-<para>Platform: Ubuntu 12.04LTS or newer.</para>
+<para>Platform: Ubuntu 12.04 LTS or newer.</para>
</section>
<section xml:id="hds-supported-operations">
<title>Supported operations</title>

View File

@@ -6,14 +6,14 @@
<title>LVM</title>
<para>The default volume back-end uses local volumes managed by LVM.</para>
<para>This driver supports different transport protocols to attach
-volumes, currently ISCSI and ISER.</para>
+volumes, currently iSCSI and iSER.</para>
<para>
Set the following in your
<filename>cinder.conf</filename>, and use the following options to
-configure for ISCSI transport:
+configure for iSCSI transport:
</para>
<programlisting language="ini">volume_driver=cinder.volume.drivers.lvm.ISCSIDriver</programlisting>
-<para>and for the ISER transport:</para>
+<para>and for the iSER transport:</para>
<programlisting language="ini">volume_driver=cinder.volume.drivers.lvm.ISERDriver</programlisting>
<xi:include href="../../../common/tables/cinder-lvm.xml"/>
</section>

View File

@@ -9,7 +9,7 @@
commodity servers.</para>
<para>Sheepdog scales to several hundred nodes, and has powerful
virtual disk management features like snapshot, cloning, rollback,
-thin proisioning.</para>
+thin provisioning.</para>
<para>More information can be found on <link
xlink:href="http://sheepdog.github.io/sheepdog/">Sheepdog Project</link>.</para>
<para>This driver enables use of Sheepdog through Qemu/KVM.</para>

View File

@@ -42,7 +42,7 @@ Generally a large timeout is required for Windows instances, but you may want to
</para></section>
<section xml:id="xen-firewall">
<title>Firewall</title>
-<para>If using nova-network, IPTables is supported:
+<para>If using nova-network, iptables is supported:
<programlisting language="ini">firewall_driver = nova.virt.firewall.IptablesFirewallDriver</programlisting>
Alternately, doing the isolation in Dom0:
<programlisting language="ini">firewall_driver = nova.virt.xenapi.firewall.Dom0IptablesFirewallDriver</programlisting>

View File

@@ -306,7 +306,7 @@ aggregate_image_properties_isolation_separator=.</programlisting>
set of instances. To take advantage of this filter,
the requester must pass a scheduler hint, using
<literal>different_host</literal> as the key and a
-list of instance uuids as the value. This filter is
+list of instance UUIDs as the value. This filter is
the opposite of the <literal>SameHostFilter</literal>.
Using the <command>nova</command> command-line tool,
use the <literal>--hint</literal> flag. For
@@ -528,7 +528,7 @@ isolated_images=342b492c-128f-4a42-8d3a-c5088cf27d13,ebd267a6-ca86-4d6c-9a0e-bd1
instance in a set of instances. To take advantage of
this filter, the requester must pass a scheduler hint,
using <literal>same_host</literal> as the key and a
-list of instance uuids as the value. This filter is
+list of instance UUIDs as the value. This filter is
the opposite of the
<literal>DifferentHostFilter</literal>. Using the
<command>nova</command> command-line tool, use the

View File

@@ -875,8 +875,8 @@ trusty-server-cloudimg-amd64-disk1.vmdk</userinput></screen>
<literal>flat_network_bridge</literal> value in the
<filename>nova.conf</filename> file. The default value is
<literal>br100</literal>. If you specify another value,
-the new value must be a valid linux bridge identifier that
-adheres to linux bridge naming conventions.
+the new value must be a valid Linux bridge identifier that
+adheres to Linux bridge naming conventions.
<para>All VM NICs are attached to this port group.</para>
<para>Ensure that the flat interface of the node that runs
the <systemitem class="service">nova-network</systemitem>
@@ -915,7 +915,7 @@ trusty-server-cloudimg-amd64-disk1.vmdk</userinput></screen>
OpenStack Block Storage service. The VMware VMDK driver for
OpenStack Block Storage is recommended and should be used for
managing volumes based on vSphere data stores. For more information
-about the VMware VMDK driver, see <link xlink:href="http://docs.openstack.org/trunk/config-reference/content/vmware-vmdk-driver.html">VMware VMDK Driver</link>. Also an iscsi volume driver
+about the VMware VMDK driver, see <link xlink:href="http://docs.openstack.org/trunk/config-reference/content/vmware-vmdk-driver.html">VMware VMDK Driver</link>. Also an iSCSI volume driver
provides limited support and can be used only for
attachments.</para>
</section>

View File

@@ -25,7 +25,7 @@
<prompt>$</prompt> <userinput>NOVA_SOURCES=$(mktemp -d)</userinput></screen>
</step>
<step>
-<para>Get the source from github. The example assumes
+<para>Get the source from GitHub. The example assumes
the master branch is used. Amend the URL to match
the version being used:</para>
<screen><prompt>$</prompt> <userinput>wget -qO "$NOVA_ZIPBALL" https://github.com/openstack/nova/archive/master.zip</userinput>

View File

@@ -144,23 +144,23 @@
in read-write mode.</para>
</step>
<step>
-<para>On the compute host, find and record the uuid of
+<para>On the compute host, find and record the UUID of
this ISO SR:</para>
<screen><prompt>#</prompt> <userinput>xe host-list</userinput></screen>
</step>
<step>
-<para>Locate the uuid of the NFS ISO library:</para>
+<para>Locate the UUID of the NFS ISO library:</para>
<screen><prompt>#</prompt> <userinput>xe sr-list content-type=iso</userinput></screen>
</step>
<step>
-<para>Set the uuid and configuration. Even if an NFS
+<para>Set the UUID and configuration. Even if an NFS
mount point is not local, you must specify
<literal>local-storage-iso</literal>.</para>
<screen><prompt>#</prompt> <userinput>xe sr-param-set uuid=[iso sr uuid] other-config:i18n-key=local-storage-iso</userinput></screen>
</step>
<step>
-<para>Make sure the host-uuid from <literal>xe
-pbd-list</literal> equals the uuid of the host
+<para>Make sure the host-UUID from <literal>xe
+pbd-list</literal> equals the UUID of the host
you found previously:</para>
<screen><prompt>#</prompt> <userinput>xe sr-uuid=[iso sr uuid]</userinput></screen>
</step>

View File

@@ -62,7 +62,7 @@ Load-Balancer-as-a-Service related settings.</para>
<section xml:id="networking-options-fwaas">
<title>Firewall-as-a-Service driver</title>
<para>Use the following options in the <filename>fwaas_driver.ini</filename>
-file for the FwaaS driver.</para>
+file for the FWaaS driver.</para>
<xi:include href="../../common/tables/neutron-fwaas.xml"/>
</section>
@@ -76,7 +76,7 @@ Load-Balancer-as-a-Service related settings.</para>
<section xml:id="networking-options-lbaas">
<title>Load-Balancer-as-a-Service agent</title>
<para>Use the following options in the <filename>lbaas_agent.ini</filename>
-file for the LbaaS agent.</para>
+file for the LBaaS agent.</para>
<xi:include href="../../common/tables/neutron-lbaas.xml"/>
<xi:include href="../../common/tables/neutron-lbaas_haproxy.xml"/>
<xi:include href="../../common/tables/neutron-lbaas_netscalar.xml"/>
@@ -154,7 +154,7 @@ for your driver to change security group settings.</para>
<section xml:id="networking-options-varmour">
<title>vArmour Firewall-as-a-Service driver</title>
<para>Use the following options in the <filename>l3_agent.ini</filename>
-file for the vArmour FwaaS driver.</para>
+file for the vArmour FWaaS driver.</para>
<xi:include href="../../common/tables/neutron-varmour.xml"/>
</section>

View File

@@ -28,7 +28,7 @@
</listitem>
<listitem>
<para>Another workaround is to decrease the virtual
-ethernet devices' MTU. Set the
+Ethernet devices' MTU. Set the
<option>network_device_mtu</option> option to
1450 in the <filename>neutron.conf</filename>
file, and set all guest virtual machines' MTU to

View File

@@ -436,7 +436,7 @@ Queried 2619 objects for dispersion reporting, 7s, 0 retries
Sample represents 1.00% of the object partition space
</programlisting>
<para>Alternatively, the dispersion report can also be output
-in json format. This allows it to be more easily consumed
+in JSON format. This allows it to be more easily consumed
by third-party utilities:</para>
<screen><prompt>$</prompt> <userinput>swift-dispersion-report -j</userinput>
<computeroutput>{"object": {"retries:": 0, "missing_two": 0, "copies_found": 7863, "missing_one": 0,
@@ -473,7 +473,7 @@ Sample represents 1.00% of the object partition space
descriptive body.</para>
<para>Quotas are subject to several limitations: eventual
consistency, the timeliness of the cached container_info
-(60 second ttl by default), and it is unable to reject
+(60 second TTL by default), and it is unable to reject
chunked transfer uploads that exceed the quota (though
once the quota is exceeded, new chunked transfers are
refused).</para>

View File

@@ -6,7 +6,7 @@
xml:id="swift-general-service-configuration">
<title>Object Storage general service configuration</title>
<para>
-Most Object Storage services fall into two categories, Object Storage's wsgi servers
+Most Object Storage services fall into two categories, Object Storage's WSGI servers
and background daemons.
</para>
<para>

View File

@@ -2363,7 +2363,7 @@
A platform that provides a suite of desktop environments
that users may log in to receive a desktop experience from
any location. This may provide general use, development, or
-even homogenous testing environments.
+even homogeneous testing environments.
</para>
</glossdef>
</glossentry>

View File

@@ -5,16 +5,16 @@
version="5.0"
xml:id="ha-aa-haproxy">
-<title>HAproxy nodes</title>
+<title>HAProxy nodes</title>
<para>HAProxy is a very fast and reliable solution offering high availability, load balancing, and proxying
for TCP and HTTP-based applications. It is particularly suited for web sites crawling under very high loads
while needing persistence or Layer 7 processing. Supporting tens of thousands of connections is clearly
realistic with todays hardware.</para>
-<para>For installing HAproxy on your nodes, you should consider its <link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://haproxy.1wt.eu/#docs">official documentation</link>.
+<para>For installing HAProxy on your nodes, you should consider its <link xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="http://haproxy.1wt.eu/#docs">official documentation</link>.
Also, you have to consider that this service should not be a single point of failure, so you need at least two
-nodes running HAproxy.</para>
-<para>Here is an example for HAproxy configuration file:</para>
+nodes running HAProxy.</para>
+<para>Here is an example for HAProxy configuration file:</para>
<programlisting>global
chroot /var/lib/haproxy
daemon
@@ -154,5 +154,5 @@ listen swift_proxy_cluster
option tcpka
server controller1 10.0.0.1:8080 check inter 2000 rise 2 fall 5
server controller2 10.0.0.2:8080 check inter 2000 rise 2 fall 5</programlisting>
-<para>After each change of this file, you should restart HAproxy.</para>
+<para>After each change of this file, you should restart HAProxy.</para>
</chapter>

View File

@@ -12,7 +12,7 @@
<para>All OpenStack projects have an API service for controlling all the resources in the Cloud.
In Active / Active mode, the most common setup is to scale-out these services on at least two nodes
-and use load balancing and virtual IP (with HAproxy &amp; Keepalived in this setup).</para>
+and use load balancing and virtual IP (with HAProxy &amp; Keepalived in this setup).</para>
<para>
<emphasis role="strong">Configure API OpenStack services</emphasis>
</para>

View File

@@ -12,7 +12,7 @@ We have to consider that while exchanges and bindings will survive the loss of i
and their messages will not because a queue and its contents is located on one node. If we lose this node,
we also lose the queue.</para>
<para>We consider that we run (at least) two RabbitMQ servers. To build a broker, we need to ensure that all nodes
-have the same erlang cookie file. To do so, stop RabbitMQ everywhere and copy the cookie from rabbit1 server
+have the same Erlang cookie file. To do so, stop RabbitMQ everywhere and copy the cookie from rabbit1 server
to other server(s):</para>
<screen><prompt>#</prompt> <userinput>scp /var/lib/rabbitmq/.erlang.cookie \
root@rabbit2:/var/lib/rabbitmq/.erlang.cookie</userinput></screen>

View File

@@ -26,7 +26,7 @@
</note></para>
<para>A full treatment of Oz is beyond the scope of this
document, but we will provide an example. You can find
-additional examples of Oz template files on github at
+additional examples of Oz template files on GitHub at
<link
xlink:href="https://github.com/rackerjoe/oz-image-build/tree/master/templates"
>rackerjoe/oz-image-build/templates</link>. Here's how

View File

@@ -171,7 +171,7 @@ the console to complete the installation process.</computeroutput></screen>
<computeroutput>:1</computeroutput></screen>
<para>In the example above, the guest
<literal>centos-6.4</literal> uses VNC display
-<literal>:1</literal>, which corresponds to tcp port
+<literal>:1</literal>, which corresponds to TCP port
<literal>5901</literal>. You should be able to connect
to a VNC client running on your local machine to display
:1 on the remote machine and step through the installation

View File

@@ -97,7 +97,7 @@
<para>Rackspace Cloud Builders maintains a list of pre-built images from various
distributions (Red Hat, CentOS, Fedora, Ubuntu). Links to these images can be found at
<link xlink:href="https://github.com/rackerjoe/oz-image-build"
->rackerjoe/oz-image-build on Github</link>.</para>
+>rackerjoe/oz-image-build on GitHub</link>.</para>
</section>
<section xml:id="windows-images">
<title>Microsoft Windows images</title>

View File

@@ -160,7 +160,7 @@
guests).</para>
<para>If you cannot install
<literal>cloud-initramfs-tools</literal>, Robert
-Plestenjak has a github project called <link
+Plestenjak has a GitHub project called <link
xlink:href="https://github.com/flegmatik/linux-rootfs-resize"
>linux-rootfs-resize</link> that contains scripts
that update a ramdisk by using

View File

@@ -304,10 +304,10 @@ kernel <replaceable>...</replaceable> console=tty0 console=ttyS0,115200n8</programlisting>
</simplesect>
<simplesect>
<title>Clean up (remove MAC address details)</title>
-<para>The operating system records the MAC address of the virtual ethernet card in locations
+<para>The operating system records the MAC address of the virtual Ethernet card in locations
such as <filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename> and
<filename>/etc/udev/rules.d/70-persistent-net.rules</filename> during the instance
-process. However, each time the image boots up, the virtual ethernet card will have a
+process. However, each time the image boots up, the virtual Ethernet card will have a
different MAC address, so this information must be deleted from the configuration
file.</para>
<para>There is a utility called <command>virt-sysprep</command>, that performs various

View File

@@ -176,9 +176,9 @@
</simplesect>
<simplesect>
<title>Clean up (remove MAC address details)</title>
-<para>The operating system records the MAC address of the virtual ethernet card in locations
+<para>The operating system records the MAC address of the virtual Ethernet card in locations
such as <filename>/etc/udev/rules.d/70-persistent-net.rules</filename> during the
-instance process. However, each time the image boots up, the virtual ethernet card will
+instance process. However, each time the image boots up, the virtual Ethernet card will
have a different MAC address, so this information must be deleted from the configuration
file.</para>
<para>There is a utility called <command>virt-sysprep</command>, that performs various

View File

@@ -28,7 +28,7 @@
<para><link xlink:href="http://www.cloudbase.it/cloud-init-for-windows-instances/"
>cloudbase-init</link> is a Windows port of cloud-init that should be installed
inside of the guest. The <link xlink:href="https://github.com/cloudbase/cloudbase-init"
->source code</link> is available on github.</para>
+>source code</link> is available on GitHub.</para>
</simplesect>
<simplesect>
<title>Jordan Rinke's OpenStack Windows resources</title>