Fixed grammatical errors found in doc/admin-guide-cloud

Change-Id: Ie7afcadfa1b3b987f856a71441248b5e67af6025
Jackie Heitzer 2014-05-20 18:45:11 -04:00
parent f671ef84d7
commit 512e54b68b
13 changed files with 32 additions and 32 deletions

View File

@@ -11,7 +11,7 @@
<title xml:id="ts_block_config">Troubleshoot the Block Storage
configuration</title>
<para>Most Block Storage errors are caused by incorrect volume
-configurations that result in volume creation failues. To resolve
+configurations that result in volume creation failures. To resolve
these failures, review these logs:</para>
<itemizedlist>
<listitem><para><systemitem class="service">cinder-api</systemitem>
@@ -128,7 +128,7 @@
<literal>state_path</literal>. The result is a
file-tree
<filename>/var/lib/cinder/volumes/</filename>.</para>
-<para>While this should all be handled by the installer,
+<para>While the installer should handle all this,
it can go wrong. If you have trouble creating volumes
and this directory does not exist you should see an
error message in the <systemitem class="service"
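
For reference, the file tree described in this hunk comes from two cinder.conf options; a minimal sketch, assuming the defaults of the era (state_path and volumes_dir are the option names, the values shown are assumptions):

state_path = /var/lib/cinder
volumes_dir = $state_path/volumes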

View File

@@ -58,7 +58,7 @@
</itemizedlist>
</para>
<para>Compute uses a messaging-based, <literal>shared nothing</literal> architecture. All
-major components exist on multiple servers, including the compute,volume, and network
+major components exist on multiple servers, including the compute, volume, and network
controllers, and the object store or image service. The state of the entire system is
stored in a database. The cloud controller communicates with the internal object store
using HTTP, but it communicates with the scheduler, network controller, and volume

View File

@@ -57,7 +57,7 @@
<literal>192.168.2.3:2181</literal>.</para>
<para>The following values in the <filename>/etc/nova/nova.conf</filename> file (on every
node) are required for the <systemitem>ZooKeeper</systemitem> driver:</para>
<programlisting language="ini"># Driver for the ServiceGroup serice
<programlisting language="ini"># Driver for the ServiceGroup service
servicegroup_driver="zk"
[zookeeper]
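
Assembling the lines shown in these two hunks, a minimal working ZooKeeper servicegroup configuration in /etc/nova/nova.conf reads:

servicegroup_driver="zk"
[zookeeper]
address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181"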
@@ -82,7 +82,7 @@ address="192.168.2.1:2181,192.168.2.2:2181,192.168.2.3:2181"</programlisting>
information.</para>
<para>The following values in the <filename>/etc/nova/nova.conf</filename> file (on every
node) are required for the <systemitem>memcache</systemitem> driver:</para>
<programlisting language="ini"># Driver for the ServiceGroup serice
<programlisting language="ini"># Driver for the ServiceGroup service
servicegroup_driver="mc"
# Memcached servers. Use either a list of memcached servers to use for caching (list value),
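
The memcache listing is truncated here; a sketch of the complete configuration, assuming the standard memcached_servers list option and the default memcached port 11211:

servicegroup_driver="mc"
memcached_servers=192.168.2.1:11211,192.168.2.2:11211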

View File

@@ -348,7 +348,7 @@ echo 'Extra user data here'</computeroutput></screen>
service, so you should not need to modify it.</para>
<para>Hosts access the service at <literal>169.254.169.254:80</literal>, and this is
translated to <literal>metadata_host:metadata_port</literal> by an iptables rule
-established by the <systemitem class="service">nova-network</systemitem> servce. In
+established by the <systemitem class="service">nova-network</systemitem> service. In
multi-host mode, you can set <option>metadata_host</option> to
<literal>127.0.0.1</literal>.</para>
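
The translation mentioned above is an ordinary DNAT rule; a hedged sketch of what nova-network sets up, assuming metadata_host is 10.0.0.1 and the default metadata_port of 8775 (nova manages its own iptables chains, which are omitted here):

# illustrative only; nova-network installs an equivalent rule in its own chains
iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 -j DNAT --to-destination 10.0.0.1:8775
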
<para>To enable instances to reach the metadata

View File

@@ -199,7 +199,7 @@
</listitem>
<listitem>
<para>From the cloud controller to the compute node, we also have iptables/
-ebtables rules which allow access from the cloud controller to the running
+ebtables rules, which allow access from the cloud controller to the running
instance.</para>
</listitem>
<listitem>

View File

@@ -34,7 +34,7 @@
</listitem>
<listitem>
<para><systemitem class="service">nova-compute</systemitem>. Responsible for
-managing virtual machines. It loads a Service object which exposes the
+managing virtual machines. It loads a Service object, which exposes the
public methods on ComputeManager through Remote Procedure Call
(RPC).</para>
</listitem>

View File

@@ -393,8 +393,8 @@ external_network_bridge = br-ex-2</computeroutput></screen>
</section>
<section xml:id="adv_cfg_l3_metering_agent_driver">
<title>L3 metering driver</title>
-<para>A driver which implements the metering abstraction needs to be configured.
-Currently there is only one implementation which is based on iptables.</para>
+<para>You must configure any driver that implements the metering abstraction.
+Currently the only available implementation uses iptables for metering.</para>
<para><programlisting language="ini">driver = neutron.services.metering.drivers.iptables.iptables_driver.IptablesMeteringDriver</programlisting></para>
</section>
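
With the driver configured, metering is driven by label and rule objects; a hedged CLI sketch (the label name and CIDR are placeholders, and these commands assume the metering extension is enabled):

$ neutron meter-label-create ingress-traffic
$ neutron meter-label-rule-create ingress-traffic 10.0.0.0/24 --direction ingress
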
<section xml:id="adv_cfg_l3_metering_service_driver">

View File

@@ -179,7 +179,7 @@ export OS_AUTH_URL=http://controlnode:5000/v2.0/</programlisting>
</procedure>
<procedure>
<title>Manage agents in neutron deployment</title>
-<para>Every agent which supports these extensions will
+<para>Every agent that supports these extensions will
register itself with the neutron server when it
starts up.</para>
<step>
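
The registration described in this hunk can be checked from the client; for example (output columns vary by release):

$ neutron agent-list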

View File

@@ -10,7 +10,7 @@
<section xml:id="section_provider_networks">
<title>Provider networks</title>
<para>Networks can be categorized as either "tenant networks" or "provider networks". Tenant
-networks are created by normal users, and details about how they are physically realized
+networks are created by normal users and details about how they are physically realized
are hidden from those users. Provider networks are created with administrative
credentials, specifying the details of how the network is physically realized, usually
to match some existing network in the data center.</para>
@@ -23,7 +23,7 @@
<para>The provider extension allows administrators to explicitly manage the relationship
between Networking virtual networks and underlying physical mechanisms such as VLANs and
tunnels. When this extension is supported, Networking client users with administrative
-privileges see additional provider attributes on all virtual networks, and are able to
+privileges see additional provider attributes on all virtual networks and are able to
specify these attributes in order to create provider networks.</para>
<para>The provider extension is supported by the Open vSwitch and Linux Bridge plug-ins.
Configuration of these plug-ins requires familiarity with this extension.</para>
@@ -74,7 +74,7 @@
<td>A virtual network implemented as packets on a specific physical network
containing IEEE 802.1Q headers with a specific VID field value. VLAN
networks sharing the same physical network are isolated from each other
-at L2, and can even have overlapping IP address spaces. Each distinct
+at L2 and can even have overlapping IP address spaces. Each distinct
physical network supporting VLAN networks is treated as a separate VLAN
trunk, with a distinct space of VID values. Valid VID values are 1
through 4094.</td>
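
To make the table concrete, an administrator could create a VLAN provider network with these attributes like so (the network name and VID value are placeholders):

$ neutron net-create provider-vlan --provider:network_type vlan --provider:physical_network default --provider:segmentation_id 1000
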
@@ -150,7 +150,7 @@
<tr>
<td>provider:physical_network</td>
<td>String</td>
-<td>If a physical network named "default" has been configured, and if
+<td>If a physical network named "default" has been configured and if
provider:network_type is <literal>flat</literal> or
<literal>vlan</literal>, then "default" is used.</td>
<td>The name of the physical network over which the virtual network is
@@ -230,14 +230,14 @@
<para>The Networking API provides abstract L2 network segments that are decoupled from the
technology used to implement the L2 network. Networking includes an API extension that
provides abstract L3 routers that API users can dynamically provision and configure.
-These Networking routers can connect multiple L2 Networking networks, and can also
+These Networking routers can connect multiple L2 Networking networks and can also
provide a gateway that connects one or more private L2 networks to a shared external
network. For example, a public network for access to the
Internet. See the <citetitle>OpenStack Configuration
Reference</citetitle> for details on common models of
deploying Networking L3 routers.</para>
<para>The L3 router provides basic NAT capabilities on gateway ports that uplink the router
-to external networks. This router SNATs all traffic by default, and supports floating
+to external networks. This router SNATs all traffic by default and supports floating
IPs, which creates a static one-to-one mapping from a public IP on the external network
to a private IP on one of the other subnets attached to the router. This allows a tenant
to selectively expose VMs on private networks to other hosts on the external network
@@ -432,7 +432,7 @@
<td>
<screen><prompt>$</prompt> <userinput>neutron router-gateway-set router1 &lt;ext-net-id&gt;</userinput></screen>
<para>The router obtains an interface with the gateway_ip address of the
-subnet, and this interface is attached to a port on the L2
+subnet and this interface is attached to a port on the L2
Networking network associated with the subnet. The router also gets
a gateway interface to the specified external network. This provides
SNAT connectivity to the external network as well as support for
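
The floating IP mapping described above corresponds to two client calls; a sketch, with EXT_NET, FLOATINGIP_ID, and PORT_ID as placeholders:

$ neutron floatingip-create EXT_NET
$ neutron floatingip-associate FLOATINGIP_ID PORT_ID
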
@@ -911,7 +911,7 @@
<td>Boolean</td>
<td>False</td>
<td>When set to True makes this firewall rule visible to tenants other than
-its owner, and it can be used in firewall policies not owned by its
+its owner and it can be used in firewall policies not owned by its
tenant.</td>
</tr>
<tr>
@@ -1023,7 +1023,7 @@
<td>Boolean</td>
<td>False</td>
<td>When set to True makes this firewall policy visible to tenants other
-than its owner, and can be used to associate with firewalls not owned by
+than its owner and can be used to associate with firewalls not owned by
its tenant.</td>
</tr>
<tr>
@@ -1363,8 +1363,8 @@
<para>Starting with the Havana release, the VMware NSX plug-in provides an
asynchronous mechanism for retrieving the operational status for neutron
resources from the NSX back-end; this applies to <emphasis>network</emphasis>,
-<emphasis>port</emphasis>, and <emphasis>router</emphasis> resources.</para>
-<para>The back-end is polled periodically, and the status for every resource is
+<emphasis>port</emphasis> and <emphasis>router</emphasis> resources.</para>
+<para>The back-end is polled periodically and the status for every resource is
retrieved; then the status in the Networking database is updated only for the
resources for which a status change occurred. As operational status is now
retrieved asynchronously, performance for <literal>GET</literal> operations is

View File

@@ -116,7 +116,7 @@ default_notification_level = INFO
# host = myhost.com
# default_publisher_id = $host
-# Defined in rpc_notifier for rpc way, can be comma separated values.
+# Defined in rpc_notifier for rpc way, can be comma-separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications</programlisting>
</section>
@@ -152,7 +152,7 @@ default_notification_level = INFO
# host = myhost.com
# default_publisher_id = $host
-# Defined in rpc_notifier for rpc way, can be comma separated values.
+# Defined in rpc_notifier for rpc way, can be comma-separated values.
# The actual topic names will be %s.%(default_notification_level)s
notification_topics = notifications_one,notifications_two</programlisting>
</section>

View File

@@ -203,7 +203,7 @@
</para>
<formalpara>
<title>Tenant networks</title>
-<para>Tenant networks are created by users for connectivity within projects;
+<para>Users create tenant networks for connectivity within projects;
they are fully isolated by default and are not shared with other projects.
Networking supports a range of tenant network types:
</para>

View File

@@ -62,7 +62,7 @@
<emphasis role="italic">admin_or_network_owner</emphasis>
is a rule.</para>
<para>Policies are triggered by the Networking policy engine
-whenever one of them matches an Networking API operation or a
+whenever one of them matches a Networking API operation or a
specific attribute being used in a given operation. For
instance the <code>create_subnet</code> policy is triggered
every time a <code>POST /v2.0/subnets</code> request is sent
@@ -71,13 +71,13 @@
the <emphasis role="italic">shared</emphasis> attribute is
explicitly specified (and set to a value different from its
default) in a <code>POST /v2.0/networks</code> request. It is
-also worth mentioning that policies can be also related to
+also worth mentioning that policies can also be related to
specific API extensions; for instance
-<code>extension:provider_network:set</code> is be
+<code>extension:provider_network:set</code> is
triggered if the attributes defined by the Provider Network
extensions are specified in an API request.</para>
<para>An authorization policy can be composed by one or more
-rules. If more rules are specified, evaluation policy succeeds
+rules. If more rules are specified then the evaluation policy succeeds
if any of the rules evaluates successfully; if an API
operation matches multiple policies, then all the policies
must evaluate successfully. Also, authorization rules are
@@ -167,8 +167,8 @@
</calloutlist>
<para>In some cases, some operations are restricted to
administrators only. This example shows you how to modify a
-policy file to permit tenants to define networks and see their
-resources and permit administrative users to perform all other
+policy file to permit tenants to define networks, see their
+resources, and permit administrative users to perform all other
operations:</para>
<programlisting language="bash">{
"admin_or_owner": [["role:admin"], ["tenant_id:%(tenant_id)s"]],

View File

@@ -284,7 +284,7 @@ enabled = True</programlisting>
<step>
<para>Create a firewall policy:</para>
<screen><prompt>$</prompt> <userinput>neutron firewall-policy-create --firewall-rules "&lt;firewall-rule IDs or names separated by space&gt;" myfirewallpolicy</userinput></screen>
-<para>The order of the rules specified above is important.You
+<para>The order of the rules specified above is important. You
can create a firewall policy without any rules and add rules
later either with the update operation (when adding multiple
rules) or with the insert-rule operations (when adding a single
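
For context, the policy-creation step shown in this hunk is normally preceded by creating the rules it references; a hedged sketch (rule name and match values are placeholders):

$ neutron firewall-rule-create --name allow-http --protocol tcp --destination-port 80 --action allow
$ neutron firewall-policy-create --firewall-rules "allow-http" myfirewallpolicy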