Use lowercase letters for node names

Change-Id: Icce849acc702f10038285cbe98342b5c38cd8b5b
Author: KATO Tomoyuki 2017-01-10 11:38:24 +09:00
Parent: 17a4fe0fe8
Commit: 9937d82cde
12 changed files with 24 additions and 24 deletions


@@ -300,10 +300,10 @@ image is transferred over this connection. The Image service streams the
image from the back end to the compute node.
It is possible to set up the Object Storage node on a separate network,
-and still allow image traffic to flow between the Compute and Object
-Storage nodes. Configure the ``my_block_storage_ip`` option in the
+and still allow image traffic to flow between the compute and object
+storage nodes. Configure the ``my_block_storage_ip`` option in the
storage node configuration file to allow block storage traffic to reach
-the Compute node.
+the compute node.
Certain back ends support a more direct method, where on request the
Image service will return a URL that links directly to the back-end store.
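
For reference, a minimal sketch of how the ``my_block_storage_ip`` option might be set; the section and the address are illustrative assumptions, not values taken from the files changed in this commit:

.. code-block:: ini

   [DEFAULT]
   # Illustrative address of this node on the dedicated storage network
   my_block_storage_ip = 10.10.20.31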


@@ -23,7 +23,7 @@ System administration
compute-node-down.rst
compute-adv-config.rst
-To effectively administer Compute, you must understand how the different
+To effectively administer compute, you must understand how the different
installed nodes interact with each other. Compute can be installed in
many different ways using multiple servers, but generally multiple
compute nodes control the virtual servers and a cloud controller node
@@ -51,7 +51,7 @@ deployment. The responsibilities of services and drivers are:
Procedure Call (RPC).
``nova-conductor``
-provides database-access support for Compute nodes
+provides database-access support for compute nodes
(thereby reducing security risks).
``nova-consoleauth``


@@ -77,7 +77,7 @@ controller.
The diagrams below depict some VMware NSX deployment examples. The first
diagram illustrates the traffic flow between VMs on separate Compute
-nodes, and the second diagram between two VMs on a single Compute node.
+nodes, and the second diagram between two VMs on a single compute node.
Note the placement of the VMware NSX plug-in and the neutron-server
service on the network node. The green arrow indicates the management
relationship between the NSX controller and the network node.


@@ -200,7 +200,7 @@ Compute agent
-------------
This agent is responsible for collecting resource usage data of VM
-instances on individual Compute nodes within an OpenStack deployment.
+instances on individual compute nodes within an OpenStack deployment.
This mechanism requires a closer interaction with the hypervisor,
therefore a separate agent type fulfills the collection of the related
meters, which is placed on the host machines to retrieve this
@@ -218,7 +218,7 @@ database connection. The samples are sent via AMQP to the notification agent.
The list of supported hypervisors can be found in
:ref:`telemetry-supported-hypervisors`. The Compute agent uses the API of the
-hypervisor installed on the Compute hosts. Therefore, the supported meters may
+hypervisor installed on the compute hosts. Therefore, the supported meters may
be different in case of each virtualization back end, as each inspection tool
provides a different set of meters.
@@ -272,7 +272,7 @@ instances.
Therefore Telemetry uses another method to gather this data by polling
the infrastructure including the APIs of the different OpenStack
services and other assets, like hypervisors. The latter case requires
-closer interaction with the Compute hosts. To solve this issue,
+closer interaction with the compute hosts. To solve this issue,
Telemetry uses an agent based architecture to fulfill the requirements
against the data collection.
@@ -359,15 +359,15 @@ IPMI agent
----------
This agent is responsible for collecting IPMI sensor data and Intel Node
-Manager data on individual Compute nodes within an OpenStack deployment.
+Manager data on individual compute nodes within an OpenStack deployment.
This agent requires an IPMI capable node with the ipmitool utility installed,
which is commonly used for IPMI control on various Linux distributions.
-An IPMI agent instance could be installed on each and every Compute node
+An IPMI agent instance could be installed on each and every compute node
with IPMI support, except when the node is managed by the Bare metal
service and the ``conductor.send_sensor_data`` option is set to ``true``
in the Bare metal service. It is no harm to install this agent on a
-Compute node without IPMI or Intel Node Manager support, as the agent
+compute node without IPMI or Intel Node Manager support, as the agent
checks for the hardware and if none is available, returns empty data. It
is suggested that you install the IPMI agent only on an IPMI capable
node for performance reasons.
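
As a reference for the ``conductor.send_sensor_data`` option mentioned above, a minimal sketch of how it could be enabled in the Bare metal service configuration; the value shown is only illustrative:

.. code-block:: ini

   [conductor]
   # Let the Bare metal service forward IPMI sensor data itself,
   # so no separate IPMI agent is needed on those nodes
   send_sensor_data = true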


@@ -37,7 +37,7 @@ The solution would consist of the following OpenStack components:
RabbitMQ, configured for high availability on at least three controller
nodes.
-* OpenStack Compute nodes running the KVM hypervisor.
+* OpenStack compute nodes running the KVM hypervisor.
* OpenStack Block Storage for use by compute instances, requiring
persistent storage (such as databases for dynamic sites).


@@ -31,7 +31,7 @@ The solution would consist of the following OpenStack components:
combined with support services such as MariaDB and RabbitMQ,
configured for high availability on at least three controller nodes.
-* OpenStack Compute nodes running the KVM hypervisor.
+* OpenStack compute nodes running the KVM hypervisor.
* OpenStack Block Storage for use by compute instances, requiring
persistent storage (such as databases for dynamic sites).
@@ -44,10 +44,10 @@ Running up to 140 web instances and the small number of MariaDB
instances requires 292 vCPUs available, as well as 584 GB RAM. On a
typical 1U server using dual-socket hex-core Intel CPUs with
Hyperthreading, and assuming 2:1 CPU overcommit ratio, this would
-require 8 OpenStack Compute nodes.
+require 8 OpenStack compute nodes.
The web application instances run from local storage on each of the
-OpenStack Compute nodes. The web application instances are stateless,
+OpenStack compute nodes. The web application instances are stateless,
meaning that any of the instances can fail and the application will
continue to function.


@@ -100,7 +100,7 @@ The solution would consist of the following OpenStack components:
nodes in each of the region providing a redundant OpenStack
Controller plane throughout the globe.
-* OpenStack Compute nodes running the KVM hypervisor.
+* OpenStack compute nodes running the KVM hypervisor.
* OpenStack Object Storage for serving static objects such as images
can be used to ensure that all images are standardized across all the


@@ -52,7 +52,7 @@ Possible solutions: hypervisor
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In the case of running TripleO, the underlying OpenStack
-cloud deploys the Compute nodes as bare-metal. You then deploy
+cloud deploys the compute nodes as bare-metal. You then deploy
OpenStack on these Compute bare-metal servers with the
appropriate hypervisor, such as KVM.
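
As an illustration of the hypervisor choice described above, a minimal sketch of the usual ``nova.conf`` setting on a compute node; the file and value are assumptions, not part of the change shown here:

.. code-block:: ini

   [libvirt]
   # Assumed example: run instances under KVM on the deployed compute nodes
   virt_type = kvm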


@@ -338,7 +338,7 @@ Storage group automatic deletion
For volume attaching, the driver has a storage group on VNX for each compute
node hosting the vm instances which are going to consume VNX Block Storage
(using compute node's host name as storage group's name). All the volumes
-attached to the VM instances in a Compute node will be put into the storage
+attached to the VM instances in a compute node will be put into the storage
group. If ``destroy_empty_storage_group`` is set to ``True``, the driver will
remove the empty storage group after its last volume is detached. For data
safety, it does not suggest to set ``destroy_empty_storage_group=True`` unless
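
As a hedged sketch of the ``destroy_empty_storage_group`` option discussed above, placed in a hypothetical VNX back-end section of ``cinder.conf``; the section name is an assumption:

.. code-block:: ini

   [vnx_backend]
   # Hypothetical back-end section; the safer default keeps empty
   # storage groups instead of deleting them automatically
   destroy_empty_storage_group = False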
@@ -379,7 +379,7 @@ iSCSI initiators
----------------
``iscsi_initiators`` is a dictionary of IP addresses of the iSCSI
-initiator ports on OpenStack Compute and Block Storage nodes which want to
+initiator ports on OpenStack compute and block storage nodes which want to
connect to VNX via iSCSI. If this option is configured, the driver will
leverage this information to find an accessible iSCSI target portal for the
initiator when attaching volume. Otherwise, the iSCSI target portal will be
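
A minimal sketch of the dictionary format that ``iscsi_initiators`` takes, with hypothetical host names, addresses, and back-end section:

.. code-block:: ini

   [vnx_backend]
   # Hypothetical hosts mapped to the IP addresses of their iSCSI initiator ports
   iscsi_initiators = {"compute1": ["10.0.0.11"], "block1": ["10.0.0.21"]}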
@@ -781,7 +781,7 @@ Enabling multipath volume access is recommended for robust data access.
The major configuration includes:
#. Install ``multipath-tools``, ``sysfsutils`` and ``sg3-utils`` on the
-nodes hosting Nova-Compute and Cinder-Volume services. Check
+nodes hosting compute and ``cinder-volume`` services. Check
the operating system manual for the system distribution for specific
installation steps. For Red Hat based distributions, they should be
``device-mapper-multipath``, ``sysfsutils`` and ``sg3_utils``.
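
To complement the package list above, a hedged sketch of the multipath settings commonly paired with it; the file locations and option names are assumptions based on typical Block Storage and Compute configurations, not part of the files changed in this commit:

.. code-block:: ini

   # Assumed /etc/cinder/cinder.conf
   [DEFAULT]
   use_multipath_for_image_xfer = True

   # Assumed /etc/nova/nova.conf
   [libvirt]
   iscsi_use_multipath = True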


@@ -10,7 +10,7 @@ An OpenStack environment includes multiple data pools for the VMs:
- Ephemeral storage is allocated for an instance and is deleted when the
instance is deleted. The Compute service manages ephemeral storage and
by default, Compute stores ephemeral drives as files on local disks on the
-Compute node. As an alternative, you can use Ceph RBD as the storage back
+compute node. As an alternative, you can use Ceph RBD as the storage back
end for ephemeral storage.
- Persistent storage exists outside all instances. Two types of persistent


@@ -18,7 +18,7 @@ interfaces can connect guests to this datapath. For more information on DPDK,
refer to the `DPDK <http://dpdk.org/>`__ website.
OVS with DPDK, or OVS-DPDK, can be used to provide high-performance networking
-between instances on OpenStack Compute nodes.
+between instances on OpenStack compute nodes.
Prerequisites
-------------


@@ -332,7 +332,7 @@ staple.
You can create automated alerts for critical processes by using Nagios
and NRPE. For example, to ensure that the ``nova-compute`` process is
-running on Compute nodes, create an alert on your Nagios server:
+running on the compute nodes, create an alert on your Nagios server:
.. code-block:: ini