[arch-design] Minor edits to the compute design section
Minor IA and heading edits, remove duplication, and update links.

Change-Id: I88ee48c883cf04272822d81bbd5ee1c568ebef20
Implements: blueprint arch-design-pike
parent 3cd73e20d0
commit bc4723c23a
@@ -1,11 +1,11 @@
-=============
-Compute nodes
-=============
+===================
+Compute node design
+===================

 .. toctree::
    :maxdepth: 3

-   design-compute/design-compute-concepts
+   design-compute/design-compute-arch
    design-compute/design-compute-cpu
    design-compute/design-compute-hypervisor
    design-compute/design-compute-hardware
@@ -1,5 +1,5 @@
 ====================================
-Compute Server Architecture Overview
+Compute server architecture overview
 ====================================

 When designing compute resource pools, consider the number of processors,
@@ -7,11 +7,12 @@ amount of memory, network requirements, the quantity of storage required for
 each hypervisor, and any requirements for bare metal hosts provisioned
 through ironic.

-When architecting an OpenStack cloud, as part of the planning process, the
-architect must not only determine what hardware to utilize but whether compute
-resources will be provided in a single pool or in multiple pools or
-availability zones. Will the cloud provide distinctly different profiles for
-compute?
+When architecting an OpenStack cloud, as part of the planning process, you
+must not only determine what hardware to utilize but whether compute
+resources will be provided in a single pool or in multiple pools or
+availability zones. You should consider if the cloud will provide distinctly
+different profiles for compute.

 For example, CPU, memory or local storage based compute nodes. For NFV
 or HPC based clouds, there may even be specific network configurations that
 should be reserved for those specific workloads on specific compute nodes. This
@@ -83,7 +84,7 @@ the hardware layout of your compute nodes.
 and cause a potential increase in a noisy neighbor.

 Insufficient disk capacity could also have a negative effect on overall
-performance including CPU and memory usage. Depending on the back-end
+performance including CPU and memory usage. Depending on the back end
 architecture of the OpenStack Block Storage layer, capacity includes
 adding disk shelves to enterprise storage systems or installing
 additional Block Storage nodes. Upgrading directly attached storage
@@ -1,3 +1,5 @@
+.. _choosing-a-cpu:
+
 ==============
 Choosing a CPU
 ==============
@@ -9,11 +11,10 @@ and *AMD-v* for AMD chips.
 .. tip::

    Consult the vendor documentation to check for virtualization support. For
-   Intel, read `“Does my processor support Intel® Virtualization Technology?”
-   <http://www.intel.com/support/processors/sb/cs-030729.htm>`_. For AMD, read
-   `AMD Virtualization
+   Intel CPUs, see
+   `Does my processor support Intel® Virtualization Technology?
+   <http://www.intel.com/support/processors/sb/cs-030729.htm>`_. For AMD CPUs,
+   see `AMD Virtualization
    <http://www.amd.com/en-us/innovations/software-technologies/server-solution/virtualization>`_.
    Your CPU may support virtualization but it may be disabled. Consult your
    BIOS documentation for how to enable CPU features.
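A minimal sketch (an illustrative assumption for Linux compute hosts, not part
of the patch above) of checking ``/proc/cpuinfo`` for the ``vmx`` (Intel VT-x)
and ``svm`` (AMD-V) flags mentioned in the tip:

.. code-block:: python

   # Illustrative sketch: the flags can be present while virtualization is
   # still disabled in the BIOS, so treat this only as a first check.
   def virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
       """Return the virtualization-related CPU flags present on this host."""
       wanted = {"vmx", "svm"}
       found = set()
       with open(cpuinfo_path) as cpuinfo:
           for line in cpuinfo:
               if line.startswith("flags"):
                   found |= wanted.intersection(line.split())
       return found

   if __name__ == "__main__":
       flags = virtualization_flags()
       if flags:
           print("Hardware virtualization flags found: " + ", ".join(sorted(flags)))
       else:
           print("No vmx/svm flags found; check the BIOS or the CPU model.")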
@@ -23,16 +26,17 @@ purchase a server that supports multiple CPUs, the number of cores is further
 multiplied.

 As of the Kilo release, key enhancements have been added to the
-OpenStack code to improve guest performance. These improvements allow OpenStack
-nova to take advantage of greater insight into a Compute host's physical layout
-and therefore make smarter decisions regarding workload placement.
-Administrators can use this functionality to enable smarter planning choices
-for use cases like NFV (Network Function Virtualization) and HPC (High
+OpenStack code to improve guest performance. These improvements allow the
+Compute service to take advantage of greater insight into a compute host's
+physical layout and therefore make smarter decisions regarding workload
+placement. Administrators can use this functionality to enable smarter planning
+choices for use cases like NFV (Network Function Virtualization) and HPC (High
 Performance Computing).

-Considering NUMA is important when selecting CPU sizes and types, there are use
-cases that use NUMA pinning to reserve host cores for OS processes. These
-reduce the available CPU for workloads and protects the OS.
+Considering non-uniform memory access (NUMA) is important when selecting CPU
+sizes and types, as there are use cases that use NUMA pinning to reserve host
+cores for operating system processes. These reduce the available CPU for
+workloads and protect the operating system.

 .. tip::

@@ -55,11 +59,12 @@ reduce the available CPU for workloads and protects the OS.
 Additionally, CPU selection may not be one-size-fits-all across enterprises,
 but more of a list of SKUs that are tuned for the enterprise workloads.

-A deeper discussion about NUMA can be found in `CPU topologies in the Admin
-Guide <https://docs.openstack.org/admin-guide/compute-cpu-topologies.html>`_.
+For more information about NUMA, see `CPU topologies
+<https://docs.openstack.org/admin-guide/compute-cpu-topologies.html>`_ in
+the Administrator Guide.

-In order to take advantage of these new enhancements in OpenStack nova, Compute
-hosts must be using NUMA capable CPUs.
+In order to take advantage of these new enhancements in the Compute service,
+compute hosts must be using NUMA capable CPUs.

 .. tip::

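As a companion to the NUMA discussion above, a minimal sketch (an illustrative
assumption for Linux hosts exposing sysfs, not part of this patch) that lists
the NUMA nodes of a compute host and the CPUs attached to each node:

.. code-block:: python

   import glob
   import os

   # Assumes the standard Linux sysfs layout under /sys/devices/system/node.
   def numa_layout(sysfs_root="/sys/devices/system/node"):
       """Map each NUMA node to its CPU list, for example {'node0': '0-11'}."""
       layout = {}
       for node_dir in sorted(glob.glob(os.path.join(sysfs_root, "node[0-9]*"))):
           with open(os.path.join(node_dir, "cpulist")) as cpulist:
               layout[os.path.basename(node_dir)] = cpulist.read().strip()
       return layout

   if __name__ == "__main__":
       for node, cpus in sorted(numa_layout().items()):
           print("{}: CPUs {}".format(node, cpus))

This kind of inventory helps when deciding which host cores to reserve for
operating system processes and which to expose to workloads.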
@@ -1,8 +1,8 @@
-=========================
+========================
 Choosing server hardware
-=========================
+========================

-Consider the following factors when selecting compute (server) hardware:
+Consider the following factors when selecting compute server hardware:

 * Server density
   A measure of how many servers can fit into a given measure of
@@ -20,10 +20,6 @@ Consider the following factors when selecting compute (server) hardware:
   The relative cost of the hardware weighed against the total amount of
   capacity available on the hardware based on predetermined requirements.

-
-Compute (server) hardware selection
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
 Weigh these considerations against each other to determine the best design for
 the desired purpose. For example, increasing server density means sacrificing
 resource capacity or expandability. It also can decrease availability and
@@ -32,103 +28,36 @@ expandability can increase cost but decrease server density. Decreasing cost
 often means decreasing supportability, availability, server density, resource
 capacity, and expandability.

-The primary job of the OpenStack architect is to determine the requirements for
-the cloud prior to constructing the cloud, and planning for expansion
-and new features that may require different hardware. Planning for hardware
-lifecycles is also the job of the architect. However, if the cloud is initially
-built with near end of life, but cost effective hardware, then the
-performance and capacity demand of new workloads will drive the purchase of
-more modern hardware quicker. With individual harware components changing over
-time, companies may prefer to manage configurations as stock keeping units
-(SKU)s. This method provides an enterprise with a standard configuration unit
-of compute (server) that can be placed in any IT service manager or vendor
-supplied ordering system that can be triggered manually or through advanced
-operational automations. This simplifies ordering, provisioning, and
-activating additional compute resources. For example, there are plug-ins for
-several commercial service management tools that enable integration with
-hardware APIs. This configures and activates new compute resources from standby
-hardware based on a standard configurations. Using this methodology, spare
-hardware can be ordered for a datacenter and provisioned based on
-capacity data derived from OpenStack Telemetry.
+Determine the requirements for the cloud prior to constructing the cloud,
+and plan for hardware lifecycles, expansion, and new features that may
+require different hardware.
+
+If the cloud is initially built with near end of life, but cost effective
+hardware, then the performance and capacity demand of new workloads will drive
+the purchase of more modern hardware. With individual hardware components
+changing over time, you may prefer to manage configurations as stock keeping
+units (SKUs). This method provides an enterprise with a standard
+configuration unit of compute (server) that can be placed in any IT service
+manager or vendor supplied ordering system that can be triggered manually or
+through advanced operational automations. This simplifies ordering,
+provisioning, and activating additional compute resources. For example, there
+are plug-ins for several commercial service management tools that enable
+integration with hardware APIs. These configure and activate new compute
+resources from standby hardware based on a standard configuration. Using this
+methodology, spare hardware can be ordered for a datacenter and provisioned
+based on capacity data derived from OpenStack Telemetry.

 Compute capacity (CPU cores and RAM capacity) is a secondary consideration for
 selecting server hardware. The required server hardware must supply adequate
-CPU sockets, additional CPU cores, and adequate RAM, and is discussed in detail
-under the CPU selection secution.
+CPU sockets, additional CPU cores, and adequate RAM. For more information, see
+:ref:`choosing-a-cpu`.

-However, there are also network and storage considerations for any compute
-server. Network considerations are discussed in the
-`network section
-<https://docs.openstack.org/draft/arch-design-draft/design-networking.html>`_
-of this chapter.
+In compute server architecture design, you must also consider network and
+storage requirements. For more information on network considerations, see
+:ref:`network-design`.

-Scaling your cloud
-~~~~~~~~~~~~~~~~~~
-
-For a compute-focused cloud, emphasis should be on server
-hardware that can offer more CPU sockets, more CPU cores, and more RAM.
-Network connectivity and storage capacity are less critical.
-
-When designing a OpenStack cloud compute server architecture, you must
-consider whether you intend to scale up or scale out. Selecting a
-smaller number of larger hosts, or a larger number of smaller hosts,
-depends on a combination of factors: cost, power, cooling, physical rack
-and floor space, support-warranty, and manageability. Typically, the scale out
-model has been popular for OpenStack because it further reduces the number of
-possible failure domains by spreading workloads across more infrastructure.
-However, the downside is the cost of additional servers and the datacenter
-resources needed to power, network, and cool them.
-
-Hardware selection considerations
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Consider the following in selecting server hardware form factor suited for
-your OpenStack design architecture:
-
-* Most blade servers can support dual-socket multi-core CPUs. To avoid
-  this CPU limit, select ``full width`` or ``full height`` blades. Be
-  aware, however, that this also decreases server density. For example,
-  high density blade servers such as HP BladeSystem or Dell PowerEdge
-  M1000e support up to 16 servers in only ten rack units. Using
-  half-height blades is twice as dense as using full-height blades,
-  which results in only eight servers per ten rack units.
-
-* 1U rack-mounted servers have the ability to offer greater server density
-  than a blade server solution, but are often limited to dual-socket,
-  multi-core CPU configurations. It is possible to place forty 1U servers
-  in a rack, providing space for the top of rack (ToR) switches, compared
-  to 32 full width blade servers.
-
-  To obtain greater than dual-socket support in a 1U rack-mount form
-  factor, customers need to buy their systems from Original Design
-  Manufacturers (ODMs) or second-tier manufacturers.
-
-  .. warning::
-
-     This may cause issues for organizations that have preferred
-     vendor policies or concerns with support and hardware warranties
-     of non-tier 1 vendors.
-
-* 2U rack-mounted servers provide quad-socket, multi-core CPU support,
-  but with a corresponding decrease in server density (half the density
-  that 1U rack-mounted servers offer).
-
-* Larger rack-mounted servers, such as 4U servers, often provide even
-  greater CPU capacity, commonly supporting four or even eight CPU
-  sockets. These servers have greater expandability, but such servers
-  have much lower server density and are often more expensive.
-
-* ``Sled servers`` are rack-mounted servers that support multiple
-  independent servers in a single 2U or 3U enclosure. These deliver
-  higher density as compared to typical 1U or 2U rack-mounted servers.
-  For example, many sled servers offer four independent dual-socket
-  nodes in 2U for a total of eight CPU sockets in 2U.
-
-Other factors to consider
-~~~~~~~~~~~~~~~~~~~~~~~~~
+Considerations when choosing hardware
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Here are some other factors to consider when selecting hardware for your
 compute servers.
@@ -158,27 +87,27 @@ cooling. The number of hosts (or hypervisors) that can be fitted
 into a given metric (rack, rack unit, or floor tile) is another
 important method of sizing. Floor weight is an often overlooked
 consideration.
-The data center floor must be able to support the
-weight of the proposed number of hosts within a rack or set of
-racks. These factors need to be applied as part of the host density
-calculation and server hardware selection.
+
+The data center floor must be able to support the weight of the proposed number
+of hosts within a rack or set of racks. These factors need to be applied as
+part of the host density calculation and server hardware selection.

 Power and cooling density
 -------------------------

 The power and cooling density requirements might be lower than with
-blade,sled, or 1U server designs due to lower host density (by
+blade, sled, or 1U server designs due to lower host density (by
 using 2U, 3U or even 4U server designs). For data centers with older
 infrastructure, this might be a desirable feature.

 Data centers have a specified amount of power fed to a given rack or
-set of racks. Older data centers may have a power density as power
-as low as 20 AMPs per rack, while more recent data centers can be
-architected to support power densities as high as 120 AMP per rack.
-The selected server hardware must take power density into account.
+set of racks. Older data centers may have power densities as low as 20A per
+rack, and current data centers can be designed to support power densities as
+high as 120A per rack. The selected server hardware must take power density
+into account.

-Specific hardware concepts
-~~~~~~~~~~~~~~~~~~~~~~~~~~
+Selecting hardware form factor
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Consider the following in selecting server hardware form factor suited for
 your OpenStack design architecture:
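To make the power-density trade-off concrete, a small worked sketch (the
per-server draw and rack figures below are illustrative assumptions, not part
of this patch) comparing how many 2U servers a 20A feed and a 120A feed can
support:

.. code-block:: python

   RACK_UNITS = 42
   SERVER_UNITS = 2      # 2U servers
   SERVER_AMPS = 4.0     # assumed draw per server

   for rack_amps in (20.0, 120.0):
       by_space = RACK_UNITS // SERVER_UNITS
       by_power = int(rack_amps // SERVER_AMPS)
       print("{:5.0f}A feed: {} servers fit by space, {} by power -> limit {}"
             .format(rack_amps, by_space, by_power, min(by_space, by_power)))

With these assumed numbers, the older 20A rack is power limited to five
servers, while the 120A rack is limited only by physical space.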
@@ -221,3 +150,16 @@ your OpenStack design architecture:
   higher density as compared to typical 1U or 2U rack-mounted servers.
   For example, many sled servers offer four independent dual-socket
   nodes in 2U for a total of eight CPU sockets in 2U.
+
+Scaling your cloud
+~~~~~~~~~~~~~~~~~~
+
+When designing an OpenStack cloud compute server architecture, you must
+decide whether you intend to scale up or scale out. Selecting a
+smaller number of larger hosts, or a larger number of smaller hosts,
+depends on a combination of factors: cost, power, cooling, physical rack
+and floor space, support-warranty, and manageability. Typically, the scale out
+model has been popular for OpenStack because it reduces the number of possible
+failure domains by spreading workloads across more infrastructure.
+However, the downside is the cost of additional servers and the datacenter
+resources needed to power, network, and cool the servers.
@@ -21,22 +21,20 @@ parity, documentation, and the level of community experience.

 As per the recent OpenStack user survey, KVM is the most widely adopted
 hypervisor in the OpenStack community. Besides KVM, there are many deployments
-that run other hypervisors such as LXC, VMware, Xen and Hyper-V. However, these
-hypervisors are either less used, are niche hypervisors, or have limited
-functionality based on the more commonly used hypervisors. This is due to gaps
-in feature parity.
-
-In addition, the nova configuration reference below details feature support for
-hypervisors as well as ironic and Virtuozzo (formerly Parallels).
-
-The best information available to support your choice is found on the
-`Hypervisor Support Matrix
-<https://docs.openstack.org/developer/nova/support-matrix.html>`_
-and in the `configuration reference
-<https://docs.openstack.org/ocata/config-reference/compute/hypervisors.html>`_.
+that run other hypervisors such as LXC, VMware, Xen, and Hyper-V. However,
+these hypervisors are either less used, are niche hypervisors, or have limited
+functionality compared to more commonly used hypervisors.

 .. note::

    It is also possible to run multiple hypervisors in a single
    deployment using host aggregates or cells. However, an individual
    compute node can run only a single hypervisor at a time.
+
+For more information about feature support for
+hypervisors as well as ironic and Virtuozzo (formerly Parallels), see
+`Hypervisor Support Matrix
+<https://docs.openstack.org/developer/nova/support-matrix.html>`_
+and `Hypervisors
+<https://docs.openstack.org/ocata/config-reference/compute/hypervisors.html>`_
+in the Configuration Reference.
@@ -5,12 +5,13 @@ Compute server logging
 The logs on the compute nodes, or any server running nova-compute (for example
 in a hyperconverged architecture), are the primary points for troubleshooting
 issues with the hypervisor and compute services. Additionally, operating system
-logs can also provide useful information. However, as environments grow, the
-amount of log data increases exponentially. Enabling debugging on either the
-OpenStack services or the operating system logging further compounds the data
-issues.
+logs can also provide useful information.

-Logging is detailed more fully in the `Operations Guide
+As the cloud environment grows, the amount of log data increases exponentially.
+Enabling debugging on either the OpenStack services or the operating system
+further compounds the data issues.
+
+Logging is described in more detail in the `Operations Guide
 <http://docs.openstack.org/ops-guide/ops-logging-monitoring.html>`_. However,
 it is an important design consideration to take into account before commencing
 operations of your cloud.
@@ -18,7 +19,7 @@ operations of your cloud.
 OpenStack produces a great deal of useful logging information, but for
 the information to be useful for operations purposes, you should consider
 having a central logging server to send logs to, and a log parsing/analysis
-system (such as Elastic Stack [formerly known as ELK]).
+system such as Elastic Stack [formerly known as ELK].

 Elastic Stack consists of mainly three components: Elasticsearch (log search
 and analysis), Logstash (log intake, processing and output) and Kibana (log
@@ -35,28 +36,28 @@ Redis and Memcached. In newer versions of Elastic Stack, a file buffer called
 similar purpose but adds a "backpressure-sensitive" protocol when sending data
 to Logstash or Elasticsearch.

-Many times, log analysis requires disparate logs of differing formats, Elastic
-Stack (namely logstash) was created to take many different log inputs and then
-transform them into a consistent format that elasticsearch can catalog and
+Log analysis often requires disparate logs of differing formats. Elastic
+Stack (namely Logstash) was created to take many different log inputs and
+transform them into a consistent format that Elasticsearch can catalog and
 analyze. As seen in the image above, the process of ingestion starts on the
-servers by logstash, is forwarded to the elasticsearch server for storage and
-searching and then displayed via Kibana for visual analysis and interaction.
+servers by Logstash, is forwarded to the Elasticsearch server for storage and
+searching, and then displayed through Kibana for visual analysis and
+interaction.

-For instructions on installing Logstash, Elasticsearch and Kibana see `the
-elastic stack documentation.
+For instructions on installing Logstash, Elasticsearch and Kibana, see the
+`Elasticsearch reference
 <https://www.elastic.co/guide/en/elasticsearch/reference/current/getting-started.html>`_.

 There are some specific configuration parameters that are needed to
-configure logstash for OpenStack. For example, in order to get logstash to
-collect, parse and send the correct portions of log files and send them to the
-elasticsearch server, you need to format the configuration file properly. There
-are input, output and filter configurations. Input configurations tell logstash
-who and what to recieve data from (log files/
-forwarders/filebeats/StdIn/Eventlog/etc.), output specifies where to put the
-data, and filter configurations define the input contents to forward to the
-output.
+configure Logstash for OpenStack. For example, in order to get Logstash to
+collect, parse, and send the correct portions of log files to the Elasticsearch
+server, you need to format the configuration file properly. There
+are input, output and filter configurations. Input configurations tell Logstash
+where to receive data from (log files/forwarders/filebeats/StdIn/Eventlog),
+output configurations specify where to put the data, and filter configurations
+define the input contents to forward to the output.

-The logstash filter performs intermediary processing on each event. Conditional
+The Logstash filter performs intermediary processing on each event. Conditional
 filters are applied based on the characteristics of the input and the event.
 Some examples of filtering are:

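To illustrate the kind of transformation a Logstash filter performs, a minimal
sketch (not part of this patch; the regular expression and the sample log line
are illustrative assumptions, not a production grok pattern) that parses one
OpenStack-style log line into structured fields a search back end could index:

.. code-block:: python

   import json
   import re

   # Rough shape of an oslo.log line: timestamp, pid, level, module,
   # optional request context in brackets, then the message.
   OSLO_LOG = re.compile(
       r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) "
       r"(?P<pid>\d+) (?P<level>[A-Z]+) (?P<module>\S+) "
       r"(?:\[(?P<request_id>[^\]]*)\] )?(?P<message>.*)")

   sample = ("2017-06-14 09:51:32.123 2745 INFO nova.compute.manager "
             "[req-abc123 - - - - -] [instance: 1234] VM Started (Lifecycle Event)")

   match = OSLO_LOG.match(sample)
   if match:
       print(json.dumps(match.groupdict(), indent=2, sort_keys=True))

In a real deployment this parsing is expressed in the Logstash filter
configuration rather than in Python, but the result is the same: free-form log
text becomes named fields that Elasticsearch can catalog.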
@@ -88,16 +89,15 @@ representation of events such as:

 These input, output and filter configurations are typically stored in
 :file:`/etc/logstash/conf.d` but may vary by linux distribution. Separate
-configuration files should be created for different logging systems(syslog,
-apache, OpenStack, etc.)
+configuration files should be created for different logging systems such as
+syslog, Apache, and OpenStack.

 General examples and configuration guides can be found on the Elastic `Logstash
 Configuration page
-<https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html>`_
+<https://www.elastic.co/guide/en/logstash/current/configuration-file-structure.html>`_.

 OpenStack input, output and filter examples can be found at
-`https://github.com/sorantis/elkstack/tree/master/elk/logstash
-<https://github.com/sorantis/elkstack/tree/master/elk/logstash>`_
+https://github.com/sorantis/elkstack/tree/master/elk/logstash.

 Once a configuration is complete, Kibana can be used as a visualization tool
 for OpenStack and system logging. This will allow operators to configure custom
@@ -1,6 +1,6 @@
-=====================
+====================
 Network connectivity
-=====================
+====================

 The selected server hardware must have the appropriate number of network
 connections, as well as the right type of network connections, in order to
@@ -1,11 +1,11 @@
-==============
-Overcommitting
-==============
+==========================
+Overcommitting CPU and RAM
+==========================

 OpenStack allows you to overcommit CPU and RAM on compute nodes. This
-allows you to increase the number of instances you can have running on
-your cloud, at the cost of reducing the performance of the instances.
-OpenStack Compute uses the following ratios by default:
+allows you to increase the number of instances running on your cloud at the
+cost of reducing the performance of the instances. The Compute service uses the
+following ratios by default:

 * CPU allocation ratio: 16:1
 * RAM allocation ratio: 1.5:1
@@ -20,13 +20,13 @@ The formula for the number of virtual instances on a compute node is
 ``(OR*PC)/VC``, where:

 OR
   CPU overcommit ratio (virtual cores per physical core)

 PC
   Number of physical cores

 VC
   Number of virtual cores per instance

 Similarly, the default RAM allocation ratio of 1.5:1 means that the
 scheduler allocates instances to a physical node as long as the total
@@ -39,6 +39,7 @@ with the instances reaches 72 GB (such as nine instances, in the case
 where each instance has 8 GB of RAM).

 .. note::

    Regardless of the overcommit ratio, an instance can not be placed
    on any physical node with fewer raw (pre-overcommit) resources than
    the instance flavor requires.
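A worked sketch of the ``(OR*PC)/VC`` formula and the RAM ratio above (the
per-host core count and the flavor sizes are assumptions chosen to match the
48 GB / 72 GB example in the surrounding text, not values from the patch):

.. code-block:: python

   def max_instances(cpu_ratio, physical_cores, vcpus_per_instance,
                     ram_ratio, physical_ram_gb, ram_per_instance_gb):
       """Instances per host allowed by the CPU and RAM overcommit ratios."""
       by_cpu = (cpu_ratio * physical_cores) // vcpus_per_instance
       by_ram = (ram_ratio * physical_ram_gb) // ram_per_instance_gb
       return int(min(by_cpu, by_ram))

   # Assumed host: 24 physical cores and 48 GB RAM; assumed flavor: 4 vCPUs
   # and 8 GB RAM; default ratios of 16:1 (CPU) and 1.5:1 (RAM).
   print(max_instances(16, 24, 4, 1.5, 48, 8))

With these assumptions the CPU ratio would allow 96 instances, but the RAM
ratio caps the host at nine instances (72 GB / 8 GB), so RAM is the limit.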
@@ -2,7 +2,7 @@
 Instance storage solutions
 ==========================

-As part of the architecture design for a compute cluster, you must specify some
+As part of the architecture design for a compute cluster, you must specify
 storage for the disk on which the instantiated instance runs. There are three
 main approaches to providing temporary storage:

@@ -122,7 +122,7 @@ from one physical host to another, a necessity for performing upgrades
 that require reboots of the compute hosts, but only works well with
 shared storage.

-Live migration can also be done with nonshared storage, using a feature
+Live migration can also be done with non-shared storage, using a feature
 known as *KVM live block migration*. While an earlier implementation of
 block-based migration in KVM and QEMU was considered unreliable, there
 is a newer, more reliable implementation of block-based live migration
@@ -1,8 +1,8 @@
 .. _network-design:

-==============
-Network design
-==============
+==========
+Networking
+==========

 .. toctree::
    :maxdepth: 2