<?xml version="1.0" encoding="utf-8"?>
<chapter xmlns="http://docbook.org/ns/docbook"
         xmlns:xi="http://www.w3.org/2001/XInclude"
         xmlns:xlink="http://www.w3.org/1999/xlink"
         version="5.0"
         xml:id="module001-ch004-openstack-architecture">
  <title>OpenStack Architecture</title>
  <para><guilabel>Conceptual Architecture</guilabel></para>
  <para>The OpenStack project as a whole is designed to deliver a
    massively scalable cloud operating system. To achieve this, each
    of the constituent services is designed to work with the others
    to provide a complete Infrastructure as a Service (IaaS). This
    integration is facilitated through public application
    programming interfaces (APIs) that each service offers (and in
    turn can consume). While these APIs allow each service to use
    another service, they also allow an implementer to switch out
    any service as long as the API is maintained. These are (mostly)
    the same APIs that are available to end users of the
    cloud.</para>
  <para>Conceptually, you can picture the relationships between the
    services as follows:</para>
  <figure>
    <title>Conceptual Diagram</title>
    <mediaobject>
      <imageobject>
        <imagedata fileref="figures/image13.jpg"/>
      </imageobject>
    </mediaobject>
  </figure>
  <itemizedlist>
    <listitem>
      <para>Dashboard ("Horizon") provides a web front end to the
        other OpenStack services.</para>
    </listitem>
    <listitem>
      <para>Compute ("Nova") stores and retrieves virtual disks
        ("images") and associated metadata in Image
        ("Glance").</para>
    </listitem>
    <listitem>
      <para>Network ("Quantum") provides virtual networking for
        Compute.</para>
    </listitem>
    <listitem>
      <para>Block Storage ("Cinder") provides storage volumes for
        Compute.</para>
    </listitem>
    <listitem>
      <para>Image ("Glance") can store the actual virtual disk files
        in the Object Store ("Swift").</para>
    </listitem>
    <listitem>
      <para>All the services authenticate with Identity
        ("Keystone").</para>
    </listitem>
  </itemizedlist>
  <para>This is a stylized and simplified view of the architecture,
    assuming that the implementer is using all of the services
    together in the most common configuration. It also only shows
    the "operator" side of the cloud -- it does not picture how
    consumers of the cloud may actually use it. For example, many
    users will access object storage heavily (and directly).</para>
  <para><guilabel>Logical Architecture</guilabel></para>
  <para>This picture is consistent with the conceptual architecture
    above:</para>
  <figure>
    <title>Logical Diagram</title>
    <mediaobject>
      <imageobject>
        <imagedata fileref="figures/image31.jpg"/>
      </imageobject>
    </mediaobject>
  </figure>
  <itemizedlist>
    <listitem>
      <para>End users can interact through a common web interface
        (Horizon) or directly with each service through its
        API.</para>
    </listitem>
    <listitem>
      <para>All services authenticate through a common source
        (facilitated through Keystone).</para>
    </listitem>
    <listitem>
      <para>Individual services interact with each other through
        their public APIs (except where privileged administrator
        commands are necessary).</para>
    </listitem>
  </itemizedlist>
  <para>In the sections below, we'll delve into the architecture for
    each of the services.</para>
  <para><guilabel>Dashboard</guilabel></para>
  <para>Horizon is a modular Django web application that provides
    an end user and administrator interface to OpenStack
    services.</para>
  <figure>
    <title>Horizon Dashboard</title>
    <mediaobject>
      <imageobject>
        <imagedata fileref="figures/image10.jpg"/>
      </imageobject>
    </mediaobject>
  </figure>
  <para>As with most web applications, the architecture is fairly
    simple:</para>
  <itemizedlist>
    <listitem>
      <para>Horizon is usually deployed via mod_wsgi in Apache. The
        code itself is separated into a reusable Python module
        containing most of the logic (interactions with the various
        OpenStack APIs) and a presentation layer (making it easy to
        customize for different sites).</para>
    </listitem>
    <listitem>
      <para>A database (the backend is configurable). Because
        Horizon relies mostly on the other services for data, it
        stores very little data of its own.</para>
    </listitem>
  </itemizedlist>
  <para>From a network architecture point of view, this service
    needs to be accessible to customers as well as able to talk to
    each service's public APIs. If you wish to use the administrator
    functionality (i.e., administrative operations on the other
    services), it also needs connectivity to their Admin API
    endpoints (which should not be customer accessible).</para>
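  <para>To make that network requirement concrete, the following is
    a minimal, illustrative sketch of the Django-style settings a
    Horizon deployment is typically given (as in a local_settings.py
    file). It only shows that Horizon needs to know where Keystone
    is; the other services' public endpoints are discovered from the
    Keystone service catalog at login. The hostname, role name and
    cache location are assumptions, not values from this
    guide.</para>
  <programlisting language="python"># Illustrative Horizon settings sketch; hostnames and values are assumptions.
OPENSTACK_HOST = "keystone.example.com"
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "Member"

# Horizon keeps little state of its own; sessions are often kept in
# memcached rather than in its database.
CACHE_BACKEND = "memcached://127.0.0.1:11211/"
</programlisting>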
  <para><guilabel>Compute</guilabel></para>
  <para>Nova is the most complicated and distributed component of
    OpenStack. A large number of processes cooperate to turn end
    user API requests into running virtual machines. Below is a
    list of these processes and their functions:</para>
  <itemizedlist>
    <listitem>
      <para>nova-api accepts and responds to end user compute API
        calls. It supports the OpenStack Compute API, Amazon's EC2
        API and a special Admin API (for privileged users to
        perform administrative actions). It also initiates most of
        the orchestration activities (such as running an instance)
        and enforces some policy (mostly quota checks).</para>
    </listitem>
    <listitem>
      <para>The nova-compute process is primarily a worker daemon
        that creates and terminates virtual machine instances via
        hypervisor APIs (XenAPI for XenServer/XCP, libvirt for KVM
        or QEMU, VMwareAPI for VMware, and so on). The process by
        which it does so is fairly complex, but the basics are
        simple: accept actions from the queue and then perform a
        series of system commands (like launching a KVM instance)
        to carry them out while updating state in the
        database.</para>
    </listitem>
    <listitem>
      <para>nova-volume manages the creation, attaching and
        detaching of persistent volumes to compute instances
        (similar functionality to Amazon’s Elastic Block Storage).
        It can use volumes from a variety of providers such as
        iSCSI or Rados Block Device in Ceph. A new OpenStack
        project, Cinder, will eventually replace nova-volume
        functionality. In the Folsom release, nova-volume and the
        Block Storage service will have similar
        functionality.</para>
    </listitem>
    <listitem>
      <para>The nova-network worker daemon is very similar to
        nova-compute and nova-volume. It accepts networking tasks
        from the queue and then performs tasks to manipulate the
        network (such as setting up bridging interfaces or
        changing iptables rules). This functionality is being
        migrated to Quantum, a separate OpenStack service. In the
        Folsom release, much of the functionality will be
        duplicated between nova-network and Quantum.</para>
    </listitem>
    <listitem>
      <para>The nova-scheduler process is conceptually the simplest
        piece of code in OpenStack Nova: it takes a virtual machine
        instance request from the queue and determines where it
        should run (specifically, which compute server host it
        should run on).</para>
    </listitem>
    <listitem>
      <para>The queue provides a central hub for passing messages
        between daemons. This is usually implemented with RabbitMQ
        today, but could be any AMQP message queue (such as Apache
        Qpid). New to the Folsom release is support for ZeroMQ. (A
        short message-passing sketch follows this list.)</para>
    </listitem>
    <listitem>
      <para>The SQL database stores most of the build-time and
        runtime state for a cloud infrastructure. This includes
        the instance types that are available for use, instances
        in use, networks available and projects. Theoretically,
        OpenStack Nova can support any database supported by
        SQLAlchemy, but the only databases currently being widely
        used are sqlite3 (only appropriate for test and
        development work), MySQL and PostgreSQL.</para>
    </listitem>
    <listitem>
      <para>Nova also provides console services to allow end users
        to access their virtual instance's console through a
        proxy. This involves several daemons (nova-console,
        nova-novncproxy and nova-consoleauth).</para>
    </listitem>
  </itemizedlist>
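  <para>The sketch below illustrates the style of queue-based
    message passing described above, using the pika AMQP client
    against a RabbitMQ broker. It is only an illustration of the
    mechanism: the broker host, queue name and message body are
    assumptions, and the payload is not Nova's actual RPC
    envelope.</para>
  <programlisting language="python"># Illustrative AMQP publish; not Nova's real RPC format.
import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters("rabbit.example.com"))
channel = connection.channel()

# A worker daemon such as nova-compute would consume from a queue like this.
channel.queue_declare(queue="demo.compute")

# A producer such as nova-api or nova-scheduler publishes a task message.
channel.basic_publish(exchange="",
                      routing_key="demo.compute",
                      body='{"method": "run_instance", "args": {}}')
connection.close()
</programlisting>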
  <para>Nova interacts with many other OpenStack services: Keystone
    for authentication, Glance for images and Horizon for the web
    interface. The Glance interactions are central: the API process
    can upload to and query Glance, while nova-compute downloads
    images for use in launching instances.</para>
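  <para>To make the request flow concrete, the following is a hedged
    sketch of the call that starts that whole chain: a boot request
    sent to nova-api over the Compute API. The endpoint, token,
    image and flavor identifiers are placeholder assumptions; in a
    real cloud they come from the Keystone service catalog, Glance
    and the configured instance types.</para>
  <programlisting language="python"># Illustrative Compute API boot request; identifiers are assumptions.
import json
import requests

NOVA_URL = "http://nova-api.example.com:8774/v2/TENANT_ID"  # from the catalog
TOKEN = "..."  # issued by Keystone (see the Identity section)

server = {
    "server": {
        "name": "test-instance",
        "imageRef": "IMAGE_UUID",  # a Glance image
        "flavorRef": "1",          # an instance type such as m1.tiny
    }
}
resp = requests.post(NOVA_URL + "/servers",
                     data=json.dumps(server),
                     headers={"X-Auth-Token": TOKEN,
                              "Content-Type": "application/json"})
print(resp.status_code, resp.json()["server"]["id"])
</programlisting>
  <para>From here, nova-api validates the request and quota, the
    scheduler picks a host over the queue, and nova-compute on that
    host talks to the hypervisor and updates the database.</para>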
  <para><guilabel>Object Store</guilabel></para>
  <para>The Swift architecture is very distributed to prevent any
    single point of failure as well as to scale horizontally. It
    includes the following components:</para>
  <itemizedlist>
    <listitem>
      <para>Proxy server (swift-proxy-server) accepts incoming
        requests via the OpenStack Object API or just raw HTTP. It
        accepts files to upload, modifications to metadata or
        container creation. In addition, it will also serve files
        or container listings to web browsers. The proxy server
        may utilize an optional cache (usually deployed with
        memcached) to improve performance.</para>
    </listitem>
    <listitem>
      <para>Account servers manage accounts defined with the
        object storage service.</para>
    </listitem>
    <listitem>
      <para>Container servers manage a mapping of containers
        (i.e., folders) within the object store service.</para>
    </listitem>
    <listitem>
      <para>Object servers manage actual objects (i.e., files) on
        the storage nodes.</para>
    </listitem>
    <listitem>
      <para>There are also a number of periodic processes which run
        to perform housekeeping tasks on the large data store. The
        most important of these is the replication service, which
        ensures consistency and availability throughout the
        cluster. Other periodic processes include auditors,
        updaters and reapers.</para>
    </listitem>
  </itemizedlist>
  <para>Authentication is handled through configurable WSGI
    middleware (which will usually be Keystone).</para>
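  <para>As a hedged sketch of what the proxy server actually
    accepts, the example below creates a container, uploads an
    object and then lists the container over plain HTTP. The storage
    URL and token are handed out by the authentication middleware
    (Keystone here); the account, container and object names are
    illustrative assumptions.</para>
  <programlisting language="python"># Illustrative Object API calls against the proxy server.
import requests

STORAGE_URL = "http://swift-proxy.example.com:8080/v1/AUTH_tenant"
TOKEN = "..."  # issued by the auth middleware (Keystone)
headers = {"X-Auth-Token": TOKEN}

# Create a container, upload an object into it, then list the container.
requests.put(STORAGE_URL + "/backups", headers=headers)
requests.put(STORAGE_URL + "/backups/notes.txt", headers=headers,
             data="hello object store")
listing = requests.get(STORAGE_URL + "/backups", headers=headers)
print(listing.text)  # plain-text list of object names
</programlisting>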
  <para><guilabel>Image Store</guilabel></para>
  <para>The Glance architecture has stayed relatively stable since
    the Cactus release. The biggest architectural change has been
    the addition of authentication, which was added in the Diablo
    release. Just as a quick reminder, Glance has four main
    parts:</para>
  <itemizedlist>
    <listitem>
      <para>glance-api accepts Image API calls for image
        discovery, image retrieval and image storage.</para>
    </listitem>
    <listitem>
      <para>glance-registry stores, processes and retrieves
        metadata about images (size, type, etc.).</para>
    </listitem>
    <listitem>
      <para>A database to store the image metadata. Like Nova, you
        can choose your database depending on your preference (but
        most people use MySQL or SQLite).</para>
    </listitem>
    <listitem>
      <para>A storage repository for the actual image files. In
        the diagram above, Swift is shown as the image repository,
        but this is configurable. In addition to Swift, Glance
        supports normal filesystems, RADOS block devices, Amazon
        S3 and HTTP. Be aware that some of these choices are
        limited to read-only usage.</para>
    </listitem>
  </itemizedlist>
  <para>There are also a number of periodic processes which run on
    Glance to support caching. The most important of these is the
    replication service, which ensures consistency and availability
    throughout the cluster. Other periodic processes include
    auditors, updaters and reapers.</para>
  <para>As you can see from the diagram in the Conceptual
    Architecture section, Glance plays a central role in the
    overall IaaS picture. It accepts API requests for images (or
    image metadata) from end users or Nova components and can
    store its disk files in the object storage service,
    Swift.</para>
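  <para>The sketch below shows, in hedged form, both halves of that
    role against the v1 Images API: registering an image (its
    metadata travels as x-image-meta-* headers, the disk file as the
    request body) and then listing what Glance knows about. The
    endpoint, token and file name are illustrative
    assumptions.</para>
  <programlisting language="python"># Illustrative Images API (v1) calls; endpoint and file are assumptions.
import requests

GLANCE_URL = "http://glance.example.com:9292/v1"
TOKEN = "..."  # issued by Keystone

headers = {
    "X-Auth-Token": TOKEN,
    "x-image-meta-name": "cirros-test",
    "x-image-meta-disk_format": "qcow2",
    "x-image-meta-container_format": "bare",
    "x-image-meta-is_public": "True",
}
with open("cirros.qcow2", "rb") as image_file:
    created = requests.post(GLANCE_URL + "/images",
                            headers=headers, data=image_file)
print(created.json()["image"]["id"])

# nova-compute later fetches the same bits when launching an instance.
listing = requests.get(GLANCE_URL + "/images",
                       headers={"X-Auth-Token": TOKEN})
print(listing.json()["images"])
</programlisting>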
  <para><guilabel>Identity</guilabel></para>
  <para>Keystone provides a single point of integration for
    OpenStack policy, catalog, token and authentication.</para>
  <itemizedlist>
    <listitem>
      <para>keystone handles API requests and provides
        configurable catalog, policy, token and identity
        services.</para>
    </listitem>
    <listitem>
      <para>Each Keystone function has a pluggable backend which
        allows different ways to use the particular service. Most
        support standard backends like LDAP or SQL, as well as Key
        Value Stores (KVS).</para>
    </listitem>
  </itemizedlist>
  <para>Most people will use this as a point of customization for
    their current authentication services.</para>
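  <para>The following is a minimal, hedged sketch of the token
    workflow every other service depends on: a credentials request
    to the Identity API (v2.0 here) returns a token plus the service
    catalog of endpoints. The endpoint, tenant and credentials are
    illustrative assumptions.</para>
  <programlisting language="python"># Illustrative Identity API (v2.0) token request; values are assumptions.
import json
import requests

KEYSTONE_URL = "http://keystone.example.com:5000/v2.0"

payload = {"auth": {"tenantName": "demo",
                    "passwordCredentials": {"username": "demo",
                                            "password": "secret"}}}
resp = requests.post(KEYSTONE_URL + "/tokens",
                     data=json.dumps(payload),
                     headers={"Content-Type": "application/json"})
access = resp.json()["access"]

# The token authenticates later requests; the service catalog lists the
# public, internal and admin endpoints of every registered service.
token = access["token"]["id"]
for service in access["serviceCatalog"]:
    print(service["type"], service["endpoints"][0]["publicURL"])
</programlisting>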
  <para><guilabel>Network</guilabel></para>
  <para>Quantum provides "network connectivity as a service"
    between interface devices managed by other OpenStack services
    (most likely Nova). The service works by allowing users to
    create their own networks and then attach interfaces to them.
    Like many of the OpenStack services, Quantum is highly
    configurable due to its plug-in architecture. These plug-ins
    accommodate different networking equipment and software. As
    such, the architecture and deployment can vary dramatically.
    In the above architecture, a simple Linux networking plug-in
    is shown.</para>
  <itemizedlist>
    <listitem>
      <para>quantum-server accepts API requests and then routes
        them to the appropriate Quantum plug-in for action.</para>
    </listitem>
    <listitem>
      <para>Quantum plug-ins and agents perform the actual actions,
        such as plugging and unplugging ports, creating networks
        or subnets and IP addressing. These plug-ins and agents
        differ depending on the vendor and technologies used in
        the particular cloud. Quantum ships with plug-ins and
        agents for: Cisco virtual and physical switches, the
        Nicira NVP product, NEC OpenFlow products, Open vSwitch,
        Linux bridging and the Ryu Network Operating System.</para>
    </listitem>
    <listitem>
      <para>The common agents are L3 (layer 3), DHCP (dynamic host
        configuration) and the specific plug-in agent.</para>
    </listitem>
    <listitem>
      <para>Most Quantum installations will also make use of a
        messaging queue to route information between the
        quantum-server and various agents, as well as a database
        to store networking state for particular plug-ins.</para>
    </listitem>
  </itemizedlist>
  <para>Quantum will interact mainly with Nova, where it will
    provide networks and connectivity for its instances.</para>
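  <para>A hedged sketch of the "create a network, then attach things
    to it" workflow against quantum-server's v2.0 API is shown
    below. The endpoint and CIDR are illustrative assumptions; the
    actual work is carried out by whichever plug-in and agents the
    server is configured with.</para>
  <programlisting language="python"># Illustrative Network API (v2.0) calls; endpoint and CIDR are assumptions.
import json
import requests

QUANTUM_URL = "http://quantum.example.com:9696/v2.0"
TOKEN = "..."  # issued by Keystone
headers = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}

# Create a network, then give it a subnet that instances can plug into.
net = requests.post(QUANTUM_URL + "/networks", headers=headers,
                    data=json.dumps({"network": {"name": "private",
                                                 "admin_state_up": True}}))
network_id = net.json()["network"]["id"]

requests.post(QUANTUM_URL + "/subnets", headers=headers,
              data=json.dumps({"subnet": {"network_id": network_id,
                                          "ip_version": 4,
                                          "cidr": "10.0.0.0/24"}}))
</programlisting>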
  <para><guilabel>Block Storage</guilabel></para>
  <para>Cinder separates out the persistent block storage
    functionality that was previously part of OpenStack Compute
    (in the form of nova-volume) into its own service. The
    OpenStack Block Storage API allows for manipulation of
    volumes, volume types (similar to compute flavors) and volume
    snapshots.</para>
  <itemizedlist>
    <listitem>
      <para>cinder-api accepts API requests and routes them to
        cinder-volume for action.</para>
    </listitem>
    <listitem>
      <para>cinder-volume acts upon the requests by reading from
        or writing to the Cinder database to maintain state,
        interacting with other processes (like cinder-scheduler)
        through a message queue, and acting directly upon the
        block storage hardware or software that provides the
        volumes. It can interact with a variety of storage
        providers through a driver architecture. Currently, there
        are drivers for IBM, SolidFire, NetApp, Nexenta, Zadara,
        Linux iSCSI and other storage providers.</para>
    </listitem>
    <listitem>
      <para>Much like nova-scheduler, the cinder-scheduler daemon
        picks the optimal block storage provider node to create
        the volume on.</para>
    </listitem>
    <listitem>
      <para>Cinder deployments will also make use of a messaging
        queue to route information between the cinder processes as
        well as a database to store volume state.</para>
    </listitem>
  </itemizedlist>
  <para>Like Quantum, Cinder will mainly interact with Nova,
    providing volumes for its instances.</para>
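  <para>To close the loop, here is a hedged sketch of a volume
    request as it enters that chain: cinder-api accepts the call,
    cinder-scheduler picks a backend over the queue, and
    cinder-volume creates the volume. The endpoint, tenant and names
    are illustrative assumptions; attaching the volume to an
    instance is then done through the Compute API.</para>
  <programlisting language="python"># Illustrative Block Storage API (v1) call; identifiers are assumptions.
import json
import requests

CINDER_URL = "http://cinder.example.com:8776/v1/TENANT_ID"
TOKEN = "..."  # issued by Keystone
headers = {"X-Auth-Token": TOKEN, "Content-Type": "application/json"}

body = {"volume": {"size": 1, "display_name": "scratch-volume"}}
resp = requests.post(CINDER_URL + "/volumes", headers=headers,
                     data=json.dumps(body))
volume = resp.json()["volume"]
print(volume["id"], volume["status"])  # typically "creating" at first
</programlisting>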
</chapter>