Get started with OpenStack
The OpenStack project is an open source cloud computing platform for all types of clouds, which aims to be simple to implement, massively scalable, and feature rich. Developers and cloud computing technologists from around the world create the OpenStack project. OpenStack provides an Infrastructure as a Service (IaaS) solution through a set of interrelated services. Each service offers an application programming interface (API) that facilitates integration with the other services.
OpenStack architecture
The following table describes the OpenStack services that make up the OpenStack architecture:
OpenStack services
Service | Project name | Description
Dashboard | Horizon | Enables users to interact with all OpenStack services to launch an instance, assign IP addresses, set access controls, and so on.
Identity Service | Keystone | Provides authentication and authorization for all the OpenStack services. Also provides a service catalog within a particular OpenStack cloud.
Compute Service | Nova | Provisions and manages large networks of virtual machines on demand.
Object Storage Service | Swift | Stores and retrieves files. Does not mount directories like a file server.
Block Storage Service | Cinder | Provides persistent block storage to guest virtual machines.
Image Service | Glance | Provides a registry of virtual machine images. Compute Service uses it to provision instances.
Networking Service | Neutron | Enables network connectivity as a service among interface devices managed by other OpenStack services, usually Compute Service. Enables users to create and attach interfaces to networks. Has a pluggable architecture that supports many popular networking vendors and technologies.
Metering/Monitoring Service | Ceilometer | Monitors and meters the OpenStack cloud for billing, benchmarking, scalability, and statistics purposes.
Orchestration Service | Heat | Orchestrates multiple composite cloud applications by using the AWS CloudFormation template format, through both an OpenStack-native REST API and a CloudFormation-compatible Query API.
Conceptual architecture
The following diagram shows the relationships among the OpenStack services:
Logical architecture
To design, install, and configure a cloud, cloud administrators must understand the logical architecture. OpenStack modules are one of the following types:
Daemon. Runs as a daemon. On Linux platforms, a daemon is usually installed as a service.
Script. Installs a virtual environment and runs tests. For example, the run_tests.sh script installs a virtual environment for a service and then may also run tests to verify that the virtual environment functions correctly.
Command-line interface (CLI). Enables users to submit API calls to OpenStack services through easy-to-use commands.
The following diagram shows the most common, but not the only, architecture for an OpenStack cloud:
OpenStack logical architecture
As in the conceptual architecture, end users can interact through the dashboard, CLIs, and APIs. All services authenticate through a common Identity Service and individual services interact with each other through public APIs, except where privileged administrator commands are necessary.
OpenStack services
This section describes OpenStack services in detail.
Dashboard
The dashboard is a modular Django web application that provides a graphical interface to OpenStack services. The dashboard is usually deployed through mod_wsgi in Apache. You can modify the dashboard code to make it suitable for different sites.
From a network architecture point of view, this service must be accessible to customers and the public API for each OpenStack service. To use the administrator functionality for other services, it must also connect to Admin API endpoints, which should not be accessible by customers.
Identity Service
The Identity Service is an OpenStack project that provides identity, token, catalog, and policy services to OpenStack projects. It consists of:
keystone-all. Starts both the service and administrative APIs in a single process to provide Catalog, Authorization, and Authentication services for OpenStack.
Identity Service functions. Each has a pluggable backend that allows different ways to use the particular service. Most support standard backends like LDAP or SQL, as well as key-value stores (KVS).
The Identity Service is mostly used to customize authentication services.
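As a concrete illustration, the following minimal sketch requests a token and service catalog from the Identity Service v2.0 REST API with the Python requests library. The endpoint URL, tenant name, and credentials are placeholder values for illustration only, not part of any particular deployment.

import requests

# Assumed public Identity Service endpoint; replace with your own.
KEYSTONE_URL = "http://controller:5000/v2.0"

# Password-based authentication request for the v2.0 API.
payload = {
    "auth": {
        "tenantName": "demo",
        "passwordCredentials": {"username": "demo", "password": "secret"},
    }
}

resp = requests.post(KEYSTONE_URL + "/tokens", json=payload)
resp.raise_for_status()
access = resp.json()["access"]

token = access["token"]["id"]        # sent as X-Auth-Token to other services
catalog = access["serviceCatalog"]   # endpoints for Compute, Image, and so on
print(token)
print([service["type"] for service in catalog])

The same flow underlies the CLI clients and the dashboard: they authenticate once, then reuse the token and service catalog for subsequent API calls.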
Compute Service
The Compute Service is a cloud computing fabric controller, the main part of an IaaS system. It can be used for hosting and managing cloud computing systems. The main modules are implemented in Python.
The Compute Service is made up of the following functional areas and their underlying components:
API
nova-api service. Accepts and responds to end user compute API calls. Supports the OpenStack Compute API, the Amazon EC2 API, and a special Admin API for privileged users to perform administrative actions. Also initiates most orchestration activities, such as running an instance, and enforces some policies.
nova-api-metadata service. Accepts metadata requests from instances. The nova-api-metadata service is generally used only when you run in multi-host mode with nova-network installations. For details, see Metadata service.
Compute core
nova-compute process. A worker daemon that creates and terminates virtual machine instances through hypervisor APIs. For example, XenAPI for XenServer/XCP, libvirt for KVM or QEMU, VMwareAPI for VMware, and so on. The process by which it does so is fairly complex, but the basics are simple: it accepts actions from the queue and performs a series of system commands, such as launching a KVM instance, to carry them out while updating state in the database.
nova-scheduler process. Conceptually the simplest piece of code in Compute. Takes a virtual machine instance request from the queue and determines on which compute server host it should run.
nova-conductor module. Mediates interactions between nova-compute and the database. Aims to eliminate direct accesses to the cloud database made by nova-compute. The nova-conductor module scales horizontally. However, do not deploy it on any nodes where nova-compute runs. For more information, see A new Nova service: nova-conductor.
Networking for VMs
nova-network worker daemon. Similar to nova-compute, it accepts networking tasks from the queue and performs tasks to manipulate the network, such as setting up bridging interfaces or changing iptables rules. This functionality is being migrated to OpenStack Networking, which is a separate OpenStack service.
nova-dhcpbridge script. Tracks IP address leases and records them in the database by using the dnsmasq dhcp-script facility. This functionality is being migrated to OpenStack Networking, which provides a different script.
Console interface
nova-consoleauth daemon. Authorizes tokens for users that console proxies provide. See nova-novncproxy and nova-xvpvncproxy. This service must be running for console proxies to work. Many proxies of either type can be run against a single nova-consoleauth service in a cluster configuration. For information, see About nova-consoleauth.
nova-novncproxy daemon. Provides a proxy for accessing running instances through a VNC connection. Supports browser-based novnc clients.
nova-console daemon. Deprecated as of Grizzly. Instead, nova-xvpvncproxy is used.
nova-xvpvncproxy daemon. Provides a proxy for accessing running instances through a VNC connection. Supports a Java client specifically designed for OpenStack.
nova-cert daemon. Manages x509 certificates.
Image Management (EC2 scenario)
nova-objectstore daemon. Provides an S3 interface for registering images with the Image Service. Mainly used for installations that must support euca2ools. The euca2ools tools talk to nova-objectstore in S3 language, and nova-objectstore translates S3 requests into Image Service requests.
euca2ools client. A set of command-line interpreter commands for managing cloud resources. Though not an OpenStack module, you can configure nova-api to support this EC2 interface. For more information, see the Eucalyptus 2.0 Documentation.
Command-line interfaces
nova client. Enables users to submit commands as a tenant administrator or end user.
nova-manage client. Enables cloud administrators to submit commands.
Other components
The queue. A central hub for passing messages between daemons. Usually implemented with RabbitMQ, but could be any AMQP message queue, such as Apache Qpid or ZeroMQ.
SQL database. Stores most build-time and runtime states for a cloud infrastructure, including instance types that are available for use, instances in use, available networks, and projects. Theoretically, OpenStack Compute can support any database that SQLAlchemy supports, but the only databases widely used are SQLite3 (only appropriate for test and development work), MySQL, and PostgreSQL.
The Compute Service interacts with other OpenStack services: Identity Service for authentication, Image Service for images, and the OpenStack Dashboard for a web interface.
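To make the request flow concrete, here is a hedged sketch of talking to nova-api over the Compute v2 REST API with the Python requests library. The endpoint, tenant ID, token, image, and flavor values are placeholders; in practice they come from the Identity Service catalog and your cloud's image and flavor lists.

import requests

NOVA_URL = "http://controller:8774/v2/TENANT_ID"  # assumed Compute endpoint
HEADERS = {"X-Auth-Token": "TOKEN_FROM_KEYSTONE"}

# List existing instances for the tenant.
servers = requests.get(NOVA_URL + "/servers", headers=HEADERS).json()["servers"]
print([server["name"] for server in servers])

# Ask for a new instance: nova-api validates the request, nova-scheduler picks
# a host from the queue, and nova-compute launches the VM via the hypervisor API.
boot_request = {
    "server": {
        "name": "demo-instance",
        "imageRef": "IMAGE_UUID",  # placeholder image from the Image Service
        "flavorRef": "1",          # placeholder flavor ID
    }
}
resp = requests.post(NOVA_URL + "/servers", headers=HEADERS, json=boot_request)
print(resp.status_code, resp.json())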
Object Storage Service
The Object Storage Service is a highly scalable and durable multi-tenant object storage system for large amounts of unstructured data at low cost through a RESTful HTTP API. It includes the following components:
swift-proxy-server. Accepts Object Storage API and raw HTTP requests to upload files, modify metadata, and create containers. It also serves file or container listings to web browsers. To improve performance, the proxy server can use an optional cache, usually deployed with memcache.
Account servers. Manage accounts defined with the Object Storage Service.
Container servers. Manage a mapping of containers, or folders, within the Object Storage Service.
Object servers. Manage actual objects, such as files, on the storage nodes.
A number of periodic processes. Perform housekeeping tasks on the large data store. The replication services ensure consistency and availability through the cluster. Other periodic processes include auditors, updaters, and reapers.
Configurable WSGI middleware, which is usually the Identity Service, handles authentication.
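The following is a minimal sketch of the Object Storage REST API in action: create a container, upload an object, and list the container with the Python requests library. The storage URL and token are placeholders that would normally come from the Identity Service catalog.

import requests

SWIFT_URL = "http://controller:8080/v1/AUTH_TENANT_ID"  # assumed storage URL
HEADERS = {"X-Auth-Token": "TOKEN_FROM_KEYSTONE"}

# Create a container, then upload an object into it.
requests.put(SWIFT_URL + "/demo-container", headers=HEADERS)
requests.put(SWIFT_URL + "/demo-container/hello.txt",
             headers=HEADERS, data=b"hello object storage")

# List the objects in the container.
listing = requests.get(SWIFT_URL + "/demo-container", headers=HEADERS)
print(listing.text)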
Block Storage Service
The Block Storage Service enables management of volumes, volume snapshots, and volume types. It includes the following components:
cinder-api. Accepts API requests and routes them to cinder-volume for action.
cinder-volume. Responds to requests to read from and write to the Block Storage database to maintain state, interacts with other processes (like cinder-scheduler) through a message queue, and works directly with the block storage hardware or software. It can interact with a variety of storage providers through a driver architecture.
cinder-scheduler daemon. Like the nova-scheduler, picks the optimal block storage provider node on which to create the volume.
Messaging queue. Routes information between the Block Storage Service processes and a database, which stores volume state.
The Block Storage Service interacts with Compute to provide volumes for instances.
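As a sketch of how a client drives this chain, the example below posts a volume-create request to cinder-api over the Block Storage v2 REST API; cinder-scheduler then picks a backend and cinder-volume creates the volume. The endpoint, tenant ID, and token are placeholders.

import requests

CINDER_URL = "http://controller:8776/v2/TENANT_ID"  # assumed Block Storage endpoint
HEADERS = {"X-Auth-Token": "TOKEN_FROM_KEYSTONE"}

# Request a 1 GB volume; the driver behind cinder-volume does the actual work.
volume_request = {"volume": {"size": 1, "name": "demo-volume"}}
resp = requests.post(CINDER_URL + "/volumes", headers=HEADERS, json=volume_request)

volume = resp.json()["volume"]
print(volume["id"], volume["status"])  # status is typically "creating" at first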
Image Service
The Image Service includes the following components:
glance-api. Accepts Image API calls for image discovery, retrieval, and storage.
glance-registry. Stores, processes, and retrieves metadata about images. Metadata includes size, type, and so on.
Database. Stores image metadata. You can choose your database depending on your preference. Most deployments use MySQL or SQLite.
Storage repository for image files. In many deployments, the Object Storage Service is the image repository. However, you can configure a different repository. The Image Service supports normal filesystems, RADOS block devices, Amazon S3, and HTTP. Some of these choices are limited to read-only usage.
A number of periodic processes run on the Image Service to support caching. Replication services ensure consistency and availability through the cluster. Other periodic processes include auditors, updaters, and reapers.
As shown in the conceptual architecture, the Image Service is central to the overall IaaS picture. It accepts API requests for images or image metadata from end users or Compute components and can store its disk files in the Object Storage Service.
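As an illustration, the short sketch below queries glance-api for registered images, which is essentially what Compute does when it needs an image to provision an instance. The endpoint and token are placeholders.

import requests

GLANCE_URL = "http://controller:9292"  # assumed Image Service endpoint
HEADERS = {"X-Auth-Token": "TOKEN_FROM_KEYSTONE"}

# List images known to the Image Service, with a few metadata fields.
resp = requests.get(GLANCE_URL + "/v2/images", headers=HEADERS)
for image in resp.json()["images"]:
    print(image["name"], image.get("size"), image["status"])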
Networking Service
The Networking Service provides network-connectivity-as-a-service between interface devices that are managed by other OpenStack services, usually Compute. It enables users to create and attach interfaces to networks. Like many OpenStack services, OpenStack Networking is highly configurable due to its plug-in architecture. These plug-ins accommodate different networking equipment and software. Consequently, the architecture and deployment vary dramatically. It includes the following components:
neutron-server. Accepts and routes API requests to the appropriate OpenStack Networking plug-in for action.
OpenStack Networking plug-ins and agents. Plug and unplug ports, create networks or subnets, and provide IP addressing. These plug-ins and agents differ depending on the vendor and technologies used in the particular cloud. OpenStack Networking ships with plug-ins and agents for Cisco virtual and physical switches, the Nicira NVP product, NEC OpenFlow products, Open vSwitch, Linux bridging, and the Ryu Network Operating System. The common agents are L3 (layer 3), DHCP (dynamic host IP addressing), and a plug-in agent.
Messaging queue. Most OpenStack Networking installations make use of a messaging queue to route information between the neutron-server and various agents, as well as a database to store networking state for particular plug-ins.
OpenStack Networking interacts mainly with OpenStack Compute, where it provides networks and connectivity for its instances.
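The sketch below creates a network and a subnet through neutron-server's v2.0 REST API; the configured plug-in and agents then realize them on the underlying networking technology. The endpoint, token, and CIDR are placeholder values.

import requests

NEUTRON_URL = "http://controller:9696/v2.0"  # assumed Networking endpoint
HEADERS = {"X-Auth-Token": "TOKEN_FROM_KEYSTONE"}

# Create a network; neutron-server routes the call to the active plug-in.
net = requests.post(NEUTRON_URL + "/networks", headers=HEADERS,
                    json={"network": {"name": "demo-net", "admin_state_up": True}})
network_id = net.json()["network"]["id"]

# Attach an IPv4 subnet so instances plugged into the network get addresses.
requests.post(NEUTRON_URL + "/subnets", headers=HEADERS,
              json={"subnet": {"network_id": network_id,
                               "ip_version": 4,
                               "cidr": "10.0.0.0/24"}})
print("created network", network_id)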
Metering/Monitoring Service
The Metering Service is designed to:
Efficiently collect the metering data about the CPU and network costs.
Collect data by monitoring notifications sent from services or by polling the infrastructure.
Configure the type of collected data to meet various operating requirements.
Access and insert the metering data through the REST API.
Expand the framework to collect custom usage data by additional plug-ins.
Produce signed metering messages that cannot be repudiated.
The system consists of the following basic components:
A compute agent. Runs on each compute node and polls for resource utilization statistics. There may be other types of agents in the future, but for now the focus is on the compute agent.
A central agent. Runs on a central management server to poll for resource utilization statistics for resources not tied to instances or compute nodes.
A collector. Runs on one or more central management servers to monitor the message queues (for notifications and for metering data coming from the agent). Notification messages are processed, turned into metering messages, and sent back out onto the message bus using the appropriate topic. Metering messages are written to the data store without modification.
A data store. A database capable of handling concurrent writes (from one or more collector instances) and reads (from the API server).
An API server. Runs on one or more central management servers to provide access to the data from the data store.
These services communicate by using the standard OpenStack messaging bus. Only the collector and API server have access to the data store.
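As a rough sketch of reading data back out of the Metering Service, the example below queries the API server for known meters and for statistics on one of them. The endpoint, token, and meter name are assumptions; the meters actually available depend on which agents are running.

import requests

CEILOMETER_URL = "http://controller:8777/v2"  # assumed Metering endpoint
HEADERS = {"X-Auth-Token": "TOKEN_FROM_KEYSTONE"}

# List the meters the collector has recorded so far.
meters = requests.get(CEILOMETER_URL + "/meters", headers=HEADERS).json()
print(sorted({meter["name"] for meter in meters}))

# Ask for aggregate statistics on an assumed meter name.
stats = requests.get(CEILOMETER_URL + "/meters/cpu_util/statistics",
                     headers=HEADERS).json()
for bucket in stats:
    print(bucket["avg"], bucket["max"])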
Orchestration Service
The Orchestration Service provides template-based orchestration for describing a cloud application; it runs OpenStack API calls to generate running cloud applications. The software integrates other core components of OpenStack into a one-file template system. The templates enable you to create most OpenStack resource types, such as instances, floating IPs, volumes, security groups, users, and so on. The service also provides more advanced functionality, such as instance high availability, instance auto-scaling, and nested stacks. This tight integration with other OpenStack core projects could give all of those projects a larger user base. The service enables deployers to integrate with the Orchestration Service directly or through custom plug-ins.
The Orchestration Service consists of the following components:
heat tool. A CLI that communicates with the heat-api to run AWS CloudFormation APIs. End developers can also use the heat REST API directly.
heat-api component. Provides an OpenStack-native REST API that processes API requests by sending them to the heat-engine over RPC.
heat-api-cfn component. Provides an AWS Query API that is compatible with AWS CloudFormation and processes API requests by sending them to the heat-engine over RPC.
heat-engine. Orchestrates the launching of templates and provides events back to the API consumer.
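The sketch below submits a minimal CloudFormation-style template to heat-api's stack-create call, which hands it to the heat-engine over RPC. The endpoint, tenant ID, token, image, and flavor are placeholders, and the template is deliberately trivial.

import requests

HEAT_URL = "http://controller:8004/v1/TENANT_ID"  # assumed Orchestration endpoint
HEADERS = {"X-Auth-Token": "TOKEN_FROM_KEYSTONE"}

# A minimal CloudFormation-format template with a single instance resource.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "DemoInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"ImageId": "IMAGE_UUID", "InstanceType": "m1.small"},
        }
    },
}

stack_request = {"stack_name": "demo-stack", "template": template, "timeout_mins": 10}
resp = requests.post(HEAT_URL + "/stacks", headers=HEADERS, json=stack_request)
print(resp.status_code, resp.json())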
Feedback
To provide feedback on documentation, join and use the openstack-docs@lists.openstack.org mailing list at OpenStack Documentation Mailing List, or report a bug.