Compute

The OpenStack Compute service allows you to control an Infrastructure-as-a-Service (IaaS) cloud computing platform. It gives you control over instances and networks, and allows you to manage access to the cloud through users and projects.

Compute does not include virtualization software. Instead, it defines drivers that interact with underlying virtualization mechanisms that run on your host operating system, and exposes functionality over a web-based API.
System architecture

OpenStack Compute contains several main components:

- The cloud controller represents the global state and interacts with the other components.
- The API server acts as the web services front end for the cloud controller.
- The compute controller provides compute server resources and usually also contains the Compute service.
- The object store is an optional component that provides storage services; you can instead use OpenStack Object Storage.
- The auth manager provides authentication and authorization services when used with the Compute system; you can instead use OpenStack Identity as a separate authentication service.
- The volume controller provides fast and permanent block-level storage for the compute servers.
- The network controller provides virtual networks that enable compute servers to interact with each other and with the public network; you can instead use OpenStack Networking.
- The scheduler selects the most suitable compute controller to host an instance.

Compute uses a messaging-based, shared-nothing architecture. All major components exist on multiple servers, including the compute, volume, and network controllers, and the object store or image service. The state of the entire system is stored in a database. The cloud controller communicates with the internal object store using HTTP, but it communicates with the scheduler, network controller, and volume controller using AMQP (Advanced Message Queuing Protocol). To avoid blocking a component while waiting for a response, Compute uses asynchronous calls, with a callback that is triggered when a response is received.
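You can see how these components map onto separate daemons in a running deployment by listing the registered services. The output below is an illustrative sketch (columns abridged), assuming the nova command-line client is installed, admin credentials are loaded into the environment, and legacy nova-network is in use; host names and the exact service set vary per deployment:

$ nova service-list
+----------------+--------+----------+---------+-------+
| Binary         | Host   | Zone     | Status  | State |
+----------------+--------+----------+---------+-------+
| nova-scheduler | ctrl01 | internal | enabled | up    |
| nova-conductor | ctrl01 | internal | enabled | up    |
| nova-network   | net01  | internal | enabled | up    |
| nova-compute   | cmp01  | nova     | enabled | up    |
+----------------+--------+----------+---------+-------+

Because these services communicate only through the message queue and the database, you can scale a component by starting additional copies of its daemon on more hosts.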
Hypervisors

Compute controls hypervisors through an API server. Selecting the best hypervisor to use can be difficult; you must take budget, resource constraints, supported features, and required technical specifications into account. However, the majority of OpenStack development is done on systems using KVM and Xen-based hypervisors. For a detailed list of features and support across different hypervisors, see http://wiki.openstack.org/HypervisorSupportMatrix. You can also orchestrate clouds using multiple hypervisors in different availability zones. Compute supports the following hypervisors:

- Baremetal
- Docker
- Hyper-V
- Kernel-based Virtual Machine (KVM)
- Linux Containers (LXC)
- Quick Emulator (QEMU)
- User Mode Linux (UML)
- VMware vSphere
- Xen

For more information about hypervisors, see the Hypervisors section in the OpenStack Configuration Reference.
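The hypervisor that a given compute node uses is selected in that node's /etc/nova/nova.conf. A minimal sketch for a KVM node, assuming Juno-era option names (compute_driver in the [DEFAULT] section and virt_type in the [libvirt] section); consult the Configuration Reference for the option names used by your release:

[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = kvm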
Tenants, users, and roles

The Compute system is designed to be used by different consumers in the form of tenants on a shared system, with role-based access assignments. Roles control the actions that a user is allowed to perform.

Tenants are isolated resource containers that form the principal organizational structure within the Compute service. They typically consist of an individual VLAN, together with volumes, instances, images, keys, and users. A user can specify the tenant by appending :project_id to their access key. If no tenant is specified in the API request, Compute attempts to use a tenant with the same ID as the user.

For tenants, you can use quota controls to limit the:

- Number of volumes that can be created.
- Number of processor cores and the amount of RAM that can be allocated.
- Floating IP addresses assigned to any instance when it launches. This allows instances to have the same publicly accessible IP addresses.
- Fixed IP addresses assigned to the same instance when it launches. This allows instances to have the same publicly or privately accessible IP addresses.

Roles control the actions a user is allowed to perform. By default, most actions do not require a particular role, but you can configure them by editing the policy.json file for user roles. For example, a rule can be defined so that a user must have the admin role in order to be able to allocate a public IP address (see the sketch below). A tenant limits users' access to particular images. Each user is assigned a user name and password. Keypairs granting access to an instance are enabled for each user, but quotas are set so that each tenant can control resource consumption across available hardware resources.

Earlier versions of OpenStack used the term project instead of tenant. Because of this legacy terminology, some command-line tools use --project_id where you would normally expect to enter a tenant ID.
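As a sketch of the two mechanisms described above: in /etc/nova/policy.json, a rule such as the following reserves floating IP allocation for the admin role (the rule key shown is illustrative; confirm the exact key against the policy.json shipped with your release):

"network:allocate_floating_ip": "role:admin"

Per-tenant quotas can then be adjusted with the nova client, where TENANT_ID is a placeholder:

$ nova quota-update --floating-ips 10 --cores 20 TENANT_ID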
Block storage

OpenStack provides two classes of block storage: ephemeral storage and persistent volumes. Volumes are persistent virtualized block devices independent of any particular instance.

Ephemeral storage is associated with a single unique instance, and it exists only for the life of that instance. The amount of ephemeral storage is defined by the flavor of the instance. Generally, the root file system for an instance is stored on ephemeral storage. It persists across reboots of the guest operating system, but when the instance is deleted, the ephemeral storage is also removed.

In addition to the ephemeral root volume, all flavors except the smallest, m1.tiny, provide an additional ephemeral block device of between 20 and 160 GB. These sizes can be configured to suit your environment. The device is presented as a raw block device with no partition table or file system. Cloud-aware operating system images can discover, format, and mount these storage devices. For example, the cloud-init package included in Ubuntu's stock cloud images formats this space as an ext3 file system and mounts it on /mnt. This is a feature of the guest operating system you are using, not an OpenStack mechanism. OpenStack only provisions the raw storage.

Persistent volumes are created by users, and their size is limited only by the user's quota and availability limits. Upon initial creation, volumes are raw block devices without a partition table or a file system. To partition or format volumes, you must attach them to an instance. Once they are attached to an instance, you can use persistent volumes in much the same way as you would use an external hard disk drive. You can attach a volume to only one instance at a time, although you can detach and reattach volumes to as many different instances as you like.

You can configure persistent volumes as bootable and use them to provide a persistent virtual instance similar to traditional non-cloud-based virtualization systems. Typically, the resulting instance can also still have ephemeral storage, depending on the flavor selected, but the root file system can be on the persistent volume and its state maintained even if the instance is shut down. For more information about this type of configuration, see the OpenStack Configuration Reference.

Persistent volumes do not provide concurrent access from multiple instances. That type of configuration requires a traditional network file system like NFS or CIFS, or a cluster file system such as GlusterFS. These systems can be built within an OpenStack cluster or provisioned outside of it, but OpenStack software does not provide these features.
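A typical create-attach-format workflow for a persistent volume, sketched with the cinder and nova command-line clients. The volume name, size, server name, volume ID, and device path are placeholders, and the --display-name flag reflects the older cinder client (newer releases use --name):

$ cinder create --display-name my-data 10
$ nova volume-attach test-server VOLUME_ID /dev/vdb

Inside the guest, the new device is raw, so you format and mount it there, for example:

# mkfs.ext4 /dev/vdb
# mount /dev/vdb /mnt/data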
EC2 compatibility API

In addition to the native compute API, OpenStack provides an EC2-compatible API. This API allows legacy workflows built for EC2 to work with OpenStack. The OpenStack Configuration Reference lists configuration options for customizing this compatibility API on your OpenStack cloud.

Numerous third-party tools and language-specific SDKs can be used to interact with OpenStack clouds, using both native and compatibility APIs. Some of the more popular third-party tools are:

- Euca2ools: A popular open source command-line tool for interacting with the EC2 API. This is convenient for multi-cloud environments where EC2 is the common API, or for transitioning from EC2-based clouds to OpenStack. For more information, see the euca2ools site.
- Hybridfox: A Firefox browser add-on that provides a graphical interface to many popular public and private cloud technologies, including OpenStack. For more information, see the hybridfox site.
- boto: A Python library for interacting with Amazon Web Services. It can be used to access OpenStack through the EC2 compatibility API. For more information, see the boto project page on GitHub.
- fog: A Ruby cloud services library. It provides methods for interacting with a large number of cloud and virtualization platforms, including OpenStack. For more information, see the fog site.
- php-opencloud: A PHP SDK designed to work with most OpenStack-based cloud deployments, as well as Rackspace public cloud. For more information, see the php-opencloud site.
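A sketch of pointing euca2ools at the compatibility API. The endpoint URL and credentials are placeholders; the EC2 API traditionally listens on port 8773, and the access and secret keys come from the EC2 credentials issued by your cloud:

$ export EC2_URL=http://controller:8773/services/Cloud
$ export EC2_ACCESS_KEY=<access-key>
$ export EC2_SECRET_KEY=<secret-key>
$ euca-describe-images
$ euca-run-instances ami-00000001 -k mykey -t m1.small

The ami-style image ID and the mykey keypair are also placeholders; euca-describe-images shows the EC2-style IDs that the compatibility layer assigns to your images.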
Building blocks

In OpenStack, the base operating system is usually copied from an image stored in the OpenStack Image Service. This is the most common case and results in an ephemeral instance that starts from a known template state and loses all accumulated state on virtual machine deletion. It is also possible to put an operating system on a persistent volume in the Cinder volume system. This gives a more traditional, persistent system that accumulates state, which is preserved on the Cinder volume across the deletion and re-creation of the virtual machine.

To get a list of available images on your system, run:

$ nova image-list
+--------------------------------------+-------------------------------+--------+--------------------------------------+
| ID                                   | Name                          | Status | Server                               |
+--------------------------------------+-------------------------------+--------+--------------------------------------+
| aee1d242-730f-431f-88c1-87630c0f07ba | Ubuntu 14.04 cloudimg amd64   | ACTIVE |                                      |
| 0b27baa1-0ca6-49a7-b3f4-48388e440245 | Ubuntu 14.10 cloudimg amd64   | ACTIVE |                                      |
| df8d56fc-9cea-4dfd-a8d3-28764de3cb08 | jenkins                       | ACTIVE |                                      |
+--------------------------------------+-------------------------------+--------+--------------------------------------+

The displayed image attributes are:

- ID: Automatically generated UUID of the image.
- Name: Free-form, human-readable name for the image.
- Status: The status of the image. Images marked ACTIVE are available for use.
- Server: For images that are created as snapshots of running instances, this is the UUID of the instance the snapshot derives from. For uploaded images, this field is blank.

Virtual hardware templates are called flavors. The default installation provides five flavors. By default, these are configurable by admin users; however, that behavior can be changed by redefining the access controls for compute_extension:flavormanage in /etc/nova/policy.json on the compute-api server.

To get a list of flavors that are available on your system, run:

$ nova flavor-list
+----+-----------+-----------+------+-----------+------+-------+-------------+
| ID | Name      | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor |
+----+-----------+-----------+------+-----------+------+-------+-------------+
| 1  | m1.tiny   | 512       | 1    | N/A       | 0    | 1     |             |
| 2  | m1.small  | 2048      | 20   | N/A       | 0    | 1     |             |
| 3  | m1.medium | 4096      | 40   | N/A       | 0    | 2     |             |
| 4  | m1.large  | 8192      | 80   | N/A       | 0    | 4     |             |
| 5  | m1.xlarge | 16384     | 160  | N/A       | 0    | 8     |             |
+----+-----------+-----------+------+-----------+------+-------+-------------+
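An image and a flavor come together when you boot an instance. A sketch using the image listing above; the keypair mykey and the server name test-server are placeholders that must exist in, or be chosen for, your environment:

$ nova boot --image "Ubuntu 14.04 cloudimg amd64" --flavor m1.small --key-name mykey test-server
$ nova list

nova boot accepts either the image name or its UUID; nova list then shows the new instance as it moves from BUILD to ACTIVE.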
Compute service architecture

The following basic categories describe the service architecture and what happens within the cloud controller.

API server

At the heart of the cloud framework is an API server, which makes command and control of the hypervisor, storage, and networking programmatically available to users. The API endpoints are basic HTTP web services that handle authentication, authorization, and basic command and control functions using various API interfaces under the Amazon, Rackspace, and related models. This enables API compatibility with multiple existing tool sets created for interaction with offerings from other vendors. This broad compatibility prevents vendor lock-in.

Message queue

A messaging queue brokers the interaction between compute nodes (processing), the networking controllers (software that controls network infrastructure), API endpoints, the scheduler (which determines which physical hardware to allocate to a virtual resource), and similar components. Communication to and from the cloud controller is handled by HTTP requests through multiple API endpoints.

A typical message-passing event begins with the API server receiving a request from a user. The API server authenticates the user and ensures that the user is permitted to issue the command. The availability of objects implicated in the request is evaluated and, if available, the request is routed to the queuing engine for the relevant workers. Workers continually listen to the queue based on their role and, in some cases, their host name. When an applicable work request arrives on the queue, the worker takes assignment of the task and begins executing it. Upon completion, a response is dispatched to the queue, which is received by the API server and relayed to the originating user. Database entries are queried, added, or removed as necessary throughout the process.

Compute worker

Compute workers manage computing instances on host machines. The API dispatches commands to compute workers to complete these tasks:

- Run instances
- Terminate instances
- Reboot instances
- Attach volumes
- Detach volumes
- Get console output

Network Controller

The Network Controller manages the networking resources on host machines. The API server dispatches commands through the message queue, which are subsequently processed by Network Controllers. Specific operations include:

- Allocating fixed IP addresses
- Configuring VLANs for projects
- Configuring networks for compute nodes
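If your deployment uses RabbitMQ as the AMQP broker (a common choice), you can observe the per-role and per-host queues described above directly on the broker. A sketch, run as root on the message queue host, with illustrative output:

# rabbitmqctl list_queues name messages | grep ^compute
compute            0
compute.cmp01      0

Compute workers listen on both a shared role queue (compute) and a host-specific queue (compute.<host name>), which is how a request can be directed either to any available worker or to one particular host.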