diff --git a/doc/admin-guide-cloud/ch_identity_mgmt.xml b/doc/admin-guide-cloud/ch_identity_mgmt.xml index 64aef47a16..d40caf417b 100644 --- a/doc/admin-guide-cloud/ch_identity_mgmt.xml +++ b/doc/admin-guide-cloud/ch_identity_mgmt.xml @@ -11,7 +11,12 @@ (etc/keystone.conf), possibly a separate logging configuration file, and initializing data into keystone using the command line client. - +
+ Identity Service Concepts + + + +
User CRUD Keystone provides a user CRUD filter that can be added to @@ -58,23 +63,23 @@ pipeline = stats_monitoring url_normalize token_auth admin_token_auth xml_body j nova.conf. For example in Compute, you can remove the middleware parameters from api-paste.ini, as follows: - [filter:authtoken] + [filter:authtoken] paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory And set the following values in nova.conf, as follows: - [DEFAULT] -... + [DEFAULT] +... auth_strategy=keystone - -[keystone_authtoken] -auth_host = 127.0.0.1 + +[keystone_authtoken] +auth_host = 127.0.0.1 auth_port = 35357 -auth_protocol = http +auth_protocol = http auth_uri = http://127.0.0.1:5000/ -admin_user = admin +admin_user = admin admin_password = SuperSekretPassword -admin_tenant_name = service +admin_tenant_name = service Middleware parameters in paste config take priority. You must remove them to use values in [keystone_authtoken] diff --git a/doc/common/ch_getstart.xml b/doc/common/ch_getstart.xml index 09eeefc642..5d6cffcb60 100644 --- a/doc/common/ch_getstart.xml +++ b/doc/common/ch_getstart.xml @@ -14,704 +14,24 @@ solution through a set of interrelated services. Each service offers an application programming interface (API) that facilitates this integration. -
- OpenStack architecture - The following table describes the OpenStack services that - make up the OpenStack architecture: - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
OpenStack services
ServiceProject nameDescription
DashboardHorizonEnables users to interact with all OpenStack services to - launch an instance, assign IP addresses, set access - controls, and so on.
Identity ServiceKeystoneProvides authentication and authorization for all the - OpenStack services. Also provides a service catalog within - a particular OpenStack cloud.
Compute ServiceNovaProvisions and manages large networks of virtual - machines on demand.
Object Storage ServiceSwiftStores and retrieve files. Does not mount directories - like a file server.
Block Storage ServiceCinderProvides persistent block storage to guest virtual - machines.
Image ServiceGlanceProvides a registry of virtual machine images. Compute - Service uses it to provision instances.
Networking ServiceNeutronEnables network connectivity as a service among - interface devices managed by other OpenStack services, - usually Compute Service. Enables users to create and - attach interfaces to networks. Has a pluggable - architecture that supports many popular networking vendors - and technologies.
Metering/Monitoring ServiceCeilometerMonitors and meters the OpenStack cloud for billing, - benchmarking, scalability, and statistics purposes.
Orchestration ServiceHeatOrchestrates multiple composite cloud applications by - using the AWS CloudFormation template format, through both - an OpenStack-native REST API and a - CloudFormation-compatible Query API.
- -
- Conceptual architecture - The following diagram shows the relationships among the - OpenStack services: - - - - - - - -
- -
- Logical architecture - To design, install, and configure a cloud, cloud - administrators must understand the logical - architecture. - OpenStack modules are one of the following types: - - - Daemon. Runs as a daemon. On Linux platforms, it's - usually installed as a service. - - - Script. Runs installation and tests of a virtual - environment. For example, a script called - run_tests.sh installs a virtual environment - for a service and then may also run tests to verify that - virtual environment functions well. - - - Command-line interface (CLI). Enables users to submit - API calls to OpenStack services through easy-to-use - commands. - - - The following diagram shows the most common, but not the - only, architecture for an OpenStack cloud: - -
- OpenStack logical architecture - - - - - -
- As in the conceptual architecture, end users can interact - through the dashboard, CLIs, and APIs. All services - authenticate through a common Identity Service and individual - services interact with each other through public APIs, except - where privileged administrator commands are necessary. -
-
+
OpenStack services This section describes OpenStack services in detail. -
- Dashboard - The dashboard is a modular Django web - application that provides a graphical interface to - OpenStack services. - - - - - - - - The dashboard is usually deployed through mod_wsgi in Apache. You can modify the dashboard - code to make it suitable for different sites. - From a network architecture point of view, this service - must be accessible to customers and the public API for each - OpenStack service. To use the administrator functionality for - other services, it must also connect to Admin API endpoints, - which should not be accessible by customers. -
-
- Identity Service - The Identity Service is an OpenStack project that provides - identity, token, catalog, and policy services to OpenStack - projects. It consists of: - - - keystone-all. - Starts both the service and administrative APIs in a - single process to provide Catalog, Authorization, and - Authentication services for OpenStack. - - - Identity Service functions. Each has a pluggable back - end that allows different ways to use the particular - service. Most support standard back ends like LDAP or - SQL. - - - The Identity Service is mostly used to customize - authentication services. -
+ + -
- Compute Service - The Compute Service is a cloud computing fabric - controller, the main part of an IaaS system. It can be used - for hosting and managing cloud computing systems. The main - modules are implemented in Python. - The Compute Service is made up of the following functional - areas and their underlying components: - - API - - nova-api - service. Accepts and responds to end user compute API - calls. Supports the OpenStack Compute API, the Amazon EC2 - API, and a special Admin API for privileged users to - perform administrative actions. Also, initiates most - orchestration activities, such as running an instance, and - enforces some policies. - - - nova-api-metadata service. Accepts - metadata requests from instances. The nova-api-metadata service - is generally only used when you run in multi-host mode - with nova-network - installations. For details, see Metadata service in the Cloud Administrator Guide. - - - - Compute core - - nova-compute - process. A worker daemon that creates and terminates - virtual machine instances through hypervisor APIs. For - example, XenAPI for XenServer/XCP, libvirt for KVM or - QEMU, VMwareAPI for VMware, and so on. The process by - which it does so is fairly complex but the basics are - simple: Accept actions from the queue and perform a series - of system commands, like launching a KVM instance, to - carry them out while updating state in the - database. - - - nova-scheduler process. Conceptually the - simplest piece of code in Compute. Takes a virtual machine - instance request from the queue and determines on which - compute server host it should run. - - - nova-conductor module. Mediates - interactions between nova-compute and the database. Aims to - eliminate direct accesses to the cloud database made by - nova-compute. - The nova-conductor module scales horizontally. - However, do not deploy it on any nodes where nova-compute runs. For more - information, see A new Nova service: nova-conductor. - - - - Networking for VMs - - nova-network - worker daemon. Similar to nova-compute, it accepts networking tasks - from the queue and performs tasks to manipulate the - network, such as setting up bridging interfaces or - changing iptables rules. This functionality is being - migrated to OpenStack Networking, which is a separate - OpenStack service. - - - nova-dhcpbridge script. Tracks IP address - leases and records them in the database by using the - dnsmasq dhcp-script facility. This - functionality is being migrated to OpenStack Networking. - OpenStack Networking provides a different script. - - - - - Console interface - - nova-consoleauth daemon. Authorizes tokens - for users that console proxies provide. See nova-novncproxy and - nova-xvpnvcproxy. This service must be - running for console proxies to work. Many proxies of - either type can be run against a single nova-consoleauth service in - a cluster configuration. For information, see About nova-consoleauth. - - - nova-novncproxy daemon. Provides a proxy - for accessing running instances through a VNC connection. - Supports browser-based novnc clients. - - - nova-console - daemon. Deprecated for use with Grizzly. Instead, the - nova-xvpnvncproxy is used. - - - nova-xvpnvncproxy daemon. A proxy for - accessing running instances through a VNC connection. - Supports a Java client specifically designed for - OpenStack. - - - nova-cert - daemon. Manages x509 certificates. - - - - Image Management (EC2 scenario) - - nova-objectstore daemon. 
Provides an S3 - interface for registering images with the Image Service. - Mainly used for installations that must support euca2ools. - The euca2ools tools talk to nova-objectstore in S3 language, and nova-objectstore translates - S3 requests into Image Service requests. - - - euca2ools client. A set of command-line interpreter - commands for managing cloud resources. Though not an - OpenStack module, you can configure nova-api to support this - EC2 interface. For more information, see the Eucalyptus 2.0 Documentation. - - - - Command Line Interpreter/Interfaces - - nova client. Enables users to submit commands as a - tenant administrator or end user. - - - nova-manage client. Enables cloud administrators to - submit commands. - - - - Other components - - The queue. A central hub for passing messages between - daemons. Usually implemented with RabbitMQ, - but could be any AMPQ message queue, such as Apache Qpid) - or Zero - MQ. - - - SQL database. Stores most build-time and runtime - states for a cloud infrastructure. Includes instance types - that are available for use, instances in use, available - networks, and projects. Theoretically, OpenStack Compute - can support any database that SQL-Alchemy supports, but - the only databases widely used are sqlite3 databases, - MySQL (only appropriate for test and development work), - and PostgreSQL. - - - The Compute Service interacts with other OpenStack - services: Identity Service for authentication, Image Service - for images, and the OpenStack Dashboard for a web - interface. -
+ -
- Object Storage Service - The Object Storage Service is a highly scalable and - durable multi-tenant object storage system for large amounts - of unstructured data at low cost through a RESTful http - API. - It includes the following components: - - - swift-proxy-server. Accepts Object Storage - API and raw HTTP requests to upload files, modify - metadata, and create containers. It also serves file or - container listings to web browsers. To improve - performance, the proxy server can use an optional cache - usually deployed with memcache. - - - Account servers. Manage accounts defined with the - Object Storage Service. - - - Container servers. Manage a mapping of containers, or - folders, within the Object Storage Service. - - - Object servers. Manage actual objects, such as files, - on the storage nodes. - - - A number of periodic processes. Performs housekeeping - tasks on the large data store. The replication services - ensure consistency and availability through the cluster. - Other periodic processes include auditors, updaters, and - reapers. - - - Configurable WSGI middleware, which is usually the - Identity Service, handles authentication. - -
-
- Block Storage Service - The Block Storage Service enables management of volumes, - volume snapshots, and volume types. It includes the following - components: - - - cinder-api. - Accepts API requests and routes them to cinder-volume for - action. - - - cinder-volume. Responds to requests to read - from and write to the Object Storage database to maintain - state, interacting with other processes (like cinder-scheduler) through a - message queue and directly upon block storage providing - hardware or software. It can interact with a variety of - storage providers through a driver architecture. - - - cinder-scheduler daemon. Like the - nova-scheduler, - picks the optimal block storage provider node on which to - create the volume. - - - Messaging queue. Routes information between the Block - Storage Service processes and a database, which stores - volume state. - - - The Block Storage Service interacts with Compute to - provide volumes for instances. -
-
- Image Service - The Image Service includes the following - components: - - - glance-api. - Accepts Image API calls for image discovery, retrieval, - and storage. - - - glance-registry. Stores, processes, and - retrieves metadata about images. Metadata includes size, - type, and so on. - - - Database. Stores image metadata. You can choose your - database depending on your preference. Most deployments - use MySQL or SQlite. - - - Storage repository for image files. In , the Object Storage Service - is the image repository. However, you can configure a - different repository. The Image Service supports normal - file systems, RADOS block devices, Amazon S3, and HTTP. - Some of these choices are limited to read-only - usage. - - - A number of periodic processes run on the Image Service to - support caching. Replication services ensures consistency and - availability through the cluster. Other periodic processes - include auditors, updaters, and reapers. - As shown in , the Image - Service is central to the overall IaaS picture. It accepts API - requests for images or image metadata from end users or - Compute components and can store its disk files in the Object - Storage Service. -
-
- Networking Service - Provides network-connectivity-as-a-service between - interface devices that are managed by other OpenStack - services, usually Compute. Enables users to create and attach - interfaces to networks. Like many OpenStack services, - OpenStack Networking is highly configurable due to its plug-in - architecture. These plug-ins accommodate different networking - equipment and software. Consequently, the architecture and - deployment vary dramatically. - Includes the following components: - - - neutron-server. Accepts and routes API - requests to the appropriate OpenStack Networking plug-in - for action. - - - OpenStack Networking Plug-ins and Agents. - Plug and unplug ports, create networks or subnets, and - provide IP addressing. These plug-ins and agents differ - depending on the vendor and technologies used in the Cloud - System. OpenStack Networking ships with plug-ins and agents - for Arista, Brocade, Cisco NXOS as well as Nexus 1000V and - Mellanox switches, Linux bridging, Nicira NVP product, NEC - OpenFlow, Open vSwitch, PLUMgrid Platform, and the Ryu - Network Operating System. - The common agents are L3 (layer 3), DHCP (dynamic host - IP addressing), and a plug-in agent. - - - Messaging Queue. Most OpenStack Networking - installations make use of a messaging queue to route - information between the neutron-server and various agents - as well as a database to store networking state for - particular plug-ins. - - - OpenStack Networking interacts mainly with OpenStack - Compute, where it provides networks and connectivity for its - instances. -
+ + + + + -
- Metering/Monitoring Service - The Metering Service is designed to: - - - - Efficiently collect the metering data about the CPU - and network costs. - - - Collect data by monitoring notifications sent from - services or by polling the infrastructure. - - - Configure the type of collected data to meet various - operating requirements. Accessing and inserting the - metering data through the REST API. - - - Expand the framework to collect custom usage data by - additional plug-ins. - - - Produce signed metering messages that cannot be - repudiated. - - - - The system consists of the following basic - components: - - - A compute agent. Runs on each compute node and polls - for resource utilization statistics. There may be other - types of agents in the future, but for now we will focus - on creating the compute agent. - - - A central agent. Runs on a central management server - to poll for resource utilization statistics for resources - not tied to instances or compute nodes. - - - A collector. Runs on one or more central management - servers to monitor the message queues (for notifications - and for metering data coming from the agent). Notification - messages are processed and turned into metering messages - and sent back out onto the message bus using the - appropriate topic. Metering messages are written to the - data store without modification. - - - A data store. A database capable of handling - concurrent writes (from one or more collector instances) - and reads (from the API server). - - - An API server. Runs on one or more central management - servers to provide access to the data from the data store. - These services communicate using the standard OpenStack - messaging bus. Only the collector and API server have - access to the data store. - - - These services communicate by using the standard OpenStack - messaging bus. Only the collector and API server have access - to the data store. -
- -
- Orchestration Service - The Orchestration Service provides a template-based - orchestration for describing a cloud application by running - OpenStack API calls to generate running cloud applications. - The software integrates other core components of OpenStack - into a one-file template system. The templates enable you to - create most OpenStack resource types, such as instances, - floating IPs, volumes, security groups, users, and so on. - Also, provides some more advanced functionality, such as - instance high availability, instance auto-scaling, and nested - stacks. By providing very tight integration with other - OpenStack core projects, all OpenStack core projects could - receive a larger user base. - Enables deployers to integrate with the Orchestration - Service directly or through custom plug-ins. - The Orchestration Service consists of the following - components: - - - heat tool. A CLI that communicates with - the heat-api to run AWS CloudFormation APIs. End - developers could also use the heat REST API - directly. - - - heat-api component. Provides an - OpenStack-native REST API that processes API requests by - sending them to the heat-engine over RPC. - - - heat-api-cfn component. Provides an AWS - Query API that is compatible with AWS CloudFormation and - processes API requests by sending them to the heat-engine - over RPC. - - - heat-engine. Orchestrates the launching - of templates and provides events back to the API - consumer. - - -
+ +
Feedback diff --git a/doc/common/section_dashboard-system-reqs.xml b/doc/common/section_dashboard-system-reqs.xml index b435661b5a..dde16c32aa 100644 --- a/doc/common/section_dashboard-system-reqs.xml +++ b/doc/common/section_dashboard-system-reqs.xml @@ -34,8 +34,7 @@ might differ by platform. - Then, install and configure the dashboard on a node that + Then, install and configure the dashboard on a node that can contact the Identity Service. Provide users with the following information so that they can access the dashboard through a web browser on their local diff --git a/doc/common/section_getstart_architecture.xml b/doc/common/section_getstart_architecture.xml new file mode 100644 index 0000000000..d83062db9a --- /dev/null +++ b/doc/common/section_getstart_architecture.xml @@ -0,0 +1,181 @@ +
+ OpenStack architecture + The following table describes the OpenStack services that + make up the OpenStack architecture. You may only install some + of these, depending on your needs. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
OpenStack services
ServiceProject nameDescription
DashboardHorizonEnables users to interact with all OpenStack services to + launch an instance, assign IP addresses, set access + controls, and so on.
Identity ServiceKeystoneProvides authentication and authorization for all the + OpenStack services. Also provides a service catalog within + a particular OpenStack cloud.
Compute ServiceNovaProvisions and manages large networks of virtual + machines on demand.
Object Storage ServiceSwiftStores and retrieves files. Does not mount directories + like a file server.
Block Storage ServiceCinderProvides persistent block storage to guest virtual + machines.
Image ServiceGlanceProvides a registry of virtual machine images. Compute + Service uses it to provision instances.
Networking ServiceNeutronEnables network connectivity as a service among + interface devices managed by other OpenStack services, + usually Compute Service. Enables users to create and + attach interfaces to networks. Has a pluggable + architecture that supports many popular networking vendors + and technologies.
Metering/Monitoring ServiceCeilometerMonitors and meters the OpenStack cloud for billing, + benchmarking, scalability, and statistics purposes.
Orchestration ServiceHeatOrchestrates multiple composite cloud applications by + using the AWS CloudFormation template format, through both + an OpenStack-native REST API and a + CloudFormation-compatible Query API.
+ +
+ Conceptual architecture + The following diagram shows the relationships among the + OpenStack services: + + + + + + + +
+ +
+ Logical architecture + To design, install, and configure a cloud, cloud + administrators must understand the logical + architecture. + OpenStack modules are one of the following types: + + + Daemon. Runs as a daemon. On Linux platforms, it is + usually installed as a service. + + + Script. Runs installation and tests of a virtual + environment. For example, a script called + run_tests.sh installs a virtual environment + for a service and then may also run tests to verify that + the virtual environment functions correctly. + + + Command-line interface (CLI). Enables users to submit + API calls to OpenStack services through easy-to-use + commands. + + + The following diagram shows the most common, but not the + only, architecture for an OpenStack cloud: +
+ OpenStack logical architecture + + + + + +
+ As in the conceptual architecture, end users can interact + through the dashboard, CLIs, and APIs. All services + authenticate through a common Identity Service and individual + services interact with each other through public APIs, except + where privileged administrator commands are necessary. +
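For example, a user-facing CLI session follows this pattern. The sketch below assumes hypothetical credentials and a controller host named controller; any OpenStack client works the same way, first authenticating against the Identity Service and then calling the service's public API.
# Tell the clients how to authenticate (placeholder credentials and endpoint).
$ export OS_USERNAME=admin
$ export OS_PASSWORD=ADMIN_PASS
$ export OS_TENANT_NAME=admin
$ export OS_AUTH_URL=http://controller:5000/v2.0
# The nova client obtains a token from the Identity Service, then calls the Compute public API.
$ nova list
# The keystone client can list the public, internal, and admin endpoints registered in the catalog.
$ keystone endpoint-list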
+
diff --git a/doc/common/section_getstart_block-storage.xml b/doc/common/section_getstart_block-storage.xml new file mode 100644 index 0000000000..8b0167eb38 --- /dev/null +++ b/doc/common/section_getstart_block-storage.xml @@ -0,0 +1,41 @@ +
+ Block Storage Service + The Block Storage Service enables management of volumes, + volume snapshots, and volume types. It includes the following + components: + + + cinder-api. + Accepts API requests and routes them to cinder-volume for + action. + + + cinder-volume. Responds to requests to read + from and write to the Block Storage database to maintain + state, interacting with other processes (like cinder-scheduler) through a + message queue and directly with the hardware or software that + provides the block storage. It can interact with a variety of + storage providers through a driver architecture. + + + cinder-scheduler daemon. Like the + nova-scheduler, + picks the optimal block storage provider node on which to + create the volume. + + + Messaging queue. Routes information between the Block + Storage Service processes and a database, which stores + volume state. + + + The Block Storage Service interacts with Compute to + provide volumes for instances. +
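As a brief illustration of this workflow, the following sketch creates a volume with the cinder client and attaches it to a running instance with the nova client; the volume name, size, instance ID, and device name are placeholders.
# cinder-api accepts the request, cinder-scheduler picks a back end,
# and cinder-volume creates a 1 GB volume on the selected provider.
$ cinder create --display-name test-volume 1
# Check the volume state recorded in the Block Storage database.
$ cinder list
# Hand the volume to Compute, which attaches it to an instance
# (INSTANCE_ID and VOLUME_ID are hypothetical UUIDs).
$ nova volume-attach INSTANCE_ID VOLUME_ID /dev/vdb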
diff --git a/doc/common/section_getstart_compute.xml b/doc/common/section_getstart_compute.xml new file mode 100644 index 0000000000..621852e18a --- /dev/null +++ b/doc/common/section_getstart_compute.xml @@ -0,0 +1,194 @@ +
+ Compute Service + The Compute Service is a cloud computing fabric + controller, the main part of an IaaS system. It can be used + for hosting and managing cloud computing systems. The main + modules are implemented in Python. + The Compute Service is made up of the following functional + areas and their underlying components: + + API + + nova-api + service. Accepts and responds to end user compute API + calls. Supports the OpenStack Compute API, the Amazon EC2 + API, and a special Admin API for privileged users to + perform administrative actions. Also, initiates most + orchestration activities, such as running an instance, and + enforces some policies. + + + nova-api-metadata service. Accepts + metadata requests from instances. The nova-api-metadata service + is generally only used when you run in multi-host mode + with nova-network + installations. For details, see Metadata service in the Cloud Administrator Guide. + + + + Compute core + + nova-compute + process. A worker daemon that creates and terminates + virtual machine instances through hypervisor APIs. For + example, XenAPI for XenServer/XCP, libvirt for KVM or + QEMU, VMwareAPI for VMware, and so on. The process by + which it does so is fairly complex but the basics are + simple: Accept actions from the queue and perform a series + of system commands, like launching a KVM instance, to + carry them out while updating state in the + database. + + + nova-scheduler process. Conceptually the + simplest piece of code in Compute. Takes a virtual machine + instance request from the queue and determines on which + compute server host it should run. + + + nova-conductor module. Mediates + interactions between nova-compute and the database. Aims to + eliminate direct accesses to the cloud database made by + nova-compute. + The nova-conductor module scales horizontally. + However, do not deploy it on any nodes where nova-compute runs. For more + information, see A new Nova service: nova-conductor. + + + + Networking for VMs + + nova-network + worker daemon. Similar to nova-compute, it accepts networking tasks + from the queue and performs tasks to manipulate the + network, such as setting up bridging interfaces or + changing iptables rules. This functionality is being + migrated to OpenStack Networking, which is a separate + OpenStack service. + + + nova-dhcpbridge script. Tracks IP address + leases and records them in the database by using the + dnsmasq dhcp-script facility. This + functionality is being migrated to OpenStack Networking. + OpenStack Networking provides a different script. + + + + + Console interface + + nova-consoleauth daemon. Authorizes tokens + for users that console proxies provide. See nova-novncproxy and + nova-xvpnvcproxy. This service must be + running for console proxies to work. Many proxies of + either type can be run against a single nova-consoleauth service in + a cluster configuration. For information, see About nova-consoleauth. + + + nova-novncproxy daemon. Provides a proxy + for accessing running instances through a VNC connection. + Supports browser-based novnc clients. + + + nova-console + daemon. Deprecated for use with Grizzly. Instead, the + nova-xvpnvncproxy is used. + + + nova-xvpnvncproxy daemon. A proxy for + accessing running instances through a VNC connection. + Supports a Java client specifically designed for + OpenStack. + + + nova-cert + daemon. Manages x509 certificates. + + + + Image Management (EC2 scenario) + + nova-objectstore daemon. 
Provides an S3 + interface for registering images with the Image Service. + Mainly used for installations that must support euca2ools. + The euca2ools tools talk to nova-objectstore in S3 language, and nova-objectstore translates + S3 requests into Image Service requests. + + + euca2ools client. A set of command-line interpreter + commands for managing cloud resources. Though not an + OpenStack module, you can configure nova-api to support this + EC2 interface. For more information, see the Eucalyptus 2.0 Documentation. + + + Command Line Interpreter/Interfaces + + nova client. Enables users to submit commands as a + tenant administrator or end user. + + + nova-manage client. Enables cloud administrators to + submit commands. + + + Other components + + The queue. A central hub for passing messages between + daemons. Usually implemented with RabbitMQ, + but could be any AMQP message queue, such as Apache Qpid + or ZeroMQ. + + + SQL database. Stores most build-time and runtime + states for a cloud infrastructure. Includes instance types + that are available for use, instances in use, available + networks, and projects. Theoretically, OpenStack Compute + can support any database that SQLAlchemy supports, but + the only databases widely used are SQLite (only appropriate + for test and development work), MySQL, + and PostgreSQL. + + + The Compute Service interacts with other OpenStack + services: Identity Service for authentication, Image Service + for images, and the OpenStack Dashboard for a web + interface. +
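To make the flow through these components concrete, here is a minimal example of launching an instance with the nova client; the flavor, image, key pair, and instance names are placeholders.
# nova-api receives the request, nova-scheduler selects a compute host,
# and nova-compute asks the hypervisor to start the instance.
$ nova boot --flavor m1.small --image cirros-0.3.1 --key-name mykey test-instance
# Poll the instance state, which is written to the database through nova-conductor.
$ nova list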
diff --git a/doc/common/section_getstart_dashboard.xml b/doc/common/section_getstart_dashboard.xml new file mode 100644 index 0000000000..6952570ccf --- /dev/null +++ b/doc/common/section_getstart_dashboard.xml @@ -0,0 +1,27 @@ +
+ Dashboard + The dashboard is a modular Django web + application that provides a graphical interface to + OpenStack services. + + + + + + + + The dashboard is usually deployed through mod_wsgi in Apache. You can modify the dashboard + code to make it suitable for different sites. + From a network architecture point of view, this service + must be accessible to customers and the public API for each + OpenStack service. To use the administrator functionality for + other services, it must also connect to Admin API endpoints, + which should not be accessible by customers. +
diff --git a/doc/common/section_getstart_image.xml b/doc/common/section_getstart_image.xml new file mode 100644 index 0000000000..0e73450deb --- /dev/null +++ b/doc/common/section_getstart_image.xml @@ -0,0 +1,44 @@ +
+ Image Service + The Image Service includes the following + components: + + + glance-api. + Accepts Image API calls for image discovery, retrieval, + and storage. + + + glance-registry. Stores, processes, and + retrieves metadata about images. Metadata includes size, + type, and so on. + + + Database. Stores image metadata. You can choose your + database depending on your preference. Most deployments + use MySQL or SQLite. + + + Storage repository for image files. In , the Object Storage Service + is the image repository. However, you can configure a + different repository. The Image Service supports normal + file systems, RADOS block devices, Amazon S3, and HTTP. + Some of these choices are limited to read-only + usage. + + + A number of periodic processes run on the Image Service to + support caching. Replication services ensure consistency and + availability through the cluster. Other periodic processes + include auditors, updaters, and reapers. + As shown in , the Image + Service is central to the overall IaaS picture. It accepts API + requests for images or image metadata from end users or + Compute components and can store its disk files in the Object + Storage Service. +
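For example, registering an image with the glance client might look like the following sketch; the image name and file are placeholders.
# glance-api accepts the upload, glance-registry records the metadata,
# and the image data is written to the configured storage repository.
$ glance image-create --name "cirros-0.3.1-x86_64" --disk-format qcow2 \
  --container-format bare --is-public True --file cirros-0.3.1-x86_64-disk.img
# List registered images and their metadata.
$ glance image-list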
diff --git a/doc/common/section_getstart_metering.xml b/doc/common/section_getstart_metering.xml new file mode 100644 index 0000000000..4d1162ff4c --- /dev/null +++ b/doc/common/section_getstart_metering.xml @@ -0,0 +1,71 @@ +
+ Metering/Monitoring Service + The Metering Service is designed to: + + + + Efficiently collect the metering data about the CPU + and network costs. + + + Collect data by monitoring notifications sent from + services or by polling the infrastructure. + + + Configure the type of collected data to meet various + operating requirements. Accessing and inserting the + metering data through the REST API. + + + Expand the framework to collect custom usage data by + additional plug-ins. + + + Produce signed metering messages that cannot be + repudiated. + + + + The system consists of the following basic + components: + + + A compute agent. Runs on each compute node and polls + for resource utilization statistics. There may be other + types of agents in the future, but for now we will focus + on creating the compute agent. + + + A central agent. Runs on a central management server + to poll for resource utilization statistics for resources + not tied to instances or compute nodes. + + + A collector. Runs on one or more central management + servers to monitor the message queues (for notifications + and for metering data coming from the agent). Notification + messages are processed and turned into metering messages + and sent back out onto the message bus using the + appropriate topic. Metering messages are written to the + data store without modification. + + + A data store. A database capable of handling + concurrent writes (from one or more collector instances) + and reads (from the API server). + + + An API server. Runs on one or more central management + servers to provide access to the data from the data store. + These services communicate using the standard OpenStack + messaging bus. Only the collector and API server have + access to the data store. + + + These services communicate by using the standard OpenStack + messaging bus. Only the collector and API server have access + to the data store. +
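As an illustration, once the agents and collector are running you can query the stored samples through the API server with the ceilometer client; the meter name and period are placeholders.
# List the meters for which samples have been collected.
$ ceilometer meter-list
# Show statistics for a hypothetical meter, aggregated over one-hour periods.
$ ceilometer statistics -m cpu_util -p 3600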
diff --git a/doc/common/section_getstart_networking.xml b/doc/common/section_getstart_networking.xml new file mode 100644 index 0000000000..083e5b97ba --- /dev/null +++ b/doc/common/section_getstart_networking.xml @@ -0,0 +1,45 @@ +
+ Networking Service + Provides network-connectivity-as-a-service between + interface devices that are managed by other OpenStack + services, usually Compute. Enables users to create and attach + interfaces to networks. Like many OpenStack services, + OpenStack Networking is highly configurable due to its plug-in + architecture. These plug-ins accommodate different networking + equipment and software. Consequently, the architecture and + deployment vary dramatically. + Includes the following components: + + + neutron-server. Accepts and routes API + requests to the appropriate OpenStack Networking plug-in + for action. + + + OpenStack Networking plug-ins and agents. Plugs and + unplugs ports, creates networks or subnets, and provides + IP addressing. These plug-ins and agents differ depending + on the vendor and technologies used in the particular + cloud. OpenStack Networking ships with plug-ins and agents + for Cisco virtual and physical switches, Nicira NVP + product, NEC OpenFlow products, Open vSwitch, Linux + bridging, and the Ryu Network Operating System. + The common agents are L3 (layer 3), DHCP (dynamic host + IP addressing), and a plug-in agent. + + + Messaging queue. Most OpenStack Networking + installations make use of a messaging queue to route + information between the neutron-server and various agents + as well as a database to store networking state for + particular plug-ins. + + + OpenStack Networking interacts mainly with OpenStack + Compute, where it provides networks and connectivity for its + instances. +
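A brief example of the user-facing workflow with the neutron client follows; the network name, subnet name, and CIDR are placeholders.
# neutron-server routes the request to the configured plug-in, which creates the network.
$ neutron net-create demo-net
# Create a subnet; the DHCP agent hands out addresses from this range.
$ neutron subnet-create demo-net 192.168.1.0/24 --name demo-subnet
# Instances booted with a port on demo-net now get their connectivity from this service.
$ neutron net-list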
diff --git a/doc/common/section_getstart_object-storage.xml b/doc/common/section_getstart_object-storage.xml new file mode 100644 index 0000000000..2da5f48543 --- /dev/null +++ b/doc/common/section_getstart_object-storage.xml @@ -0,0 +1,43 @@ +
+ Object Storage Service + The Object Storage Service is a highly scalable and + durable multi-tenant object storage system for large amounts + of unstructured data at low cost through a RESTful http + API. + It includes the following components: + + + swift-proxy-server. Accepts Object Storage + API and raw HTTP requests to upload files, modify + metadata, and create containers. It also serves file or + container listings to web browsers. To improve + performance, the proxy server can use an optional cache + usually deployed with memcache. + + + Account servers. Manage accounts defined with the + Object Storage Service. + + + Container servers. Manage a mapping of containers, or + folders, within the Object Storage Service. + + + Object servers. Manage actual objects, such as files, + on the storage nodes. + + + A number of periodic processes. Performs housekeeping + tasks on the large data store. The replication services + ensure consistency and availability through the cluster. + Other periodic processes include auditors, updaters, and + reapers. + + + Configurable WSGI middleware, which is usually the + Identity Service, handles authentication. +
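For example, a user can exercise the proxy server directly with the swift client; the container and file names are placeholders.
# The proxy server creates the container on the container servers.
$ swift post my-container
# Upload a file; the proxy streams the object to the object servers.
$ swift upload my-container my-file.txt
# List containers for the account, then objects in the container.
$ swift list
$ swift list my-container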
diff --git a/doc/common/section_getstart_orchestration.xml b/doc/common/section_getstart_orchestration.xml new file mode 100644 index 0000000000..b23e01f143 --- /dev/null +++ b/doc/common/section_getstart_orchestration.xml @@ -0,0 +1,46 @@ +
+ Orchestration Service + The Orchestration Service provides a template-based + orchestration for describing a cloud application by running + OpenStack API calls to generate running cloud applications. + The software integrates other core components of OpenStack + into a one-file template system. The templates enable you to + create most OpenStack resource types, such as instances, + floating IPs, volumes, security groups, users, and so on. + Also, provides some more advanced functionality, such as + instance high availability, instance auto-scaling, and nested + stacks. By providing very tight integration with other + OpenStack core projects, all OpenStack core projects could + receive a larger user base. + Enables deployers to integrate with the Orchestration + Service directly or through custom plug-ins. + The Orchestration Service consists of the following + components: + + + heat tool. A CLI that communicates with + the heat-api to run AWS CloudFormation APIs. End + developers could also use the heat REST API + directly. + + + heat-api component. Provides an + OpenStack-native REST API that processes API requests by + sending them to the heat-engine over RPC. + + + heat-api-cfn component. Provides an AWS + Query API that is compatible with AWS CloudFormation and + processes API requests by sending them to the heat-engine + over RPC. + + + heat-engine. Orchestrates the launching + of templates and provides events back to the API + consumer. + + +
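As a sketch, launching a stack from a template with the heat client might look like the following; the stack name, template file, and parameters are placeholders, and the exact client flags can differ between releases.
# heat-api (or heat-api-cfn) passes the request to heat-engine,
# which creates the resources described in the template.
$ heat stack-create mystack --template-file=my-template.yaml --parameters="KeyName=mykey"
# Watch the stack and the events that heat-engine reports back.
$ heat stack-list
$ heat event-list mystack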
diff --git a/doc/common/section_keystone-concepts-group-management.xml b/doc/common/section_keystone-concepts-group-management.xml new file mode 100644 index 0000000000..f8185fa47b --- /dev/null +++ b/doc/common/section_keystone-concepts-group-management.xml @@ -0,0 +1,70 @@ + +
+ Groups + A group is a collection of users. Administrators can + create groups and add users to them. Then, rather than + assign a role to each user individually, assign a role to + the group. Every group is in a domain. Groups were + introduced with version 3 of the Identity API (the Grizzly + release of Keystone). + Identity API V3 provides the following group-related + operations: + + + Create a group + + + Delete a group + + + Update a group (change its name or + description) + + + Add a user to a group + + + Remove a user from a group + + + List group members + + + List groups for a user + + + Assign a role on a tenant to a group + + + Assign a role on a domain to a group + + + Query role assignments to groups + + + + The Identity service server might not allow all + operations. For example, if using the Keystone server + with the LDAP Identity back end and group updates are + disabled, then a request to create, delete, or update + a group fails. + + Here are a couple of examples: + + + Group A is granted Role A on Tenant A. If User A + is a member of Group A, when User A gets a token + scoped to Tenant A, the token also includes Role + A. + + + Group B is granted Role B on Domain B. If User B + is a member of Group B, when User B gets a token + scoped to Domain B, the token also includes Role + B. + + +
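Because these operations belong to version 3 of the Identity API, which the version 2 keystone client does not expose, the following sketch uses the HTTP API directly; the endpoint, token, and IDs are placeholders.
# Create a group in the default domain (ADMIN_TOKEN and the endpoint are placeholders).
$ curl -X POST http://controller:35357/v3/groups \
  -H "X-Auth-Token: ADMIN_TOKEN" -H "Content-Type: application/json" \
  -d '{"group": {"name": "developers", "domain_id": "default"}}'
# Add a user to the group.
$ curl -X PUT http://controller:35357/v3/groups/GROUP_ID/users/USER_ID \
  -H "X-Auth-Token: ADMIN_TOKEN"
# Assign a role on a tenant (project) to the group.
$ curl -X PUT http://controller:35357/v3/projects/PROJECT_ID/groups/GROUP_ID/roles/ROLE_ID \
  -H "X-Auth-Token: ADMIN_TOKEN"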
diff --git a/doc/common/section_keystone-concepts-service-management.xml b/doc/common/section_keystone-concepts-service-management.xml new file mode 100644 index 0000000000..94fc220d8d --- /dev/null +++ b/doc/common/section_keystone-concepts-service-management.xml @@ -0,0 +1,33 @@ +
+ Service management + The Identity Service provides + identity, token, catalog, and policy services. + It consists of: + + + keystone-all. + Starts both the service and administrative APIs in a + single process to provide Catalog, Authorization, and + Authentication services for OpenStack. + + + Identity Service functions. Each has a pluggable back + end that allows different ways to use the particular + service. Most support standard back ends like LDAP or + SQL. + + + The Identity Service also maintains a user that + corresponds to each service, such as a user named + nova for the Compute service, and + a special service tenant called + service. + For information about how to create services and + endpoints, see the OpenStack Admin User + Guide. +
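For example, registering the Compute service and its endpoints with the keystone client might look like the following sketch; the region and URLs are placeholders.
# Create the service entry in the catalog.
$ keystone service-create --name=nova --type=compute --description="OpenStack Compute Service"
# Register its endpoints (SERVICE_ID comes from the output of the previous command).
$ keystone endpoint-create --region RegionOne --service-id=SERVICE_ID \
  --publicurl='http://controller:8774/v2/%(tenant_id)s' \
  --internalurl='http://controller:8774/v2/%(tenant_id)s' \
  --adminurl='http://controller:8774/v2/%(tenant_id)s'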
diff --git a/doc/common/section_keystone-concepts-user-management.xml b/doc/common/section_keystone-concepts-user-management.xml new file mode 100644 index 0000000000..c2b0ea4467 --- /dev/null +++ b/doc/common/section_keystone-concepts-user-management.xml @@ -0,0 +1,219 @@ + +
+ User management + The main components of Identity user management are: + + Users + + + Tenants + + + Roles + + + A user represents a human user, and + has associated information such as user name, password, + and email. This example creates a user named + "alice": + $ keystone user-create --name=alice \ + --pass=mypassword123 --email=alice@example.com + A tenant can be a project, group, + or organization. Whenever you make requests to OpenStack + services, you must specify a tenant. For example, if you + query the Compute service for a list of running instances, + you receive a list of all of the running instances in the + tenant that you specified in your query. This example + creates a tenant named "acme": + $ keystone tenant-create --name=acme + + Because the term project was + used instead of tenant in earlier + versions of OpenStack Compute, some command-line tools + use --project_id instead of + --tenant-id or + --os-tenant-id to refer to a + tenant ID. + + A role captures what operations a + user is permitted to perform in a given tenant. This + example creates a role named "compute-user": + $ keystone role-create --name=compute-user + + It is up to individual services such as the Compute + service and Image service to assign meaning to these + roles. As far as the Identity service is concerned, a + role is simply a name. + + + The Identity service associates a user with a tenant and + a role. To continue with the previous examples, you might + to assign the "alice" user the "compute-user" role in the + "acme" tenant: + $ keystone user-list + +--------+---------+-------------------+--------+ +| id | enabled | email | name | ++--------+---------+-------------------+--------+ +| 892585 | True | alice@example.com | alice | ++--------+---------+-------------------+--------+ + $ keystone role-list + +--------+--------------+ +| id | name | ++--------+--------------+ +| 9a764e | compute-user | ++--------+--------------+ + $ keystone tenant-list + +--------+------+---------+ +| id | name | enabled | ++--------+------+---------+ +| 6b8fd2 | acme | True | ++--------+------+---------+ + $ keystone user-role-add --user=892585 --role=9a764e --tenant-id=6b8fd2 + A user can be assigned different roles in different + tenants: for example, Alice might also have the "admin" + role in the "Cyberdyne" tenant. A user can also be + assigned multiple roles in the same tenant. + The + /etc/[SERVICE_CODENAME]/policy.json + file controls the tasks that users can perform for a given + service. For example, + /etc/nova/policy.json specifies + the access policy for the Compute service, + /etc/glance/policy.json specifies + the access policy for the Image service, and + /etc/keystone/policy.json + specifies the access policy for the Identity + service. + The default policy.json files in + the Compute, Identity, and Image service recognize only + the admin role: all operations that do + not require the admin role are + accessible by any user that has any role in a + tenant. + If you wish to restrict users from performing operations + in, say, the Compute service, you need to create a role in + the Identity service and then modify + /etc/nova/policy.json so that + this role is required for Compute operations. + + For example, this line in + /etc/nova/policy.json specifies + that there are no restrictions on which users can create + volumes: if the user has any role in a tenant, they can + create volumes in that tenant. 
+ "volume:create": [], + To restrict creation of volumes to users who had the + compute-user role in a particular + tenant, you would add + "role:compute-user", like + so: + "volume:create": ["role:compute-user"], + To restrict all Compute service requests to require this + role, the resulting file would look like: + { + "admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]], + "default": [["rule:admin_or_owner"]], + + "compute:create": ["role":"compute-user"], + "compute:create:attach_network": ["role":"compute-user"], + "compute:create:attach_volume": ["role":"compute-user"], + "compute:get_all": ["role":"compute-user"], + + "admin_api": [["role:admin"]], + "compute_extension:accounts": [["rule:admin_api"]], + "compute_extension:admin_actions": [["rule:admin_api"]], + "compute_extension:admin_actions:pause": [["rule:admin_or_owner"]], + "compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]], + "compute_extension:admin_actions:suspend": [["rule:admin_or_owner"]], + "compute_extension:admin_actions:resume": [["rule:admin_or_owner"]], + "compute_extension:admin_actions:lock": [["rule:admin_api"]], + "compute_extension:admin_actions:unlock": [["rule:admin_api"]], + "compute_extension:admin_actions:resetNetwork": [["rule:admin_api"]], + "compute_extension:admin_actions:injectNetworkInfo": [["rule:admin_api"]], + "compute_extension:admin_actions:createBackup": [["rule:admin_or_owner"]], + "compute_extension:admin_actions:migrateLive": [["rule:admin_api"]], + "compute_extension:admin_actions:migrate": [["rule:admin_api"]], + "compute_extension:aggregates": [["rule:admin_api"]], + "compute_extension:certificates": ["role":"compute-user"], + "compute_extension:cloudpipe": [["rule:admin_api"]], + "compute_extension:console_output": ["role":"compute-user"], + "compute_extension:consoles": ["role":"compute-user"], + "compute_extension:createserverext": ["role":"compute-user"], + "compute_extension:deferred_delete": ["role":"compute-user"], + "compute_extension:disk_config": ["role":"compute-user"], + "compute_extension:evacuate": [["rule:admin_api"]], + "compute_extension:extended_server_attributes": [["rule:admin_api"]], + "compute_extension:extended_status": ["role":"compute-user"], + "compute_extension:flavorextradata": ["role":"compute-user"], + "compute_extension:flavorextraspecs": ["role":"compute-user"], + "compute_extension:flavormanage": [["rule:admin_api"]], + "compute_extension:floating_ip_dns": ["role":"compute-user"], + "compute_extension:floating_ip_pools": ["role":"compute-user"], + "compute_extension:floating_ips": ["role":"compute-user"], + "compute_extension:hosts": [["rule:admin_api"]], + "compute_extension:keypairs": ["role":"compute-user"], + "compute_extension:multinic": ["role":"compute-user"], + "compute_extension:networks": [["rule:admin_api"]], + "compute_extension:quotas": ["role":"compute-user"], + "compute_extension:rescue": ["role":"compute-user"], + "compute_extension:security_groups": ["role":"compute-user"], + "compute_extension:server_action_list": [["rule:admin_api"]], + "compute_extension:server_diagnostics": [["rule:admin_api"]], + "compute_extension:simple_tenant_usage:show": [["rule:admin_or_owner"]], + "compute_extension:simple_tenant_usage:list": [["rule:admin_api"]], + "compute_extension:users": [["rule:admin_api"]], + "compute_extension:virtual_interfaces": ["role":"compute-user"], + "compute_extension:virtual_storage_arrays": ["role":"compute-user"], + "compute_extension:volumes": ["role":"compute-user"], + 
"compute_extension:volume_attachments:index": ["role":"compute-user"], + "compute_extension:volume_attachments:show": ["role":"compute-user"], + "compute_extension:volume_attachments:create": ["role":"compute-user"], + "compute_extension:volume_attachments:delete": ["role":"compute-user"], + "compute_extension:volumetypes": ["role":"compute-user"], + + "volume:create": ["role":"compute-user"], + "volume:get_all": ["role":"compute-user"], + "volume:get_volume_metadata": ["role":"compute-user"], + "volume:get_snapshot": ["role":"compute-user"], + "volume:get_all_snapshots": ["role":"compute-user"], + + "network:get_all_networks": ["role":"compute-user"], + "network:get_network": ["role":"compute-user"], + "network:delete_network": ["role":"compute-user"], + "network:disassociate_network": ["role":"compute-user"], + "network:get_vifs_by_instance": ["role":"compute-user"], + "network:allocate_for_instance": ["role":"compute-user"], + "network:deallocate_for_instance": ["role":"compute-user"], + "network:validate_networks": ["role":"compute-user"], + "network:get_instance_uuids_by_ip_filter": ["role":"compute-user"], + + "network:get_floating_ip": ["role":"compute-user"], + "network:get_floating_ip_pools": ["role":"compute-user"], + "network:get_floating_ip_by_address": ["role":"compute-user"], + "network:get_floating_ips_by_project": ["role":"compute-user"], + "network:get_floating_ips_by_fixed_address": ["role":"compute-user"], + "network:allocate_floating_ip": ["role":"compute-user"], + "network:deallocate_floating_ip": ["role":"compute-user"], + "network:associate_floating_ip": ["role":"compute-user"], + "network:disassociate_floating_ip": ["role":"compute-user"], + + "network:get_fixed_ip": ["role":"compute-user"], + "network:add_fixed_ip_to_instance": ["role":"compute-user"], + "network:remove_fixed_ip_from_instance": ["role":"compute-user"], + "network:add_network_to_project": ["role":"compute-user"], + "network:get_instance_nw_info": ["role":"compute-user"], + + "network:get_dns_domains": ["role":"compute-user"], + "network:add_dns_entry": ["role":"compute-user"], + "network:modify_dns_entry": ["role":"compute-user"], + "network:delete_dns_entry": ["role":"compute-user"], + "network:get_dns_entries_by_address": ["role":"compute-user"], + "network:get_dns_entries_by_name": ["role":"compute-user"], + "network:create_private_dns_domain": ["role":"compute-user"], + "network:create_public_dns_domain": ["role":"compute-user"], + "network:delete_dns_domain": ["role":"compute-user"] +} +
diff --git a/doc/common/section_keystone-concepts.xml b/doc/common/section_keystone-concepts.xml index a01a62f2c4..a1e3d5eff7 100644 --- a/doc/common/section_keystone-concepts.xml +++ b/doc/common/section_keystone-concepts.xml @@ -50,7 +50,7 @@ The act of confirming the identity of a user. The Identity Service confirms an incoming request by validating a set of credentials supplied by the - user. + user.
These credentials are initially a user name and password or a user name and API key. In response to these credentials, the Identity Service issues @@ -136,310 +136,4 @@ format="PNG" scale="10"/> - -
- User management - The main components of Identity user management are: - - Users - - - Tenants - - - Roles - - - A user represents a human user, and - has associated information such as user name, password, - and email. This example creates a user named - "alice": - $ keystone user-create --name=alice \ - --pass=mypassword123 --email=alice@example.com - A tenant can be a project, group, - or organization. Whenever you make requests to OpenStack - services, you must specify a tenant. For example, if you - query the Compute service for a list of running instances, - you receive a list of all of the running instances in the - tenant that you specified in your query. This example - creates a tenant named "acme": - $ keystone tenant-create --name=acme - - Because the term project was - used instead of tenant in earlier - versions of OpenStack Compute, some command-line tools - use --project_id instead of - --tenant-id or - --os-tenant-id to refer to a - tenant ID. - - A role captures what operations a - user is permitted to perform in a given tenant. This - example creates a role named "compute-user": - $ keystone role-create --name=compute-user - - It is up to individual services such as the Compute - service and Image service to assign meaning to these - roles. As far as the Identity service is concerned, a - role is simply a name. - - - The Identity service associates a user with a tenant and - a role. To continue with the previous examples, you might - to assign the "alice" user the "compute-user" role in the - "acme" tenant: - $ keystone user-list - +--------+---------+-------------------+--------+ -| id | enabled | email | name | -+--------+---------+-------------------+--------+ -| 892585 | True | alice@example.com | alice | -+--------+---------+-------------------+--------+ - $ keystone role-list - +--------+--------------+ -| id | name | -+--------+--------------+ -| 9a764e | compute-user | -+--------+--------------+ - $ keystone tenant-list - +--------+------+---------+ -| id | name | enabled | -+--------+------+---------+ -| 6b8fd2 | acme | True | -+--------+------+---------+ - $ keystone user-role-add --user=892585 --role=9a764e --tenant-id=6b8fd2 - A user can be assigned different roles in different - tenants: for example, Alice might also have the "admin" - role in the "Cyberdyne" tenant. A user can also be - assigned multiple roles in the same tenant. - The - /etc/[SERVICE_CODENAME]/policy.json - file controls the tasks that users can perform for a given - service. For example, - /etc/nova/policy.json specifies - the access policy for the Compute service, - /etc/glance/policy.json specifies - the access policy for the Image service, and - /etc/keystone/policy.json - specifies the access policy for the Identity - service. - The default policy.json files in - the Compute, Identity, and Image service recognize only - the admin role: all operations that do - not require the admin role are - accessible by any user that has any role in a - tenant. - If you wish to restrict users from performing operations - in, say, the Compute service, you need to create a role in - the Identity service and then modify - /etc/nova/policy.json so that - this role is required for Compute operations. - - For example, this line in - /etc/nova/policy.json specifies - that there are no restrictions on which users can create - volumes: if the user has any role in a tenant, they can - create volumes in that tenant. 
- "volume:create": [], - To restrict creation of volumes to users who had the - compute-user role in a particular - tenant, you would add - "role:compute-user", like - so: - "volume:create": ["role:compute-user"], - To restrict all Compute service requests to require this - role, the resulting file would look like: - { - "admin_or_owner": [["role:admin"], ["project_id:%(project_id)s"]], - "default": [["rule:admin_or_owner"]], - - "compute:create": ["role":"compute-user"], - "compute:create:attach_network": ["role":"compute-user"], - "compute:create:attach_volume": ["role":"compute-user"], - "compute:get_all": ["role":"compute-user"], - - "admin_api": [["role:admin"]], - "compute_extension:accounts": [["rule:admin_api"]], - "compute_extension:admin_actions": [["rule:admin_api"]], - "compute_extension:admin_actions:pause": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:unpause": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:suspend": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:resume": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:lock": [["rule:admin_api"]], - "compute_extension:admin_actions:unlock": [["rule:admin_api"]], - "compute_extension:admin_actions:resetNetwork": [["rule:admin_api"]], - "compute_extension:admin_actions:injectNetworkInfo": [["rule:admin_api"]], - "compute_extension:admin_actions:createBackup": [["rule:admin_or_owner"]], - "compute_extension:admin_actions:migrateLive": [["rule:admin_api"]], - "compute_extension:admin_actions:migrate": [["rule:admin_api"]], - "compute_extension:aggregates": [["rule:admin_api"]], - "compute_extension:certificates": ["role":"compute-user"], - "compute_extension:cloudpipe": [["rule:admin_api"]], - "compute_extension:console_output": ["role":"compute-user"], - "compute_extension:consoles": ["role":"compute-user"], - "compute_extension:createserverext": ["role":"compute-user"], - "compute_extension:deferred_delete": ["role":"compute-user"], - "compute_extension:disk_config": ["role":"compute-user"], - "compute_extension:evacuate": [["rule:admin_api"]], - "compute_extension:extended_server_attributes": [["rule:admin_api"]], - "compute_extension:extended_status": ["role":"compute-user"], - "compute_extension:flavorextradata": ["role":"compute-user"], - "compute_extension:flavorextraspecs": ["role":"compute-user"], - "compute_extension:flavormanage": [["rule:admin_api"]], - "compute_extension:floating_ip_dns": ["role":"compute-user"], - "compute_extension:floating_ip_pools": ["role":"compute-user"], - "compute_extension:floating_ips": ["role":"compute-user"], - "compute_extension:hosts": [["rule:admin_api"]], - "compute_extension:keypairs": ["role":"compute-user"], - "compute_extension:multinic": ["role":"compute-user"], - "compute_extension:networks": [["rule:admin_api"]], - "compute_extension:quotas": ["role":"compute-user"], - "compute_extension:rescue": ["role":"compute-user"], - "compute_extension:security_groups": ["role":"compute-user"], - "compute_extension:server_action_list": [["rule:admin_api"]], - "compute_extension:server_diagnostics": [["rule:admin_api"]], - "compute_extension:simple_tenant_usage:show": [["rule:admin_or_owner"]], - "compute_extension:simple_tenant_usage:list": [["rule:admin_api"]], - "compute_extension:users": [["rule:admin_api"]], - "compute_extension:virtual_interfaces": ["role":"compute-user"], - "compute_extension:virtual_storage_arrays": ["role":"compute-user"], - "compute_extension:volumes": ["role":"compute-user"], - 
"compute_extension:volume_attachments:index": ["role":"compute-user"], - "compute_extension:volume_attachments:show": ["role":"compute-user"], - "compute_extension:volume_attachments:create": ["role":"compute-user"], - "compute_extension:volume_attachments:delete": ["role":"compute-user"], - "compute_extension:volumetypes": ["role":"compute-user"], - - "volume:create": ["role":"compute-user"], - "volume:get_all": ["role":"compute-user"], - "volume:get_volume_metadata": ["role":"compute-user"], - "volume:get_snapshot": ["role":"compute-user"], - "volume:get_all_snapshots": ["role":"compute-user"], - - "network:get_all_networks": ["role":"compute-user"], - "network:get_network": ["role":"compute-user"], - "network:delete_network": ["role":"compute-user"], - "network:disassociate_network": ["role":"compute-user"], - "network:get_vifs_by_instance": ["role":"compute-user"], - "network:allocate_for_instance": ["role":"compute-user"], - "network:deallocate_for_instance": ["role":"compute-user"], - "network:validate_networks": ["role":"compute-user"], - "network:get_instance_uuids_by_ip_filter": ["role":"compute-user"], - - "network:get_floating_ip": ["role":"compute-user"], - "network:get_floating_ip_pools": ["role":"compute-user"], - "network:get_floating_ip_by_address": ["role":"compute-user"], - "network:get_floating_ips_by_project": ["role":"compute-user"], - "network:get_floating_ips_by_fixed_address": ["role":"compute-user"], - "network:allocate_floating_ip": ["role":"compute-user"], - "network:deallocate_floating_ip": ["role":"compute-user"], - "network:associate_floating_ip": ["role":"compute-user"], - "network:disassociate_floating_ip": ["role":"compute-user"], - - "network:get_fixed_ip": ["role":"compute-user"], - "network:add_fixed_ip_to_instance": ["role":"compute-user"], - "network:remove_fixed_ip_from_instance": ["role":"compute-user"], - "network:add_network_to_project": ["role":"compute-user"], - "network:get_instance_nw_info": ["role":"compute-user"], - - "network:get_dns_domains": ["role":"compute-user"], - "network:add_dns_entry": ["role":"compute-user"], - "network:modify_dns_entry": ["role":"compute-user"], - "network:delete_dns_entry": ["role":"compute-user"], - "network:get_dns_entries_by_address": ["role":"compute-user"], - "network:get_dns_entries_by_name": ["role":"compute-user"], - "network:create_private_dns_domain": ["role":"compute-user"], - "network:create_public_dns_domain": ["role":"compute-user"], - "network:delete_dns_domain": ["role":"compute-user"] -} -
-
- Service management - The Identity Service provides the following service - management functions: - - - Services - - - Endpoints - - - The Identity Service also maintains a user that - corresponds to each service, such as, a user named - nova for the Compute service, and - a special service tenant called - service. - For information about how to create services and - endpoints, see the OpenStack Admin User - Guide. -
- -
- Groups - A group is a collection of users. Administrators can - create groups and add users to them. Then, rather than - assign a role to each user individually, assign a role to - the group. Every group is in a domain. Groups were - introduced with version 3 of the Identity API (the Grizzly - release of Keystone). - Identity API V3 provides the following group-related - operations: - - - Create a group - - - Delete a group - - - Update a group (change its name or - description) - - - Add a user to a group - - - Remove a user from a group - - - List group members - - - List groups for a user - - - Assign a role on a tenant to a group - - - Assign a role on a domain to a group - - - Query role assignments to groups - - - - The Identity service server might not allow all - operations. For example, if using the Keystone server - with the LDAP Identity back end and group updates are - disabled, then a request to create, delete, or update - a group fails. - - Here are a couple examples: - - - Group A is granted Role A on Tenant A. If User A - is a member of Group A, when User A gets a token - scoped to Tenant A, the token also includes Role - A. - - - Group B is granted Role B on Domain B. If User B - is a member of Domain B, if User B gets a token - scoped to Domain B, the token also includes Role - B. - - -
diff --git a/doc/install-guide/bk_openstackinstallguide.xml b/doc/install-guide/bk_openstackinstallguide.xml
index 83842f8113..187a41a6db 100644
--- a/doc/install-guide/bk_openstackinstallguide.xml
+++ b/doc/install-guide/bk_openstackinstallguide.xml
@@ -55,7 +55,7 @@
 Ubuntu 12.04 (LTS).
 This guide shows you how to install OpenStack by using packages
-        available through Fedora 17 as well as on RHEL and
+        available through Fedora 19 as well as on RHEL and
 derivatives through the EPEL repository.
 This guide shows you how to install OpenStack by using packages
@@ -486,16 +486,13 @@
 include statements. You can add additional chapters using these types
 of statements. -->
- - - - - - - - - - - -
+ + + + + + + + +
diff --git a/doc/install-guide/ch_basics.xml b/doc/install-guide/ch_basics.xml
new file mode 100644
index 0000000000..bc22b7e7d1
--- /dev/null
+++ b/doc/install-guide/ch_basics.xml
@@ -0,0 +1,271 @@
+
+ Basic Operating System Configuration
+
+ This guide starts by creating two nodes: a controller node to host most
+ services, and a compute node to run virtual machine instances. Later
+ chapters create additional nodes to run more services. OpenStack offers a
+ lot of flexibility in how and where you run each service, so this is not the
+ only possible configuration. However, you do need to configure certain
+ aspects of the operating system on each node.
+
+ This chapter details a sample configuration for both the controller
+ node and any additional nodes. It's possible to configure the operating
+ system in other ways, but the remainder of this guide assumes you have a
+ configuration compatible with the one shown here.
+
+ All of the commands throughout this guide assume you have administrative
+ privileges. Either run the commands as the root user, or prefix them with
+ the sudo command.
+
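For example, if you are logged in as a regular user, you can either prefix each command with sudo or open a root shell once and work from there:

$ sudo -i

Commands shown with a # prompt in this guide are run with root privileges; commands shown with a $ prompt can be run as a regular user.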
+ Networking
+
+ For a production deployment of OpenStack, most nodes should have two
+ network interface cards: one for external network traffic, and one to
+ communicate only with other OpenStack nodes. For simple test cases, you
+ can use machines with only a single network interface card.
+
+ This section sets up networking on two networks with static IP
+ addresses and manually manages a list of hostnames on each machine. If you
+ manage a large network, you probably already have systems in place to
+ manage this. You may skip this section, but note that the rest of this
+ guide assumes that each node can reach the other nodes on the internal
+ network using hostnames like controller and
+ compute1.
+
+ Start by disabling the NetworkManager service and
+ enabling the network service. The
+ network service is more suitable for the static
+ network configuration done in this guide.
+
+ # service NetworkManager stop
+# service network start
+# chkconfig NetworkManager off
+# chkconfig network on
+
+
+ On Fedora 19, firewalld replaced
+ iptables as the default firewall. You can configure
+ firewalld to allow OpenStack to work, but this guide
+ currently recommends switching to iptables.
+ # service firewalld stop
+# service iptables start
+# chkconfig firewalld off
+# chkconfig iptables on
+
+
+ Next, create the configuration files for both eth0
+ and eth1. This guide uses
+ 192.168.0.x addresses for the internal network and
+ 10.0.0.x addresses for the external network. Make
+ sure that the corresponding network devices are connected to the correct
+ network.
+
+ In this guide, the controller node uses the IP addresses
+ 192.168.0.10 and 10.0.0.10. When
+ creating the compute node, use 192.168.0.11 and
+ 10.0.0.11 instead. Additional nodes added in later
+ chapters will follow this pattern.
+
+
+ <filename>/etc/sysconfig/network-scripts/ifcfg-eth0</filename>
+ # Internal Network
+DEVICE=eth0
+TYPE=Ethernet
+BOOTPROTO=static
+IPADDR=192.168.0.10
+NETMASK=255.255.255.0
+DEFROUTE=yes
+ONBOOT=yes
+
+
+
+ <filename>/etc/sysconfig/network-scripts/ifcfg-eth1</filename>
+ # External Network
+DEVICE=eth1
+TYPE=Ethernet
+BOOTPROTO=static
+IPADDR=10.0.0.10
+NETMASK=255.255.255.0
+DEFROUTE=yes
+ONBOOT=yes
+
+
+ Set the hostname of each machine. Name the controller node
+ controller and the first compute node
+ compute1. These are the hostnames used in the
+ examples throughout this guide. Use the hostname
+ command to set the hostname.
+
+ # hostname controller
+
+ To have this hostname set when the system
+ reboots, you need to specify it in the proper configuration file. In Red
+ Hat Enterprise Linux, CentOS, and older versions of Fedora, you set this
+ in the file /etc/sysconfig/network. Change the line
+ starting with HOSTNAME=.
+
+ HOSTNAME=controller
+
+ As of Fedora 18, Fedora uses the file
+ /etc/hostname. This file contains a single line
+ with just the hostname.
+
+ To have this hostname set when the system
+ reboots, you need to specify it in the file
+ /etc/hostname. This file contains a single line
+ with just the hostname.
+
+ Finally, ensure that each node can reach the other nodes using
+ hostnames. In this guide, we will manually edit the
+ /etc/hosts file on each system. For large-scale
+ deployments, you should use DNS or a configuration management system like
+ Puppet.
+
+ 127.0.0.1 localhost
+192.168.0.10 controller
+192.168.0.11 compute1
+
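As an optional quick check, once the interfaces are up and /etc/hosts is populated on both machines, you can confirm that each node reaches the other by name:

# ping -c 4 controller
# ping -c 4 compute1

If either name fails to resolve, recheck the /etc/hosts entries and the ifcfg-eth0 and ifcfg-eth1 files before continuing.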
+ +
+ Network Time Protocol (NTP) + + To keep all the services in sync across multiple machines, you need to + install NTP. In this guide, we will configure the controller node to be + the reference server, and configure all additional nodes to set their time + from the controller node. + + Install the ntp package on each system running + OpenStack services. + + # apt-get install ntp + # yum install ntp + # zypper install ntp + + Set up the NTP server on your controller node so that it receives data + by modifying the ntp.conf file and restarting the + service. + + + # sed -i 's/server ntp.ubuntu.com/server ntp.ubuntu.com\nserver 127.127.1.0\nfudge 127.127.1.0 stratum 10/g' /etc/ntp.conf +# service ntp restart +# chkconfig ntpd on + # service ntpd start +# chkconfig ntpd on + # systemctl start ntp.service +# systemctl enable ntp.service + + Set up all additional nodes to synchronize their time from the + controller node. The simplest way to do this is to add a daily cron job. + Add a file at /etc/cron.daily/ntpdate that contains + the following: + + ntpdate controller +hwclock -w + + Make sure to mark this file as executable. + + # chmod a+x /etc/cron.daily/ntpdate + +
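If you want to confirm that an additional node can reach the NTP server on the controller before relying on the daily cron job, you can query it once by hand:

# ntpdate -q controller

The -q option only queries the server and reports the offset; it does not adjust the clock, so it is safe to run at any time.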
+ +
+ MySQL Database
+
+ Most OpenStack services require a database to store information. In
+ this guide, we use a MySQL database running on the controller node. The
+ controller node needs to have the MySQL database installed. Any additional
+ nodes that access MySQL need to have the MySQL client software
+ installed.
+
+ On any nodes besides the controller node, just install the MySQL
+ client and the MySQL Python library. This is all you need to do on any
+ system not hosting the MySQL database.
+
+ # apt-get install python-mysqldb
+ # yum install mysql MySQL-python
+ # zypper install mysql-community-server-client python-mysql
+
+ On the controller node, install the MySQL client, the MySQL database,
+ and the MySQL Python library.
+
+ # apt-get install python-mysqldb mysql-server
+ # yum install mysql mysql-server MySQL-python
+ # zypper install mysql-community-server-client mysql-community-server python-mysql
+
+ Start the MySQL database server and set it to start automatically when
+ the system boots.
+
+ # service mysqld start
+# chkconfig mysqld on
+ # systemctl start mysqld.service
+# systemctl enable mysqld.service
+
+ Finally, it's a good idea to set a root password for your MySQL
+ database. The OpenStack programs that set up databases and tables will
+ prompt you for this password if it's set.
+
+ # mysqladmin password
+
+ Enter your desired password when prompted.
+
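As an optional check that the database server is running and that the root password you just set works, open a client session on the controller node and list the default databases:

# mysql -u root -p -e "SHOW DATABASES;"

You are prompted for the password and should then see the built-in databases, such as mysql and information_schema.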
+ +
+ Messaging Server
+ Install the messaging queue server. Typically this is RabbitMQ, or Qpid
+ on Red Hat based distributions, but ZeroMQ (0MQ) is also
+ available.
+
+ # apt-get install rabbitmq-server
+ # zypper install rabbitmq-server
+ # yum install qpid-cpp-server memcached openstack-utils
+
+
+
+ Disable Qpid authentication by setting the
+ value of the auth configuration key to
+ no in the /etc/qpidd.conf
+ file.
+
+ # echo "auth=no" >> /etc/qpidd.conf
+
+ Start Qpid and set it to start automatically
+ when the system boots.
+
+ # service qpidd start
+# chkconfig qpidd on
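Whichever broker you installed, you can optionally confirm that it is running by looking for a listener on the standard AMQP port, 5672:

# netstat -tlnp | grep 5672

Both RabbitMQ and Qpid listen on port 5672 by default, so the same check works in either case.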
+ +
+ OpenStack Packages
+
+
+ FIXME
+ This guide uses the OpenStack packages from
+ the RDO repository. These packages work on Red Hat Enterprise Linux 6 and
+ compatible versions of CentOS, as well as Fedora 19. Enable the repository
+ by downloading and installing the rdo-release-havana
+ package.
+
+ # curl -O http://repos.fedorapeople.org/repos/openstack/openstack-havana/rdo-release-havana-6.noarch.rpm
+# rpm -Uvh rdo-release-havana-6.noarch.rpm
+
+ The openstack-utils package
+ contains utility programs that make installation and configuration easier.
+ These programs will be used throughout this guide. Install
+ openstack-utils. This will also verify that you can
+ access the RDO repository.
+
+ # yum install openstack-utils
+
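As an optional check, you can confirm that the new repository is enabled and try the openstack-config program that later sections rely on. The configuration file and option in the second command are purely illustrative, not a required step:

# yum repolist enabled | grep -i openstack
# openstack-config --set /etc/example/example.conf DEFAULT verbose True

openstack-config --set writes the named option into a section of an INI-style configuration file, which is how most of the service configuration in the following chapters is done.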
+ +
diff --git a/doc/install-guide/ch_cinder.xml b/doc/install-guide/ch_cinder.xml new file mode 100644 index 0000000000..0938534eb5 --- /dev/null +++ b/doc/install-guide/ch_cinder.xml @@ -0,0 +1,9 @@ + + + Adding Block Storage + + FIXME + diff --git a/doc/install-guide/ch_glance.xml b/doc/install-guide/ch_glance.xml new file mode 100644 index 0000000000..2c08a9ae75 --- /dev/null +++ b/doc/install-guide/ch_glance.xml @@ -0,0 +1,12 @@ + + + Configuring the Image Service + + + + + + diff --git a/doc/install-guide/ch_horizon.xml b/doc/install-guide/ch_horizon.xml new file mode 100644 index 0000000000..806b465e23 --- /dev/null +++ b/doc/install-guide/ch_horizon.xml @@ -0,0 +1,38 @@ + + + Adding a Dashboard + + The OpenStack dashboard, also known as Horizon, + is a Web interface that allows cloud administrators and users to + manage various OpenStack resources and services. + + The dashboard enables web-based interactions with the + OpenStack Compute cloud controller through the OpenStack APIs. + + The following instructions show an example deployment + configured with an Apache web server. + + After you + install and configure + the dashboard, you can complete the following tasks: + + + + Customize your dashboard. See . + + + Set up session storage for the dashboard. See . + + + + + + + + diff --git a/doc/install-guide/ch_installcompute.xml b/doc/install-guide/ch_installcompute.xml index b8997073ef..303516fc3d 100644 --- a/doc/install-guide/ch_installcompute.xml +++ b/doc/install-guide/ch_installcompute.xml @@ -14,7 +14,6 @@ - diff --git a/doc/install-guide/ch_installdashboard.xml b/doc/install-guide/ch_installdashboard.xml deleted file mode 100644 index 134f08b0e4..0000000000 --- a/doc/install-guide/ch_installdashboard.xml +++ /dev/null @@ -1,34 +0,0 @@ - - - Install the OpenStack dashboard - The - OpenStack dashboard, also known as horizon, is a Web interface that allows cloud - administrators and users to manage various OpenStack resources - and services. - The dashboard enables web-based interactions with the - OpenStack Compute cloud controller through the OpenStack APIs. - The following instructions show an example deployment - configured with an Apache web server. - After you install and configure the dashboard, you can - complete the following tasks: - - - Customize your dashboard. See . - - - Set up session storage for the dashboard. See . - - - - - - - diff --git a/doc/install-guide/ch_installidentity.xml b/doc/install-guide/ch_installidentity.xml deleted file mode 100644 index 2a4eaecb03..0000000000 --- a/doc/install-guide/ch_installidentity.xml +++ /dev/null @@ -1,14 +0,0 @@ - - - Install the OpenStack Identity Service - The Identity Service manages users and tenants, which are - accounts or projects, and offers a common identity system for - all OpenStack projects. - - - - - diff --git a/doc/install-guide/ch_installimage.xml b/doc/install-guide/ch_installimage.xml deleted file mode 100644 index 5b73672903..0000000000 --- a/doc/install-guide/ch_installimage.xml +++ /dev/null @@ -1,11 +0,0 @@ - - - Installing OpenStack Image Service - - - - - diff --git a/doc/install-guide/ch_installing-openstack-overview.xml b/doc/install-guide/ch_installing-openstack-overview.xml index 3b3b93c60d..edfa5847c4 100644 --- a/doc/install-guide/ch_installing-openstack-overview.xml +++ b/doc/install-guide/ch_installing-openstack-overview.xml @@ -81,81 +81,6 @@ entire installation. -
- Installing on Ubuntu - How you go about installing OpenStack Compute depends on - your goals for the installation. You can use an ISO image, - you can use a scripted installation, and you can manually - install with a step-by-step installation as described in - this manual. -
- ISO Installation - See Installing Rackspace Private Cloud on Physical - Hardware for download links and - instructions for the Rackspace Private Cloud ISO. For - documentation on the Rackspace, see http://www.rackspace.com/cloud/private. - -
- -
- Manual Installation on Ubuntu - - The manual installation involves installing from - packages backported on Ubuntu 12.04 LTS using the Cloud - Archive as a user with root (or sudo) permission. This - guide provides instructions for installing using - Ubuntu packages. -
-
-
- Scripted Development Installation - - You can download a script for a standalone install - for proof-of-concept, learning, or for development - purposes for Ubuntu 12.04, Fedora 18 or openSUSE 12.3 at https://devstack.org. - - - - Install Ubuntu 12.04 or Fedora 18 or openSUSE 12.3: - - In order to correctly install all the - dependencies, we assume a specific version of - the OS to make it as easy as possible. - - - - Download DevStack: - - $ git clone git://github.com/openstack-dev/devstack.git - - The devstack repository contains a script - that installs OpenStack Compute, Object - Storage, the Image Service, Volumes, the - Dashboard and the Identity Service and offers - templates for configuration files plus data - scripts. - - - - Start the install: - - $ cd devstack; ./stack.sh - - It takes a few minutes. We recommend reading the well-documented script - while it is building to learn more about what - is going on. - - -
diff --git a/doc/install-guide/ch_keystone.xml b/doc/install-guide/ch_keystone.xml new file mode 100644 index 0000000000..4198ca2f92 --- /dev/null +++ b/doc/install-guide/ch_keystone.xml @@ -0,0 +1,19 @@ + + + + Configuring the Identity Service + + + + + + + + diff --git a/doc/install-guide/ch_neutron.xml b/doc/install-guide/ch_neutron.xml new file mode 100644 index 0000000000..c300d29a03 --- /dev/null +++ b/doc/install-guide/ch_neutron.xml @@ -0,0 +1,9 @@ + + + Using Neutron Networking + + FIXME + diff --git a/doc/install-guide/ch_nova.xml b/doc/install-guide/ch_nova.xml new file mode 100644 index 0000000000..a8a28cdcba --- /dev/null +++ b/doc/install-guide/ch_nova.xml @@ -0,0 +1,18 @@ + + + Configuring the Compute Services + + + + + + + + + + + diff --git a/doc/install-guide/ch_overview.xml b/doc/install-guide/ch_overview.xml new file mode 100644 index 0000000000..8108f8f785 --- /dev/null +++ b/doc/install-guide/ch_overview.xml @@ -0,0 +1,22 @@ + + + Overview and Architecture +
+ OpenStack Overview + The OpenStack project is an open source cloud computing + platform for all types of clouds, which aims to be simple to + implement, massively scalable, and feature rich. Developers and + cloud computing technologists from around the world create the + OpenStack project. + +
+ +
+ Sample Architecture + +
+
diff --git a/doc/install-guide/ch_swift.xml b/doc/install-guide/ch_swift.xml new file mode 100644 index 0000000000..f0cfea40cb --- /dev/null +++ b/doc/install-guide/ch_swift.xml @@ -0,0 +1,9 @@ + + + Adding Object Storage + + FIXME + diff --git a/doc/install-guide/section_configure-creds.xml b/doc/install-guide/section_configure-creds.xml deleted file mode 100644 index 467198b78e..0000000000 --- a/doc/install-guide/section_configure-creds.xml +++ /dev/null @@ -1,48 +0,0 @@ - -
- Defining Compute and Image Service Credentials - The commands in this section can be run on any machine that can access the cloud - controller node over the network. You can run commands directly on the cloud controller, if - you like, but it isn't required. - Create an openrc file that can contain these variables that are used - by the nova (Compute) and glance (Image) command-line - interface clients. These commands can be run by any user, and the - openrc file can be stored anywhere. In this document, we store the - openrc file in the ~/creds directory: - -$ mkdir ~/creds -$ nano ~/creds/openrc - - In this example, we are going to create an openrc file with - credentials associated with a user who is not an administrator. Because the user is not an - administrator, the credential file will use the URL associated with the keystone service - API, which runs on port 5000. If we wanted to use the - keystone command-line tool to perform administrative commands, we - would use the URL associated with the keystone admin API, which runs on port - 35357. - - In the openrc file you create, paste these values: - - Next, ensure these are used in your environment. If you see - 401 Not Authorized errors on commands using tokens, ensure - that you have properly sourced your credentials and that all - the pipelines are accurate in the configuration files. - -$ source ~/creds/openrc - -Verify your credentials are working by using the nova -client to list the available images: -$ nova image-list - -+--------------------------------------+--------------+--------+--------+ -| ID | Name | Status | Server | -+--------------------------------------+--------------+--------+--------+ -| acafc7c0-40aa-4026-9673-b879898e1fc2 | CirrOS 0.3.1 | ACTIVE | | -+--------------------------------------+--------------+--------+--------+ - -Note that the ID value on your installation will be different. -
- diff --git a/doc/install-guide/section_glance-install.xml b/doc/install-guide/section_glance-install.xml new file mode 100644 index 0000000000..878c184d88 --- /dev/null +++ b/doc/install-guide/section_glance-install.xml @@ -0,0 +1,113 @@ + +
+ Installing the Image Service
+
+ Install the Image Service on the controller node.
+ # sudo apt-get install glance
+ # yum install openstack-glance
+ # zypper install openstack-glance
+
+ The Image Service stores information about images in a database.
+ This guide uses the MySQL database used by other OpenStack services.
+ The Ubuntu packages create an sqlite database by
+ default. Delete the glance.sqlite file created in
+ the /var/lib/glance/ directory.
+
+ Use the openstack-db command to create the
+ database and tables for the Image Service, as well as a database user
+ called glance to connect to the database. Replace
+ GLANCE_DBPASS with a
+ password of your choosing.
+
+ # openstack-db --init --service glance --password GLANCE_DBPASS
+
+ You now have to tell the Image Service to use that database. The Image
+ Service provides two OpenStack services: glance-api and
+ glance-registry. They each have separate configuration
+ files, so you will have to configure both throughout this section.
+
+ # openstack-config --set /etc/glance/glance-api.conf \
+   DEFAULT sql_connection mysql://glance:GLANCE_DBPASS@controller/glance
+# openstack-config --set /etc/glance/glance-registry.conf \
+   DEFAULT sql_connection mysql://glance:GLANCE_DBPASS@controller/glance
+
+ Create a user called glance that the Image
+ Service can use to authenticate with the Identity Service. Use the
+ service tenant and give the user the
+ admin role.
+
+
+ These examples assume you have the appropriate environment
+ variables set to specify your credentials, as described in
+ .
+
+
+ # keystone user-create --name=glance --pass=GLANCE_PASS --email=glance@example.com
+# keystone user-role-add --user=glance --tenant=service --role=admin
+
+ For the Image Service to use these credentials, you have to add
+ them to the configuration files.
+
+ # openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_host controller
+# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_user glance
+# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_tenant_name service
+# openstack-config --set /etc/glance/glance-api.conf keystone_authtoken admin_password GLANCE_PASS
+# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_host controller
+# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_user glance
+# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_tenant_name service
+# openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken admin_password GLANCE_PASS
+
+ You also have to add the credentials to the files
+ /etc/glance/glance-api-paste.ini and
+ /etc/glance/glance-registry-paste.ini. Open each file
+ in a text editor and locate the section [filter:authtoken].
+ Make sure the following options are set:
+
+ [filter:authtoken]
+paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
+auth_host=controller
+admin_user=glance
+admin_tenant_name=service
+admin_password=GLANCE_PASS
+
+
+ You have to register the Image Service with the Identity Service
+ so that other OpenStack services can locate it. Register the service and
+ specify the endpoint using the keystone command.
+
+ # keystone service-create --name=glance --type=image \
+   --description="Glance Image Service"
+
+ Note the id property returned and use it when
+ creating the endpoint.
+ + # keystone endpoint-create \ + --service-id=the_service_id_above \ + --publicurl=http://controller:9292 \ + --internalurl=http://controller:9292 \ + --adminurl=http://controller:9292 + + Finally, start the glance-api and + glance-registry services and configure them to + start when the system boots. + + # service glance-api start +# service glance-registry start +# chkconfig glance-api on +# chkconfig glance-registry on + # service openstack-glance-api start +# service openstack-glance-registry start +# chkconfig openstack-glance-api on +# chkconfig openstack-glance-registry on + # systemctl start openstack-glance-api.service +# systemctl start openstack-glance-registry.service +# systemctl enable openstack-glance-api.service +# systemctl enable openstack-glance-registry.service + +
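If you prefer not to copy the service ID by hand, one possible shortcut is to capture it from keystone service-list into a shell variable (the SERVICE_ID name here is only an example) and pass that to the endpoint-create command above:

# SERVICE_ID=$(keystone service-list | awk '/ glance / {print $2}')
# echo $SERVICE_ID

This assumes the admin credentials are available in your environment, as noted earlier in this section.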
diff --git a/doc/install-guide/section_images-verifying-install.xml b/doc/install-guide/section_glance-verify.xml similarity index 74% rename from doc/install-guide/section_images-verifying-install.xml rename to doc/install-guide/section_glance-verify.xml index 9f1cabcf5b..cf0ea9cfc1 100644 --- a/doc/install-guide/section_images-verifying-install.xml +++ b/doc/install-guide/section_glance-verify.xml @@ -1,5 +1,5 @@ -
@@ -21,34 +21,24 @@ The download is done in a dedicated directory: $ mkdir images $ cd images/ -$ wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img +$ curl -O http://cdn.download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img You can now use the glance image-create command to upload the image to the Image Service, passing the image file through standard input: - The following commands show --os-username, - --os-password, - --os-tenant-name, - --os-auth-url parameters. You could also use - the OS_* environment variables by setting them in - an example openrc file: - - Then you would source these environment variables by running source openrc. - $ glance --os-username=admin --os-password=secrete --os-tenant-name=demo --os-auth-url=http://192.168.206.130:5000/v2.0 \ - image-create \ - --name="CirrOS 0.3.1" \ - --disk-format=qcow2 \ - --container-format bare < cirros-0.3.1-x86_64-disk.img + + # glance image-create --name="CirrOS 0.3.1" --disk-format=qcow2 \ + --container-format=bare --is-public=true < cirros-0.3.1-x86_64-disk.img +------------------+--------------------------------------+ | Property | Value | +------------------+--------------------------------------+ | checksum | d972013792949d0d3ba628fbe8685bce | | container_format | bare | -| created_at | 2013-05-08T18:59:18 | +| created_at | 2013-10-08T18:59:18 | | deleted | False | | deleted_at | None | | disk_format | qcow2 | | id | acafc7c0-40aa-4026-9673-b879898e1fc2 | -| is_public | False | +| is_public | True | | min_disk | 0 | | min_ram | 0 | | name | CirrOS 0.3.1 | @@ -109,12 +99,11 @@ Now a glance image-list should show the image attributes: - $ glance --os-username=admin --os-password=secrete --os-tenant-name=demo --os-auth-url=http://192.168.206.130:5000/v2.0 \ - image-list -+--------------------------------------+---------------------------------+-------------+------------------+----------+--------+ -| ID | Name | Disk Format | Container Format | Size | Status | -+--------------------------------------+---------------------------------+-------------+------------------+----------+--------+ -| acafc7c0-40aa-4026-9673-b879898e1fc2 | CirrOS 0.3.1 | qcow2 | bare | 13147648 | active | -+--------------------------------------+---------------------------------+-------------+------------------+----------+--------+ + # glance image-list ++--------------------------------------+-----------------+-------------+------------------+----------+--------+ +| ID | Name | Disk Format | Container Format | Size | Status | ++--------------------------------------+-----------------+-------------+------------------+----------+--------+ +| acafc7c0-40aa-4026-9673-b879898e1fc2 | CirrOS 0.3.1 | qcow2 | bare | 13147648 | active | ++--------------------------------------+-----------------+-------------+------------------+----------+--------+
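As an optional extra check, you can hash the downloaded image file before uploading it and compare the result with the checksum that the Image Service reports:

$ md5sum cirros-0.3.1-x86_64-disk.img

The value should match the checksum field shown in the image-create output above, d972013792949d0d3ba628fbe8685bce for this CirrOS image.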
diff --git a/doc/install-guide/section_identity-install-keystone.xml b/doc/install-guide/section_identity-install-keystone.xml deleted file mode 100644 index f00a818a72..0000000000 --- a/doc/install-guide/section_identity-install-keystone.xml +++ /dev/null @@ -1,673 +0,0 @@ - -
- Installing and Configuring the Identity Service - Install the Identity service on any server that is accessible - to the other servers you intend to use for OpenStack services, as - root: - # apt-get install keystone python-keystone python-keystoneclient - $ yum install openstack-utils openstack-keystone python-keystoneclient - $ zypper install openstack-utils openstack-keystone python-keystoneclient - After installing, you need to delete the sqlite database it - creates, then change the configuration to point to a MySQL - database. This configuration enables easier scaling scenarios - since you can bring up multiple Keystone front ends when needed, - and configure them all to point back to the same database. Plus a - database backend has built-in data replication features and - documentation surrounding high availability and data redundancy - configurations. - Delete the keystone.db file created in - the /var/lib/keystone - directory.# rm /var/lib/keystone/keystone.db - Delete the keystone.db file created in - the /var/lib/keystone - directory.$ sudo rm /var/lib/keystone/keystone.db - Configure the production-ready backend data store rather than - using the catalog supplied by default for the ability to back up - the service and endpoint data. This example shows MySQL. - The following sequence of commands will create a MySQL - database named "keystone" and a MySQL user named "keystone" with - full access to the "keystone" MySQL database. - On Fedora, RHEL, CentOS, and openSUSE, you can configure the Keystone - database with the openstack-db - command.$ sudo openstack-db --init --service keystone - To manually create the database, start the mysql command line client by - running: - $ mysql -u root -p - Enter the mysql root user's password when prompted. - To configure the MySQL database, create the keystone - database. - mysql> CREATE DATABASE keystone; - Create a MySQL user for the newly-created keystone database that has full control of the - keystone database. - - Note - Choose a secure password for the keystone user and replace - all references to - [YOUR_KEYSTONEDB_PASSWORD] with - this password. - - mysql> GRANT ALL ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '[YOUR_KEYSTONEDB_PASSWORD]'; -mysql> GRANT ALL ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '[YOUR_KEYSTONEDB_PASSWORD]'; - - - In the above commands, even though the 'keystone'@'%' also matches - 'keystone'@'localhost', you must explicitly specify the - 'keystone'@'localhost' entry. - By default, MySQL will create entries in the user table with User='' - and Host='localhost'. The User='' acts as a wildcard, - matching all users. If you do not have the 'keystone'@'localhost' account, - and you try to log in as the keystone user, the precedence rules of MySQL will match against - the User='' Host='localhost' account before it matches against the - User='keystone' Host='%' account. This will result in an error message - that looks like: - - ERROR 1045 (28000): Access denied for user 'keystone'@'localhost' (using password: YES) - - Thus, we create a separate User='keystone' Host='localhost' entry - that will match with higher precedence. - See the MySQL documentation on connection verification for more details on how MySQL - determines which row in the user table it uses when authenticating connections. - - - Enter quit at the mysql> prompt to exit - MySQL. - mysql> quit - - Reminder - Recall that this document assumes the Cloud Controller node - has an IP address of 192.168.206.130. 
- - Once Keystone is installed, it is configured via a primary - configuration file - (/etc/keystone/keystone.conf), a PasteDeploy - configuration file - (/etc/keystone/keystone-paste.ini) and by - initializing data into keystone using the command line client. By - default, Keystone's data store is sqlite. To change the data store - to mysql, change the line defining connection in - /etc/keystone/keystone.conf like so: - connection = mysql://keystone:[YOUR_KEYSTONEDB_PASSWORD]@192.168.206.130/keystone - Also, ensure that the proper service token is used in the - keystone.conf file. An example is provided in the Appendix or you can - generate a random string. The sample token is: - admin_token = 012345SECRET99TOKEN012345 - $ export ADMIN_TOKEN=$(openssl rand -hex 10) -$ sudo openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN - By default Keystone will use PKI tokens. To - create the signing keys and certificates run: - - $ sudo keystone-manage pki_setup -$ sudo chown -R keystone:keystone /etc/keystone/* /var/log/keystone/keystone.log - - # keystone-manage pki_setup -# chown -R keystone:keystone /etc/keystone/* /var/log/keystone/keystone.log - # keystone-manage pki_setup -# chown -R keystone:keystone /etc/keystone/* /var/log/keystone/keystone.log - - - In Ubuntu, keystone.conf is shipped as - root:root 644, but /etc/keystone has permissions for keystone:keystone - 700 so the files under it are protected from unauthorized users. - Next, restart the keystone service so that it picks up the new - database configuration. - # sudo service keystone restart - $ sudo service openstack-keystone start && sudo chkconfig openstack-keystone on - # systemctl restart openstack-keystone.service -# systemctl enable openstack-keystone.service - Lastly, initialize the new keystone database, as root: - # keystone-manage db_sync -
- Configuring Services to work with Keystone - Once Keystone is installed and running, you set up users and - tenants and services to be configured to work with it. You can - either follow the manual - steps or use a - script. -
- Setting up tenants, users, and roles - manually - You need to minimally define a tenant, user, and role to - link the tenant and user as the most basic set of details to - get other services authenticating and authorizing with the - Identity service. - - Scripted method available - These are the manual, unscripted steps using the - keystone client. A scripted method is available at Setting up tenants, - users, and roles - scripted. - - Typically, you would use a username and password to - authenticate with the Identity service. However, at this point - in the install, we have not yet created a user. Instead, we - use the service token to authenticate against the Identity - service. With the keystone command-line, - you can specify the token and the endpoint as arguments, as - follows:$ keystone --token 012345SECRET99TOKEN012345 --endpoint http://192.168.206.130:35357/v2.0 <command parameters> - You can also specify the token and endpoint as environment - variables, so they do not need to be explicitly specified each time. If - you are using the bash shell, the following commands will set these - variables in your current session so you don't have to pass them to the - client each time. Best practice for bootstrapping the first - administrative user is to use the OS_SERVICE_ENDPOINT and - OS_SERVICE_TOKEN together as environment - variables.$ export OS_SERVICE_TOKEN=012345SECRET99TOKEN012345 -$ export OS_SERVICE_ENDPOINT=http://192.168.206.130:35357/v2.0 - In the remaining examples, we will assume you have set the above environment - variables. - Because it is more secure to use a username and password to authenticate rather than the - service token, when you use the token the keystone client may output the - following warning, depending on the version of python-keystoneclient you are - running:WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored). - First, create a default tenant. We'll name it - demo in this example. There is an --enabled - parameter available for tenant-create and user-create that defaults to - true. Refer to the help in keystone help user-create - and keystone help user-update for more - details. - $ keystone tenant-create --name demo --description "Default Tenant" - +-------------+----------------------------------+ - | Property | Value | - +-------------+----------------------------------+ - | description | Default Tenant | - | enabled | True | - | id | b5815b046cfe47bb891a7b64119e7f80 | - | name | demo | - +-------------+----------------------------------+ - Set the id value from previous command as a shell variable. - $ export TENANT_ID=b5815b046cfe47bb891a7b64119e7f80 - Create a default user named admin. - $ keystone user-create --tenant-id $TENANT_ID --name admin --pass secrete - +----------+----------------------------------+ - | Property | Value | - +----------+----------------------------------+ - | email | | - | enabled | True | - | id | a4c2d43f80a549a19864c89d759bb3fe | - | name | admin | - | tenantId | b5815b046cfe47bb891a7b64119e7f80 | - +----------+----------------------------------+ - Set the admin id value from previous command's output as a shell variable. - $ export ADMIN_USER_ID=a4c2d43f80a549a19864c89d759bb3fe - Create an administrative role based on keystone's default - policy.json file, - admin. 
- $ keystone role-create --name admin - +----------+----------------------------------+ - | Property | Value | - +----------+----------------------------------+ - | id | e3d9d157cc95410ea45d23bbbc2e5c10 | - | name | admin | - +----------+----------------------------------+ - Set the role id value from previous command's output as a shell variable. - $ export ROLE_ID=e3d9d157cc95410ea45d23bbbc2e5c10 - Grant the admin role to the - admin user in the - demo tenant with - "user-role-add". - $ keystone user-role-add --user-id $ADMIN_USER_ID --tenant-id $TENANT_ID --role-id $ROLE_ID - - Create a service tenant named service. This tenant contains all the - services that we make known to the service catalog. - $ keystone tenant-create --name service --description "Service Tenant" - +-------------+----------------------------------+ - | Property | Value | - +-------------+----------------------------------+ - | description | Service Tenant | - | enabled | True | - | id | eb7e0c10a99446cfa14c244374549e9d | - | name | service | - +-------------+----------------------------------+ - - Set the tenant id value from previous command's output as a shell variable. - $ export SERVICE_TENANT_ID=eb7e0c10a99446cfa14c244374549e9d - Create a glance service user in the service tenant. You'll do this - for any service you add to be in the Identity service catalog. - $ keystone user-create --tenant-id $SERVICE_TENANT_ID --name glance --pass glance -WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored). - +----------+----------------------------------+ - | Property | Value | - +----------+----------------------------------+ - | email | | - | enabled | True | - | id | 46b2667a7807483d983e0b4037a1623b | - | name | glance | - | tenantId | eb7e0c10a99446cfa14c244374549e9d | - +----------+----------------------------------+ -Set the id value from previous command as a shell variable. - $ export GLANCE_USER_ID=46b2667a7807483d983e0b4037a1623b - Grant the admin role to the - glance user in the - service tenant. - $ keystone user-role-add --user-id $GLANCE_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ROLE_ID - - Create a nova service user in the service tenant. - $ keystone user-create --tenant-id $SERVICE_TENANT_ID --name nova --pass nova -WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored). - +----------+----------------------------------+ - | Property | Value | - +----------+----------------------------------+ - | email | | - | enabled | True | - | id | 54b3776a8707834d983e0b4037b1345c | - | name | nova | - | tenantId | eb7e0c10a99446cfa14c244374549e9d | - +----------+----------------------------------+ - -Set the nova user's id value from previous command's output as a shell variable. - $ export NOVA_USER_ID=54b3776a8707834d983e0b4037b1345c - Grant the admin role to the - nova user in the - service tenant. - $ keystone user-role-add --user-id $NOVA_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ROLE_ID - - Create a cinder service user in the service tenant. - $ keystone user-create --tenant-id $SERVICE_TENANT_ID --name cinder --pass openstack -WARNING: Bypassing authentication using a token & endpoint (authentication credentials are being ignored). 
- +----------+----------------------------------+ - | Property | Value | - +----------+----------------------------------+ - | email | | - | enabled | True | - | id | c95bf79153874ac69b4758ebf75498a6 | - | name | cinder | - | tenantId | eb7e0c10a99446cfa14c244374549e9d | - +----------+----------------------------------+ -Set the cinder user's id value from previous command's output as a shell variable. - $ export CINDER_USER_ID=c95bf79153874ac69b4758ebf75498a6 - Grant the admin role to the - cinder user in the service - tenant. - $ keystone user-role-add --user-id $CINDER_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ROLE_ID - - Create an ec2 service user in the service tenant. - $ keystone user-create --tenant-id $SERVICE_TENANT_ID --name ec2 --pass ec2 - +----------+----------------------------------+ - | Property | Value | - +----------+----------------------------------+ - | email | | - | enabled | True | - | id | 32e7668b8707834d983e0b4037b1345c | - | name | ec2 | - | tenantId | eb7e0c10a99446cfa14c244374549e9d | - +----------+----------------------------------+ -Set the ec2 user's id value from previous command's output as a shell variable. - $ export EC2_USER_ID=32e7668b8707834d983e0b4037b1345c - Grant the admin role to the - ec2 user in the - service tenant. - $ keystone user-role-add --user-id $EC2_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ROLE_ID - - - Create an Object Storage service user in the service tenant. - $ keystone user-create --tenant-id $SERVICE_TENANT_ID --name swift --pass swiftpass - +----------+----------------------------------+ - | Property | Value | - +----------+----------------------------------+ - | email | | - | enabled | True | - | id | 4346677b8909823e389f0b4037b1246e | - | name | swift | - | tenantId | eb7e0c10a99446cfa14c244374549e9d | - +----------+----------------------------------+ -Set the swift user's id value from previous command's output as a shell variable. - $ export SWIFT_USER_ID=4346677b8909823e389f0b4037b1246e - Grant the admin role to the - swift user in the - service tenant. - $ keystone user-role-add --user-id $SWIFT_USER_ID --tenant-id $SERVICE_TENANT_ID --role-id $ROLE_ID - - Next you create definitions for the services. - -
-
-
- Defining Services - Keystone also acts as a service catalog to let other - OpenStack systems know where relevant API endpoints exist for - OpenStack Services. The OpenStack Dashboard, in particular, uses - the service catalog heavily. This must - be configured for the OpenStack Dashboard to - properly function. - There are two alternative ways of defining services with - keystone: - - Using a template file - - - Using a database backend - - While using a template file is simpler, it is not - recommended except for development environments such as DevStack. The - template file does not enable CRUD operations on the service - catalog through keystone commands, but you can use the - service-list command when using the template catalog. A database - backend can provide better reliability, availability, and data - redundancy. This section describes how to populate the Keystone - service catalog using the database backend. Your - /etc/keystone/keystone.conf file should - contain the following lines if it is properly configured to use - the database backend. - [catalog] -driver = keystone.catalog.backends.sql.Catalog -
- Elements of a Keystone service catalog entry - For each service in the catalog, you must perform two - keystone operations: - - Use the keystone service-create - command to create a database entry for the service, with - the following attributes: - - --name - - Name of the service (e.g., - nova, - ec2, - glance, - keystone) - - - - --type - - Type of service (e.g., - compute, - ec2, - image, - identity) - - - - --description - - A description of the service, (e.g., - "Nova Compute - Service") - - - - - - - Use the keystone endpoint-create - command to create a database entry that describes how - different types of clients can connect to the service, - with the following attributes: - - - --region - - the region name you've given to the OpenStack - cloud you are deploying (e.g., RegionOne) - - - - --service-id - - The ID field returned by the keystone - service-create (e.g., - 935fd37b6fa74b2f9fba6d907fa95825) - - - - --publicurl - - The URL of the public-facing endpoint for the - service (e.g., - http://192.168.206.130:9292 - or - http://192.168.206.130:8774/v2/%(tenant_id)s) - - - - - --internalurl - - The URL of an internal-facing endpoint for the - service. - This typically has the same value as - publicurl. - - - - --adminurl - - The URL for the admin endpoint for the - service. The Keystone and EC2 services use - different endpoints for - adminurl and - publicurl, but for other - services these endpoints will be the same. - - - - - - - Keystone allows some URLs to contain special variables, - which are automatically substituted with the correct value at - runtime. Some examples in this document employ the - tenant_id variable, which we use when - specifying the Volume and Compute service endpoints. Variables - can be specified using either - %(varname)s - or $(varname)s - notation. In this document, we always use the - %(varname)s - notation (e.g., %(tenant_id)s) since - $ is interpreted as a special character - by Unix shells. -
-
- Creating keystone services and service endpoints - Here we define the services and their endpoints. Recall that you must have the following - environment variables - set.$ export OS_SERVICE_TOKEN=012345SECRET99TOKEN012345 -$ export OS_SERVICE_ENDPOINT=http://192.168.206.130:35357/v2.0 - Define the Identity service: - -$ keystone service-create --name=keystone --type=identity --description="Identity Service" - -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | Identity Service | -| id | 15c11a23667e427e91bc31335b45f4bd | -| name | keystone | -| type | identity | -+-------------+----------------------------------+ -$ keystone endpoint-create \ - --region RegionOne \ - --service-id=15c11a23667e427e91bc31335b45f4bd \ - --publicurl=http://192.168.206.130:5000/v2.0 \ - --internalurl=http://192.168.206.130:5000/v2.0 \ - --adminurl=http://192.168.206.130:35357/v2.0 -+-------------+-----------------------------------+ -| Property | Value | -+-------------+-----------------------------------+ -| adminurl | http://192.168.206.130:35357/v2.0 | -| id | 11f9c625a3b94a3f8e66bf4e5de2679f | -| internalurl | http://192.168.206.130:5000/v2.0 | -| publicurl | http://192.168.206.130:5000/v2.0 | -| region | RegionOne | -| service_id | 15c11a23667e427e91bc31335b45f4bd | -+-------------+-----------------------------------+ - - - Define the Compute service, which requires a separate - endpoint for each tenant. Here we use the - service tenant from the previous section. - The %(tenant_id)s and single quotes - around the publicurl, - internalurl, and - adminurl must be typed exactly as - shown for both the Compute endpoint and the Volume - endpoint. - - -$ keystone service-create --name=nova --type=compute --description="Compute Service" -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | Compute Service | -| id | abc0f03c02904c24abdcc3b7910e2eed | -| name | nova | -| type | compute | -+-------------+----------------------------------+ - -$ keystone endpoint-create \ - --region RegionOne \ - --service-id=abc0f03c02904c24abdcc3b7910e2eed \ - --publicurl='http://192.168.206.130:8774/v2/%(tenant_id)s' \ - --internalurl='http://192.168.206.130:8774/v2/%(tenant_id)s' \ - --adminurl='http://192.168.206.130:8774/v2/%(tenant_id)s' -+-------------+----------------------------------------------+ -| Property | Value | -+-------------+----------------------------------------------+ -| adminurl | http://192.168.206.130:8774/v2/%(tenant_id)s | -| id | 935fd37b6fa74b2f9fba6d907fa95825 | -| internalurl | http://192.168.206.130:8774/v2/%(tenant_id)s | -| publicurl | http://192.168.206.130:8774/v2/%(tenant_id)s | -| region | RegionOne | -| service_id | abc0f03c02904c24abdcc3b7910e2eed | -+-------------+----------------------------------------------+ - - - Define the Volume service, which also requires a separate - endpoint for each tenant. 
- $ keystone service-create --name=cinder --type=volume --description="Volume Service" - -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | Volume Service | -| id | 1ff4ece13c3e48d8a6461faebd9cd38f | -| name | volume | -| type | volume | -+-------------+----------------------------------+ - -$ keystone endpoint-create \ - --region RegionOne \ - --service-id=1ff4ece13c3e48d8a6461faebd9cd38f \ - --publicurl='http://192.168.206.130:8776/v1/%(tenant_id)s' \ - --internalurl='http://192.168.206.130:8776/v1/%(tenant_id)s' \ - --adminurl='http://192.168.206.130:8776/v1/%(tenant_id)s' - -+-------------+----------------------------------------------+ -| Property | Value | -+-------------+----------------------------------------------+ -| adminurl | http://192.168.206.130:8776/v1/%(tenant_id)s | -| id | 1ff4ece13c3e48d8a6461faebd9cd38f | -| internalurl | http://192.168.206.130:8776/v1/%(tenant_id)s | -| publicurl | http://192.168.206.130:8776/v1/%(tenant_id)s | -| region | RegionOne | -| service_id | 8a70cd235c7d4a05b43b2dffb9942cc0 | -+-------------+----------------------------------------------+ - - Define the Image service: - $ keystone service-create --name=glance --type=image --description="Image Service" - -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | Image Service | -| id | 7d5258c490144c8c92505267785327c1 | -| name | glance | -| type | image | -+-------------+----------------------------------+ - -$ keystone endpoint-create \ - --region RegionOne \ - --service-id=7d5258c490144c8c92505267785327c1 \ - --publicurl=http://192.168.206.130:9292 \ - --internalurl=http://192.168.206.130:9292 \ - --adminurl=http://192.168.206.130:9292 - -+-------------+-----------------------------------+ -| Property | Value | -+-------------+-----------------------------------+ -| adminurl | http://192.168.206.130:9292 | -| id | 3c8c0d749f21490b90163bfaed9befe7 | -| internalurl | http://192.168.206.130:9292 | -| publicurl | http://192.168.206.130:9292 | -| region | RegionOne | -| service_id | 7d5258c490144c8c92505267785327c1 | -+-------------+-----------------------------------+ - - Define the EC2 compatibility service: - $ keystone service-create --name=ec2 --type=ec2 --description="EC2 Compatibility Layer" -+-------------+----------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | EC2 Compatibility Layer | -| id | 181cdad1d1264387bcc411e1c6a6a5fd | -| name | ec2 | -| type | ec2 | -+-------------+----------------------------------+ - -$ keystone endpoint-create \ - --region RegionOne \ - --service-id=181cdad1d1264387bcc411e1c6a6a5fd \ - --publicurl=http://192.168.206.130:8773/services/Cloud \ - --internalurl=http://192.168.206.130:8773/services/Cloud \ - --adminurl=http://192.168.206.130:8773/services/Admin - -+-------------+--------------------------------------------+ -| Property | Value | -+-------------+--------------------------------------------+ -| adminurl | http://192.168.206.130:8773/services/Admin | -| id | d2a3d7490c61442f9b2c8c8a2083c4b6 | -| internalurl | http://192.168.206.130:8773/services/Cloud | -| publicurl | http://192.168.206.130:8773/services/Cloud | -| region | RegionOne | -| service_id | 181cdad1d1264387bcc411e1c6a6a5fd | -+-------------+--------------------------------------------+ - - Define the Object Storage service: - $ keystone 
service-create --name=swift --type=object-store --description="Object Storage Service" -+-------------+---------------------------------+ -| Property | Value | -+-------------+----------------------------------+ -| description | Object Storage Service | -| id | 272efad2d1234376cbb911c1e5a5a6ed | -| name | swift | -| type | object-store | -+-------------+----------------------------------+ - -$ keystone endpoint-create \ - --region RegionOne \ - --service-id=272efad2d1234376cbb911c1e5a5a6ed \ - --publicurl 'http://192.168.206.130:8888/v1/AUTH_%(tenant_id)s' \ - --internalurl 'http://192.168.206.130:8888/v1/AUTH_%(tenant_id)s' \ - --adminurl 'http://192.168.206.130:8888/v1' - -+-------------+---------------------------------------------------+ -| Property | Value | -+-------------+---------------------------------------------------+ -| adminurl | http://192.168.206.130:8888/v1 | -| id | e32b3c4780e51332f9c128a8c208a5a4 | -| internalurl | http://192.168.206.130:8888/v1/AUTH_%(tenant_id)s | -| publicurl | http://192.168.206.130:8888/v1/AUTH_%(tenant_id)s | -| region | RegionOne | -| service_id | 272efad2d1234376cbb911c1e5a5a6ed | -+-------------+---------------------------------------------------+ - -
-
- Setting up Tenants, Users, Roles, and Services - - Scripted - The Keystone project offers a bash script for populating - tenants, users, roles and services at http://git.openstack.org/cgit/openstack/keystone/plain/tools/sample_data.sh - with sample data. This script uses 127.0.0.1 for all endpoint - IP addresses. This script also defines services for you. -
-
-
diff --git a/doc/install-guide/section_identity-verify-install.xml b/doc/install-guide/section_identity-verify-install.xml deleted file mode 100644 index 79079e730e..0000000000 --- a/doc/install-guide/section_identity-verify-install.xml +++ /dev/null @@ -1,113 +0,0 @@ - -
- Verifying the Identity Service Installation - - Verify that authentication is behaving as expected by using your - established username and password to generate an authentication token: - - $ keystone --os-username=admin --os-password=secrete --os-auth-url=http://192.168.206.130:35357/v2.0 token-get - -+----------+----------------------------------+ -| Property | Value | -+----------+----------------------------------+ -| expires | 2012-10-04T16:08:03Z | -| id | 960ad732a0eb4b2a88516f18384c1fba | -| user_id | a4c2d43f80a549a19864c89d759bb3fe | -+----------+----------------------------------+ - - - You should receive a token in response, paired with your user ID. - - - This verifies that keystone is running on the expected endpoint, and - that your user account is established with the expected credentials. - - - Next, verify that authorization is behaving as expected by requesting - authorization on a tenant: - - $ keystone --os-username=admin --os-password=secrete --os-tenant-name=demo --os-auth-url=http://192.168.206.130:35357/v2.0 token-get - -+-----------+----------------------------------+ -| Property | Value | -+-----------+----------------------------------+ -| expires | 2012-10-04T16:10:14Z | -| id | 8787f264d2a34607b37aa8d58d956afa | -| tenant_id | c1ac0f7f0e55448fa3940fa6b8b54911 | -| user_id | a4c2d43f80a549a19864c89d759bb3fe | -+-----------+----------------------------------+ - - You should receive a new token in response, this time including the ID - of the tenant you specified. - This verifies that your user account has an explicitly defined role on - the specified tenant, and that the tenant exists as expected. - You can also set your --os-* variables in your - environment to simplify CLI usage. - Best practice for bootstrapping the first administrative user is to - use the OS_SERVICE_ENDPOINT and OS_SERVICE_TOKEN together as environment - variables. - Once the admin user credentials are established, you can set up a - keystonerc file with the admin credentials and - admin endpoint (note the use of port 35357): - export OS_USERNAME=admin -export OS_PASSWORD=secrete -export OS_TENANT_NAME=demo -export OS_AUTH_URL=http://192.168.206.130:35357/v2.0 - - Save and source the keystonerc file. - - $ source keystonerc - - Verify that your keystonerc is configured correctly - by performing the same command as above, but without any - --os-* arguments. - - $ keystone token-get - -+-----------+----------------------------------+ -| Property | Value | -+-----------+----------------------------------+ -| expires | 2012-10-04T16:12:38Z | -| id | 03a13f424b56440fb39278b844a776ae | -| tenant_id | c1ac0f7f0e55448fa3940fa6b8b54911 | -| user_id | a4c2d43f80a549a19864c89d759bb3fe | -+-----------+----------------------------------+ - - The command returns a token and the ID of the specified tenant. - - This verifies that you have configured your environment variables - correctly. - - - Finally, verify that your admin account has authorization to perform - administrative commands. - - - Reminder - Unlike basic authentication/authorization, which can be performed - against either port 5000 or 35357, administrative commands MUST be - performed against the admin API port: 35357). This means that you - MUST use port 35357 in your OS_AUTH_URL or - --os-auth-url setting when working with - keystone CLI. 
- - $ keystone user-list - -+----------------------------------+---------+-------+--------+ -| id | enabled | email | name | -+----------------------------------+---------+-------+--------+ -| 318003c9a97342dbab6ff81675d68364 | True | None | swift | -| 3a316b32f44941c0b9ebc577feaa5b5c | True | None | nova | -| ac4dd12ebad84e55a1cd964b356ddf65 | True | None | glance | -| a4c2d43f80a549a19864c89d759bb3fe | True | None | admin | -| ec47114af7014afd9a8994cbb6057a8b | True | None | ec2 | -+----------------------------------+---------+-------+--------+ - - - This verifies that your user account has the admin - role, as defined in keystone's policy.json file. - -
diff --git a/doc/install-guide/section_install-config-glance.xml b/doc/install-guide/section_install-config-glance.xml deleted file mode 100644 index f160f0eeff..0000000000 --- a/doc/install-guide/section_install-config-glance.xml +++ /dev/null @@ -1,182 +0,0 @@ - -
- Installing and Configuring the Image Service - - Install the Image service, as root: - # sudo apt-get install glance - # yum install openstack-glance - # zypper install openstack-glance - If you are using Ubuntu, delete the glance.sqlite file created in the - /var/lib/glance/ directory: - # rm /var/lib/glance/glance.sqlite - - -
- Configuring the Image Service database backend - - Configure the backend data store. For MySQL, create a glance - MySQL database and a glance MySQL user. Grant the "glance" user - full access to the glance MySQL database. - Start the MySQL command line client by running: - $ mysql -u root -p - Enter the MySQL root user's password when prompted. - To configure the MySQL database, create the glance database. - mysql> CREATE DATABASE glance; - Create a MySQL user for the newly-created glance database that has full control of the database. - mysql> GRANT ALL ON glance.* TO 'glance'@'%' IDENTIFIED BY '[YOUR_GLANCEDB_PASSWORD]'; -mysql> GRANT ALL ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '[YOUR_GLANCEDB_PASSWORD]'; - - In the above commands, even though the 'glance'@'%' also matches - 'glance'@'localhost', you must explicitly specify the - 'glance'@'localhost' entry. - By default, MySQL will create entries in the user table with - User='' and Host='localhost'. The - User='' acts as a wildcard, matching all users. If you do not - have the 'glance'@'localhost' account, and you try to log in as the - glance user, the precedence rules of MySQL will match against the User='' - Host='localhost' account before it matches against the - User='glance' Host='%' account. This will result in an error - message that looks like: - - ERROR 1045 (28000): Access denied for user 'glance'@'localhost' (using password: YES) - - Thus, we create a separate User='glance' Host='localhost' entry that - will match with higher precedence. - See the MySQL documentation on connection verification for more details on how MySQL - determines which row in the user table it uses when authenticating connections. - - - Enter quit at the - mysql> prompt to exit MySQL. - - mysql> quit - -
-
- Edit the Glance configuration files - The Image service has a number of options that you can - use to configure the Glance API server, optionally the - Glance Registry server, and the various storage backends - that Glance can use to store images. By default, the - storage backend is in a file, specified in the - glance-api.conf config file in the section [DEFAULT]. - - The glance-api service implements - versions 1 and 2 of the OpenStack Images API. By default, - both are enabled by setting these configuration options to - True in the glance-api.conf - file. - - enable_v1_api=True - enable_v2_api=True - Disable either version of the Images API by setting the - option to False in the - glance-api.conf file. - - In order to use the v2 API, you must copy the - necessary SQL configuration from your glance-registry - service to your glance-api configuration file. The - following instructions assume that you want to use the - v2 Image API for your installation. The v1 API is - implemented on top of the glance-registry service - while the v2 API is not. - - Most configuration is done via configuration files, with the Glance API server (and - possibly the Glance Registry server) using separate configuration files. When installing - through an operating system package management system, sample configuration files are - installed in /etc/glance. - This walkthrough installs the image service using a file - backend and the Identity service (Keystone) for - authentication. - Add the admin and service identifiers and - flavor=keystone to the end of - /etc/glance/glance-api.conf as - shown below. - [keystone_authtoken] -auth_host = 127.0.0.1 -auth_port = 35357 -auth_protocol = http -admin_tenant_name = service -admin_user = glance -admin_password = glance - -[paste_deploy] -# Name of the paste configuration file that defines the available pipelines -config_file = /etc/glance/glance-api-paste.ini - -# Partial name of a pipeline in your paste configuration file with the -# service name removed. For example, if your paste section name is -# [pipeline:glance-api-keystone], you would configure the flavor below -# as 'keystone'. -flavor=keystone - Ensure that - /etc/glance/glance-api.conf - points to the MySQL database rather than - sqlite.sql_connection = mysql://glance:[YOUR_GLANCEDB_PASSWORD]@192.168.206.130/glance - Restart glance-api to pick up these changed - settings. - # service openstack-glance-api restart - Update the last sections of - /etc/glance/glance-registry.conf - to reflect the values you set earlier for admin user and - the service tenant, plus enable the Identity service with - flavor=keystone. - [keystone_authtoken] -auth_host = 127.0.0.1 -auth_port = 35357 -auth_protocol = http -admin_tenant_name = service -admin_user = glance -admin_password = glance - -[paste_deploy] -# Name of the paste configuration file that defines the available pipelines -config_file = /etc/glance/glance-registry-paste.ini - -# Partial name of a pipeline in your paste configuration file with the -# service name removed. For example, if your paste section name is -# [pipeline:glance-api-keystone], you would configure the flavor below -# as 'keystone'. 
-flavor=keystone - Update - /etc/glance/glance-registry-paste.ini - by enabling the Identity service, keystone: - # Use this pipeline for keystone auth -[pipeline:glance-registry-keystone] -pipeline = authtoken context registryapp - Ensure that - /etc/glance/glance-registry.conf - points to the MySQL database rather than - sqlite.sql_connection = mysql://glance:[YOUR_GLANCEDB_PASSWORD]@192.168.206.130/glance - Restart glance-registry to pick up these changed - settings. - # service openstack-glance-registry restart - - Any time you change the .conf files, restart the - corresponding service. - - On Ubuntu 12.04, the database tables are - under version control and you must do these steps on a new - install to prevent the Image service from breaking - possible upgrades, as root: - # glance-manage version_control 0 - Now you can populate or migrate the database. - # glance-manage db_sync - Restart glance-registry and glance-api services, as - root: - # service glance-registry restart -# service glance-api restart - - This guide does not configure image caching. Refer - to http://docs.openstack.org/developer/glance/ - for more information. - -
diff --git a/doc/install-guide/section_keystone-install.xml b/doc/install-guide/section_keystone-install.xml new file mode 100644 index 0000000000..d195297dd4 --- /dev/null +++ b/doc/install-guide/section_keystone-install.xml @@ -0,0 +1,64 @@ + +
+ Installing the Identity Service
+
+
+ Install the Identity Service on the controller node:
+ # apt-get install keystone python-keystone python-keystoneclient
+ # yum install openstack-keystone python-keystoneclient
+ # zypper install openstack-keystone python-keystoneclient
+
+
+
+ The Identity Service uses a database to store information.
+ Specify the location of the database in the configuration file.
+ In this guide, we use a MySQL database on the controller node
+ with the username keystone. Replace
+ KEYSTONE_DBPASS
+ with a suitable password for the database user.
+ # openstack-config --set /etc/keystone/keystone.conf \
+ sql connection mysql://keystone:KEYSTONE_DBPASS@controller/keystone
+
+
+
+ Use the openstack-db command to create the
+ database and tables, as well as a database user called
+ keystone to connect to the database. Replace
+ KEYSTONE_DBPASS
+ with the same password used in the previous step.
+ # openstack-db --init --service keystone --password KEYSTONE_DBPASS
+
+
+
+ You need to define an authorization token that is used as a
+ shared secret between the Identity Service and other OpenStack services.
+ Use openssl to generate a random token, then store it
+ in the configuration file.
+ # ADMIN_TOKEN=$(openssl rand -hex 10)
+# echo $ADMIN_TOKEN
+# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN
+
+
+
+ By default, Keystone uses PKI tokens. Create the signing
+ keys and certificates.
+ # keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
+# chown -R keystone:keystone /etc/keystone/* /var/log/keystone/keystone.log
+
+
+
+ Start the Identity Service and enable it so that it starts when
+ the system boots.
+ # service keystone start
+# chkconfig keystone on
+ # service openstack-keystone start
+# chkconfig openstack-keystone on
+ # systemctl start openstack-keystone.service
+# systemctl enable openstack-keystone.service
+
+
+
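If your distribution does not ship the openstack-config utility (Ubuntu, for example, does not install it by default), you can make the same changes by editing /etc/keystone/keystone.conf directly. This is a minimal sketch, assuming the [sql] section and admin_token option targeted by the commands above; replace KEYSTONE_DBPASS and ADMIN_TOKEN with your own values and restart the service afterwards so the new settings take effect.

[DEFAULT]
# Shared secret generated earlier with: openssl rand -hex 10
admin_token = ADMIN_TOKEN

[sql]
# Point Keystone at the MySQL database instead of the default SQLite file
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone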
diff --git a/doc/install-guide/section_keystone-services.xml b/doc/install-guide/section_keystone-services.xml new file mode 100644 index 0000000000..1e685d2966 --- /dev/null +++ b/doc/install-guide/section_keystone-services.xml @@ -0,0 +1,62 @@ +
+ Defining Services and API Endpoints
+
+ The Identity Service also tracks what OpenStack services are
+ installed and where to locate them on the network. For each service
+ in your OpenStack installation, you must call
+ keystone service-create to describe the service
+ and keystone endpoint-create to specify the API
+ endpoints associated with the service.
+
+ For now, create a service for the Identity Service itself.
+ This allows you to stop using the authorization token and instead
+ authenticate normally when using the keystone
+ command in the future.
+
+ First, create a service entry for the Identity Service.
+
+ # keystone service-create --name=keystone --type=identity \
+ --description="Keystone Identity Service"
++-------------+----------------------------------+
+| Property | Value |
++-------------+----------------------------------+
+| description | Keystone Identity Service |
+| id | 15c11a23667e427e91bc31335b45f4bd |
+| name | keystone |
+| type | identity |
++-------------+----------------------------------+
+
+ The service id is randomly generated and will differ
+ from the one shown above when you run the command. Next, specify
+ an API endpoint for the Identity Service using the service id you
+ received. When you specify an endpoint, you provide three URLs
+ for the public API, the internal API, and the admin API. In this
+ guide, we use the hostname controller. Note
+ that the Identity Service uses a different port for the admin
+ API.
+
+ # keystone endpoint-create \
+ --service-id=15c11a23667e427e91bc31335b45f4bd \
+ --publicurl=http://controller:5000/v2.0 \
+ --internalurl=http://controller:5000/v2.0 \
+ --adminurl=http://controller:35357/v2.0
++-------------+-----------------------------------+
+| Property | Value |
++-------------+-----------------------------------+
+| adminurl | http://controller:35357/v2.0 |
+| id | 11f9c625a3b94a3f8e66bf4e5de2679f |
+| internalurl | http://controller:5000/v2.0 |
+| publicurl | http://controller:5000/v2.0 |
+| region | regionOne |
+| service_id | 15c11a23667e427e91bc31335b45f4bd |
++-------------+-----------------------------------+
+
+
+
+ As you add other services to your OpenStack installation, you
+ will call these commands again to register those services with the
+ Identity Service.
+
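To double-check what has been registered so far, you can list the service and endpoint records with the keystone client, still using the OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT environment variables set in the previous section. This is only a sanity check; the IDs in your output will differ from the examples above, and the exact columns depend on your client version.

# keystone service-list
# keystone endpoint-list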
\ No newline at end of file diff --git a/doc/install-guide/section_keystone-users.xml b/doc/install-guide/section_keystone-users.xml new file mode 100644 index 0000000000..eb26cb13f0 --- /dev/null +++ b/doc/install-guide/section_keystone-users.xml @@ -0,0 +1,52 @@ +
+ Defining Users, Tenants, and Roles
+
+ Once Keystone is installed and running, set up the users, tenants,
+ and roles to authenticate against. These grant access to the
+ services and endpoints described in the next section.
+
+ Typically, you would use a username and password to authenticate
+ with the Identity service. At this point, however, we have not created
+ any users, so we have to use the authorization token created in the
+ previous section. You can pass this token
+ to the keystone command on the command line or set the
+ OS_SERVICE_TOKEN environment variable. We'll set
+ OS_SERVICE_TOKEN, as well as
+ OS_SERVICE_ENDPOINT, to specify where the Identity
+ Service is running. Replace
+ FCAF3E...
+ with your authorization token.
+
+ # export OS_SERVICE_TOKEN=FCAF3E...
+# export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
+
+ First, create a tenant for an administrative user and a tenant
+ for other OpenStack services to use.
+
+ # keystone tenant-create --name=admin --description="Admin Tenant"
+# keystone tenant-create --name=service --description="Service Tenant"
+
+ Next, create an administrative user called admin.
+ Choose a password for the admin user and specify an
+ email address for the account.
+
+ # keystone user-create --name=admin --pass=ADMIN_PASS --email=admin@example.com
+
+ Create a role for administrative tasks called admin.
+ Any roles you create should map to roles specified in the
+ policy.json files of the various OpenStack services.
+ The default policy files use the admin role to allow
+ access to most services.
+
+ # keystone role-create --name=admin
+
+ Finally, you have to add roles to users. Users always log in with
+ a tenant, and roles are assigned to users within tenants. Add the
+ admin role to the admin user when
+ logging in with the admin tenant.
+
+ # keystone user-role-add --user=admin --tenant=admin --role=admin
+
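At this point you can confirm that the tenants, user, and role were created as intended. A quick check with the keystone client, still using the service token set above; the last command depends on your python-keystoneclient version and can be skipped if your client does not support it.

# keystone tenant-list
# keystone user-list
# keystone role-list
# keystone user-role-list --user admin --tenant admin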
diff --git a/doc/install-guide/section_keystone-verify.xml b/doc/install-guide/section_keystone-verify.xml new file mode 100644 index 0000000000..ec24864406 --- /dev/null +++ b/doc/install-guide/section_keystone-verify.xml @@ -0,0 +1,77 @@ + +
+
+ Verifying the Identity Service Installation
+
+ To verify that the Identity Service is installed and configured
+ correctly, first unset the OS_SERVICE_TOKEN and
+ OS_SERVICE_ENDPOINT environment variables. These
+ were only used to bootstrap the administrative user and register
+ the Identity Service.
+
+ # unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
+
+ You can now use regular username-based authentication.
+ Request an authentication token using the admin
+ user and the password you chose for that user.
+
+ # keystone --os-username=admin --os-password=ADMIN_PASS \
+ --os-auth-url=http://controller:35357/v2.0 token-get
+
+ You should receive a token in response, paired with your user ID.
+ This verifies that keystone is running on the expected endpoint, and
+ that your user account is established with the expected credentials.
+
+ Next, verify that authorization is behaving as expected by requesting
+ authorization on a tenant.
+
+ # keystone --os-username=admin --os-password=ADMIN_PASS \
+ --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 token-get
+
+ You should receive a new token in response, this time including the
+ ID of the tenant you specified. This verifies that your user account has
+ an explicitly defined role on the specified tenant, and that the tenant
+ exists as expected.
+
+ You can also set your --os-* variables in your
+ environment to simplify command-line usage. Set up a
+ keystonerc file with the admin credentials and
+ admin endpoint.
+
+ export OS_USERNAME=admin
+export OS_PASSWORD=ADMIN_PASS
+export OS_TENANT_NAME=admin
+export OS_AUTH_URL=http://controller:35357/v2.0
+
+ You can source this file to read in the environment variables.
+
+ # source keystonerc
+
+ Verify that your keystonerc is configured
+ correctly by running the same command as above, but without any
+ --os-* arguments.
+
+ $ keystone token-get
+
+ The command returns a token and the ID of the specified tenant.
+ This verifies that you have configured your environment variables
+ correctly.
+
+ Finally, verify that your admin account has authorization to
+ perform administrative commands.
+
+ # keystone user-list
+
++----------------------------------+---------+--------------------+--------+
+| id | enabled | email | name |
++----------------------------------+---------+--------------------+--------+
+| a4c2d43f80a549a19864c89d759bb3fe | True | admin@example.com | admin |
++----------------------------------+---------+--------------------+--------+
+
+ This verifies that your user account has the
+ admin role, which matches the role used in
+ the Identity Service's policy.json file.
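If the keystone client reports an error and you want to rule out client-side problems, you can also exercise the API directly. A rough sketch using curl against the public API port, assuming curl is available and ADMIN_PASS is the password you chose; the response is a JSON document that contains the token.

# curl -s -X POST http://controller:5000/v2.0/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "ADMIN_PASS"}}}'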
diff --git a/doc/install-guide/section_nova-boot.xml b/doc/install-guide/section_nova-boot.xml new file mode 100644 index 0000000000..8473f5ed87 --- /dev/null +++ b/doc/install-guide/section_nova-boot.xml @@ -0,0 +1,9 @@ +
+ Booting an Image + + FIXME +
diff --git a/doc/install-guide/section_nova-compute.xml b/doc/install-guide/section_nova-compute.xml new file mode 100644 index 0000000000..cb98bdb435 --- /dev/null +++ b/doc/install-guide/section_nova-compute.xml @@ -0,0 +1,104 @@ +
+ Installing a Compute Node
+
+ After configuring the Compute Services on the controller node,
+ configure a second system to be a compute node. The compute node receives
+ requests from the controller node and hosts virtual machine instances.
+ You can run all services on a single node, but this guide uses separate
+ systems. This makes it easy to scale horizontally by adding additional
+ compute nodes following the instructions in this section.
+
+ The Compute Service relies on a hypervisor to run virtual machine
+ instances. OpenStack can use various hypervisors, but this guide uses
+ KVM.
+
+ Begin by configuring the system using the instructions in
+ . Note the following differences from the
+ controller node:
+
+
+
+ Use different IP addresses when editing the files
+ ifcfg-eth0 and ifcfg-eth1.
+ This guide uses 192.168.0.11 for the internal network
+ and 10.0.0.11 for the external network.
+
+
+ Set the hostname to compute1. Ensure that the
+ IP addresses and hostnames for both nodes are listed in the
+ /etc/hosts file on each system.
+
+
+ Do not run the NTP server. Follow the instructions in
+ to synchronize from the controller node.
+
+
+ You do not need to install the MySQL database server or start
+ the MySQL service. Just install the client libraries.
+
+
+ You do not need to install a messaging queue server.
+
+
+
+ After configuring the operating system, install the appropriate
+ packages for the compute service.
+
+ # apt-get install nova-compute-kvm
+ # yum install openstack-nova-compute
+ # zypper install openstack-nova-compute kvm
+
+ Either copy the file /etc/nova/nova.conf from the
+ controller node, or run the same configuration commands.
+
+ # openstack-config --set /etc/nova/nova.conf \
+ database connection mysql://nova:NOVA_DBPASS@controller/nova
+# openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
+# openstack-config --set /etc/nova/nova.conf DEFAULT auth_host controller
+# openstack-config --set /etc/nova/nova.conf DEFAULT admin_user nova
+# openstack-config --set /etc/nova/nova.conf DEFAULT admin_tenant_name service
+# openstack-config --set /etc/nova/nova.conf DEFAULT admin_password NOVA_PASS
+
+
+ # openstack-config --set /etc/nova/nova.conf \
+ DEFAULT rpc_backend nova.openstack.common.rpc.impl_qpid
+# openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
+
+ Set the configuration keys my_ip,
+ vncserver_listen, and
+ vncserver_proxyclient_address to the IP address of the
+ compute node on the internal network.
+
+ # openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.0.11
+# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.0.11
+# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.0.11
+
+ Copy the file /etc/nova/api-paste.ini from the
+ controller node, or edit the file to add the credentials in the
+ [filter:authtoken] section.
+
+ [filter:authtoken]
+paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
+auth_host=controller
+admin_user=nova
+admin_tenant_name=service
+admin_password=NOVA_PASS
+
+
+
+
+ Finally, start the compute service and configure it to start when
+ the system boots.
+
+ # service nova-compute start
+# chkconfig nova-compute on
+ # service openstack-nova-compute start
+# chkconfig openstack-nova-compute on
+ # systemctl start openstack-nova-compute
+# systemctl enable openstack-nova-compute
+
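Once nova-compute is running, you can confirm from the controller node that the new compute node has checked in. A quick check, assuming the controller was configured as described in the previous section; a healthy service shows a smiley face (:-)) in the State column, and you should see a nova-compute entry for the host compute1 alongside the controller services.

# nova-manage service list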
diff --git a/doc/install-guide/section_nova-controller.xml b/doc/install-guide/section_nova-controller.xml new file mode 100644 index 0000000000..9704db3909 --- /dev/null +++ b/doc/install-guide/section_nova-controller.xml @@ -0,0 +1,160 @@ +
+ Installing the Nova Controller Services
+
+ The OpenStack Compute Service is a collection of services that allow
+ you to launch virtual machine instances. These services can be configured
+ to run on separate nodes or all on the same system. In this guide, we run
+ most of the services on the controller node, and use a dedicated compute
+ node to run the service that launches virtual machines. This section
+ details the installation and configuration on the controller node.
+
+ Install the openstack-nova
+ meta-package. This package installs all of the various Nova packages, most of
+ which will be used on the controller node in this guide.
+
+ # yum install openstack-nova
+ # zypper install openstack-nova
+
+ Install the following Nova packages. These packages provide
+ the OpenStack Compute services that will be run on the controller node in this
+ guide.
+
+ # apt-get install nova-novncproxy novnc nova-api nova-ajax-console-proxy nova-cert \
+ nova-conductor nova-consoleauth nova-doc nova-scheduler nova-network
+
+ The Compute Service stores information in a database. This guide uses
+ the MySQL database used by other OpenStack services. Use the
+ openstack-db command to create the database and tables
+ for the Compute Service, as well as a database user called
+ nova to connect to the database. Replace
+ NOVA_DBPASS with a
+ password of your choosing.
+
+ # openstack-db --init --service nova --password NOVA_DBPASS
+
+ You now have to tell the Compute Service to use that database.
+
+ # openstack-config --set /etc/nova/nova.conf \
+ database connection mysql://nova:NOVA_DBPASS@controller/nova
+
+ Set the configuration keys my_ip,
+ vncserver_listen, and
+ vncserver_proxyclient_address to the IP address of the
+ controller node.
+
+ # openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 192.168.0.10
+# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_listen 192.168.0.10
+# openstack-config --set /etc/nova/nova.conf DEFAULT vncserver_proxyclient_address 192.168.0.10
+
+ Create a user called nova that the Compute Service
+ can use to authenticate with the Identity Service. Use the
+ service tenant and give the user the
+ admin role.
+
+ # keystone user-create --name=nova --pass=NOVA_PASS --email=nova@example.com
+# keystone user-role-add --user=nova --tenant=service --role=admin
+
+ For the Compute Service to use these credentials, you have to add
+ them to the nova.conf configuration file.
+
+ # openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
+# openstack-config --set /etc/nova/nova.conf DEFAULT auth_host controller
+# openstack-config --set /etc/nova/nova.conf DEFAULT admin_user nova
+# openstack-config --set /etc/nova/nova.conf DEFAULT admin_tenant_name service
+# openstack-config --set /etc/nova/nova.conf DEFAULT admin_password NOVA_PASS
+
+ You also have to add the credentials to the file
+ /etc/nova/api-paste.ini. Open the file in a text editor
+ and locate the section [filter:authtoken].
+ Make sure the following options are set:
+
+ [filter:authtoken]
+paste.filter_factory=keystoneclient.middleware.auth_token:filter_factory
+auth_host=controller
+admin_user=nova
+admin_tenant_name=service
+admin_password=NOVA_PASS
+
+
+ You have to register the Compute Service with the Identity Service
+ so that other OpenStack services can locate it. Register the service and
+ specify the endpoint using the keystone command.
+
+ # keystone service-create --name=nova --type=compute \
+ --description="Nova Compute Service"
+
+ Note the id property returned and use it when
+ creating the endpoint.
+
+ # keystone endpoint-create \
+ --service-id=the_service_id_above \
+ --publicurl='http://controller:8774/v2/%(tenant_id)s' \
+ --internalurl='http://controller:8774/v2/%(tenant_id)s' \
+ --adminurl='http://controller:8774/v2/%(tenant_id)s'
+
+ The single quotes keep the shell from interpreting the parentheses
+ in %(tenant_id)s.
+
+ Configure the Compute Service to use the
+ Qpid message broker by setting the following configuration keys.
+
+ # openstack-config --set /etc/nova/nova.conf \
+ DEFAULT rpc_backend nova.openstack.common.rpc.impl_qpid
+# openstack-config --set /etc/nova/nova.conf DEFAULT qpid_hostname controller
+
+
+
+ Finally, start the various Nova services and configure them
+ to start when the system boots.
+
+ # service nova-api start
+# service nova-cert start
+# service nova-consoleauth start
+# service nova-scheduler start
+# service nova-conductor start
+# service nova-novncproxy start
+# chkconfig nova-api on
+# chkconfig nova-cert on
+# chkconfig nova-consoleauth on
+# chkconfig nova-scheduler on
+# chkconfig nova-conductor on
+# chkconfig nova-novncproxy on
+ # service openstack-nova-api start
+# service openstack-nova-cert start
+# service openstack-nova-consoleauth start
+# service openstack-nova-scheduler start
+# service openstack-nova-conductor start
+# service openstack-nova-novncproxy start
+# chkconfig openstack-nova-api on
+# chkconfig openstack-nova-cert on
+# chkconfig openstack-nova-consoleauth on
+# chkconfig openstack-nova-scheduler on
+# chkconfig openstack-nova-conductor on
+# chkconfig openstack-nova-novncproxy on
+ # systemctl start openstack-nova-api.service
+# systemctl start openstack-nova-cert.service
+# systemctl start openstack-nova-consoleauth.service
+# systemctl start openstack-nova-scheduler.service
+# systemctl start openstack-nova-conductor.service
+# systemctl start openstack-nova-novncproxy.service
+# systemctl enable openstack-nova-api.service
+# systemctl enable openstack-nova-cert.service
+# systemctl enable openstack-nova-consoleauth.service
+# systemctl enable openstack-nova-scheduler.service
+# systemctl enable openstack-nova-conductor.service
+# systemctl enable openstack-nova-novncproxy.service
+
+ To verify that everything is configured correctly, use the
+ nova image-list command to get a list of available images. The
+ output is similar to the output of glance image-list.
+
+ # nova image-list
++--------------------------------------+-----------------+--------+--------+
+| ID | Name | Status | Server |
++--------------------------------------+-----------------+--------+--------+
+| acafc7c0-40aa-4026-9673-b879898e1fc2 | CirrOS 0.3.1 | ACTIVE | |
++--------------------------------------+-----------------+--------+--------+
+
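The Qpid settings shown earlier in this section match the Red Hat-style packages used in much of this guide. If your deployment uses RabbitMQ as the message broker instead (the usual choice on Ubuntu), a minimal sketch of the equivalent nova.conf settings follows, assuming a RabbitMQ server running on the controller node; RABBIT_PASS is a placeholder for your broker password, and the kombu driver shown here is the RabbitMQ backend in this release.

[DEFAULT]
# Use the RabbitMQ (kombu) RPC driver instead of Qpid
rpc_backend = nova.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_password = RABBIT_PASS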
diff --git a/doc/install-guide/section_nova-kvm.xml b/doc/install-guide/section_nova-kvm.xml new file mode 100644 index 0000000000..10ce5f21b6 --- /dev/null +++ b/doc/install-guide/section_nova-kvm.xml @@ -0,0 +1,9 @@ +
+ Enabling KVM on the Compute Node + + FIXME +
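A brief, generic sketch of the usual checks may be useful here: confirm that the compute node's CPU exposes hardware virtualization extensions and that the KVM kernel modules are loaded. These are standard Linux commands rather than anything specific to this guide; use kvm_amd instead of kvm_intel on AMD hardware.

$ egrep -c '(vmx|svm)' /proc/cpuinfo
# modprobe kvm
# modprobe kvm_intel
# lsmod | grep kvm

A result of 0 from the first command typically means the CPU does not advertise hardware virtualization (or it is disabled in the firmware), in which case the Compute Service would have to fall back to plain QEMU emulation.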
diff --git a/doc/install-guide/section_nova-network.xml b/doc/install-guide/section_nova-network.xml new file mode 100644 index 0000000000..ed7dbd5efe --- /dev/null +++ b/doc/install-guide/section_nova-network.xml @@ -0,0 +1,9 @@ +
+ Enabling Networking + + FIXME +