diff --git a/doc/common/get-started-block-storage.rst b/doc/common/get-started-block-storage.rst deleted file mode 100644 index 846b9c7780..0000000000 --- a/doc/common/get-started-block-storage.rst +++ /dev/null @@ -1,36 +0,0 @@ -============================== -Block Storage service overview -============================== - -The OpenStack Block Storage service (cinder) adds persistent storage -to a virtual machine. Block Storage provides an infrastructure for managing -volumes, and interacts with OpenStack Compute to provide volumes for -instances. The service also enables management of volume snapshots, and -volume types. - -The Block Storage service consists of the following components: - -cinder-api - Accepts API requests, and routes them to the ``cinder-volume`` for - action. - -cinder-volume - Interacts directly with the Block Storage service, and processes - such as the ``cinder-scheduler``. It also interacts with these processes - through a message queue. The ``cinder-volume`` service responds to read - and write requests sent to the Block Storage service to maintain - state. It can interact with a variety of storage providers through a - driver architecture. - -cinder-scheduler daemon - Selects the optimal storage provider node on which to create the - volume. A similar component to the ``nova-scheduler``. - -cinder-backup daemon - The ``cinder-backup`` service provides backing up volumes of any type to - a backup storage provider. Like the ``cinder-volume`` service, it can - interact with a variety of storage providers through a driver - architecture. - -Messaging queue - Routes information between the Block Storage processes. diff --git a/doc/common/get-started-compute.rst b/doc/common/get-started-compute.rst deleted file mode 100644 index 7692fa452a..0000000000 --- a/doc/common/get-started-compute.rst +++ /dev/null @@ -1,102 +0,0 @@ -======================== -Compute service overview -======================== - -Use OpenStack Compute to host and manage cloud computing systems. -OpenStack Compute is a major part of an :term:`Infrastructure-as-a-Service -(IaaS)` system. The main modules are implemented in Python. - -OpenStack Compute interacts with OpenStack Identity for authentication; -OpenStack Image service for disk and server images; and OpenStack -Dashboard for the user and administrative interface. Image access is -limited by projects, and by users; quotas are limited per project (the -number of instances, for example). OpenStack Compute can scale -horizontally on standard hardware, and download images to launch -instances. - -OpenStack Compute consists of the following areas and their components: - -``nova-api`` service - Accepts and responds to end user compute API calls. The service - supports the OpenStack Compute API, the Amazon EC2 API, and a - special Admin API for privileged users to perform administrative - actions. It enforces some policies and initiates most orchestration - activities, such as running an instance. - -``nova-api-metadata`` service - Accepts metadata requests from instances. The ``nova-api-metadata`` - service is generally used when you run in multi-host mode with - ``nova-network`` installations. For details, see `Metadata - service `__ - in the OpenStack Administrator Guide. - -``nova-compute`` service - A worker daemon that creates and terminates virtual machine - instances through hypervisor APIs. For example: - - - XenAPI for XenServer/XCP - - - libvirt for KVM or QEMU - - - VMwareAPI for VMware - - Processing is fairly complex. 
Basically, the daemon accepts actions - from the queue and performs a series of system commands such as - launching a KVM instance and updating its state in the database. - -``nova-placement-api`` service - Tracks the inventory and usage of each provider. For details, see - `Placement API `__. - -``nova-scheduler`` service - Takes a virtual machine instance request from the queue and - determines on which compute server host it runs. - -``nova-conductor`` module - Mediates interactions between the ``nova-compute`` service and the - database. It eliminates direct accesses to the cloud database made - by the ``nova-compute`` service. The ``nova-conductor`` module scales - horizontally. However, do not deploy it on nodes where the - ``nova-compute`` service runs. For more information, see `Configuration - Reference Guide `__. - -``nova-consoleauth`` daemon - Authorizes tokens for users that console proxies provide. See - ``nova-novncproxy`` and ``nova-xvpvncproxy``. This service must be running - for console proxies to work. You can run proxies of either type - against a single nova-consoleauth service in a cluster - configuration. For information, see `About - nova-consoleauth `__. - -``nova-novncproxy`` daemon - Provides a proxy for accessing running instances through a VNC - connection. Supports browser-based novnc clients. - -``nova-spicehtml5proxy`` daemon - Provides a proxy for accessing running instances through a SPICE - connection. Supports browser-based HTML5 client. - -``nova-xvpvncproxy`` daemon - Provides a proxy for accessing running instances through a VNC - connection. Supports an OpenStack-specific Java client. - -The queue - A central hub for passing messages between daemons. Usually - implemented with `RabbitMQ `__, also can be - implemented with another AMQP message queue, such as `ZeroMQ `__. - -SQL database - Stores most build-time and run-time states for a cloud - infrastructure, including: - - - Available instance types - - - Instances in use - - - Available networks - - - Projects - - Theoretically, OpenStack Compute can support any database that - SQLAlchemy supports. Common databases are SQLite3 for test and - development work, MySQL, MariaDB, and PostgreSQL. diff --git a/doc/common/get-started-dashboard.rst b/doc/common/get-started-dashboard.rst deleted file mode 100644 index c38a78b5ff..0000000000 --- a/doc/common/get-started-dashboard.rst +++ /dev/null @@ -1,21 +0,0 @@ -================== -Dashboard overview -================== - -The OpenStack Dashboard is a modular `Django web -application `__ that provides a -graphical interface to OpenStack services. - -.. image:: figures/horizon-screenshot.png - :width: 100% - -The dashboard is usually deployed through -`mod_wsgi `__ in Apache. You can -modify the dashboard code to make it suitable for different sites. - -From a network architecture point of view, this service must be -accessible to customers and the public API for each OpenStack service. -To use the administrator functionality for other services, it must also -connect to Admin API endpoints, which should not be accessible by -customers. 
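As a quick reachability check after deployment (an illustrative sketch, not part of the original guide), the dashboard can be queried over HTTP from a host on the public network. The ``/horizon`` path below is an assumption; some distributions serve the dashboard at ``/dashboard`` instead:

.. code-block:: console

   $ curl -I http://controller/horizon
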
- diff --git a/doc/common/get-started-data-processing.rst b/doc/common/get-started-data-processing.rst deleted file mode 100644 index e3bc756a3f..0000000000 --- a/doc/common/get-started-data-processing.rst +++ /dev/null @@ -1,40 +0,0 @@ -================================ -Data Processing service overview -================================ - -The Data processing service for OpenStack (sahara) aims to provide users -with a simple means to provision data processing (Hadoop, Spark) -clusters by specifying several parameters like Hadoop version, cluster -topology, node hardware details and a few more. After a user fills in -all the parameters, the Data processing service deploys the cluster in a -few minutes. Sahara also provides a means to scale already provisioned -clusters by adding or removing worker nodes on demand. - -The solution addresses the following use cases: - -* Fast provisioning of Hadoop clusters on OpenStack for development and - QA. - -* Utilization of unused compute power from general purpose OpenStack - IaaS cloud. - -* Analytics-as-a-Service for ad-hoc or bursty analytic workloads. - -Key features are: - -* Designed as an OpenStack component. - -* Managed through REST API with UI available as part of OpenStack - Dashboard. - -* Support for different Hadoop distributions: - - * Pluggable system of Hadoop installation engines. - - * Integration with vendor specific management tools, such as Apache - Ambari or Cloudera Management Console. - -* Predefined templates of Hadoop configurations with the ability to - modify parameters. - -* User-friendly UI for ad-hoc analytics queries based on Hive or Pig. diff --git a/doc/common/get-started-database-service.rst b/doc/common/get-started-database-service.rst deleted file mode 100644 index 2b2edae59d..0000000000 --- a/doc/common/get-started-database-service.rst +++ /dev/null @@ -1,66 +0,0 @@ -========================= -Database service overview -========================= - -The Database service provides scalable and reliable cloud provisioning -functionality for both relational and non-relational database engines. -Users can quickly and easily use database features without the burden of -handling complex administrative tasks. Cloud users and database -administrators can provision and manage multiple database instances as -needed. - -The Database service provides resource isolation at high performance -levels and automates complex administrative tasks such as deployment, -configuration, patching, backups, restores, and monitoring. - -**Process flow example** - -This example is a high-level process flow for using Database services: - -#. The OpenStack Administrator configures the basic infrastructure using - the following steps: - - #. Install the Database service. - #. Create an image for each type of database. For example, one for MySQL - and one for MongoDB. - #. Use the :command:`trove-manage` command to import images and offer them - to projects. - -#. The OpenStack end user deploys the Database service using the following - steps: - - #. Create a Database service instance using the :command:`trove create` - command. - #. Use the :command:`trove list` command to get the ID of the instance, - followed by the :command:`trove show` command to get the IP address of - it. - #. Access the Database service instance using typical database access - commands. For example, with MySQL: - - .. 
code-block:: console - - $ mysql -u myuser -p -h TROVE_IP_ADDRESS mydb - -**Components** - -The Database service includes the following components: - -``python-troveclient`` command-line client - A CLI that communicates with the ``trove-api`` component. - -``trove-api`` component - Provides an OpenStack-native RESTful API that supports JSON to - provision and manage Trove instances. - -``trove-conductor`` service - Runs on the host, and receives messages from guest instances that - want to update information on the host. - -``trove-taskmanager`` service - Instruments the complex system flows that support provisioning - instances, managing the lifecycle of instances, and performing - operations on instances. - -``trove-guestagent`` service - Runs within the guest instance. Manages and performs operations on - the database itself. diff --git a/doc/common/get-started-identity.rst b/doc/common/get-started-identity.rst deleted file mode 100644 index 887a649c22..0000000000 --- a/doc/common/get-started-identity.rst +++ /dev/null @@ -1,54 +0,0 @@ -========================= -Identity service overview -========================= - -The OpenStack :term:`Identity service ` provides -a single point of integration for managing authentication, authorization, and -a catalog of services. - -The Identity service is typically the first service a user interacts with. Once -authenticated, an end user can use their identity to access other OpenStack -services. Likewise, other OpenStack services leverage the Identity service to -ensure users are who they say they are and discover where other services are -within the deployment. The Identity service can also integrate with some -external user management systems (such as LDAP). - -Users and services can locate other services by using the service catalog, -which is managed by the Identity service. As the name implies, a service -catalog is a collection of available services in an OpenStack deployment. Each -service can have one or many endpoints and each endpoint can be one of three -types: admin, internal, or public. In a production environment, different -endpoint types might reside on separate networks exposed to different types of -users for security reasons. For instance, the public API network might be -visible from the Internet so customers can manage their clouds. The admin API -network might be restricted to operators within the organization that manages -cloud infrastructure. The internal API network might be restricted to the hosts -that contain OpenStack services. Also, OpenStack supports multiple regions for -scalability. For simplicity, this guide uses the management network for all -endpoint types and the default ``RegionOne`` region. Together, regions, -services, and endpoints created within the Identity service comprise the -service catalog for a deployment. Each OpenStack service in your deployment -needs a service entry with corresponding endpoints stored in the Identity -service. This can all be done after the Identity service has been installed and -configured. - -The Identity service contains these components: - -Server - A centralized server provides authentication and authorization - services using a RESTful interface. - -Drivers - Drivers or a service back end are integrated to the centralized - server. They are used for accessing identity information in - repositories external to OpenStack, and may already exist in - the infrastructure where OpenStack is deployed (for example, SQL - databases or LDAP servers). 
- -Modules - Middleware modules run in the address space of the OpenStack - component that is using the Identity service. These modules - intercept service requests, extract user credentials, and send them - to the centralized server for authorization. The integration between - the middleware modules and OpenStack components uses the Python Web - Server Gateway Interface. diff --git a/doc/common/get-started-image-service.rst b/doc/common/get-started-image-service.rst deleted file mode 100644 index c000779dbb..0000000000 --- a/doc/common/get-started-image-service.rst +++ /dev/null @@ -1,71 +0,0 @@ -====================== -Image service overview -====================== - -The Image service (glance) enables users to discover, -register, and retrieve virtual machine images. It offers a -:term:`REST ` API that enables you to query virtual -machine image metadata and retrieve an actual image. -You can store virtual machine images made available through -the Image service in a variety of locations, from simple file -systems to object-storage systems like OpenStack Object Storage. - -.. important:: - - For simplicity, this guide describes configuring the Image service to - use the ``file`` back end, which uploads and stores in a - directory on the controller node hosting the Image service. By - default, this directory is ``/var/lib/glance/images/``. - - Before you proceed, ensure that the controller node has at least - several gigabytes of space available in this directory. Keep in - mind that since the ``file`` back end is often local to a controller - node, it is not typically suitable for a multi-node glance deployment. - - For information on requirements for other back ends, see - `Configuration Reference - `__. - -The OpenStack Image service is central to Infrastructure-as-a-Service -(IaaS) as shown in :ref:`get_started_conceptual_architecture`. It accepts API -requests for disk or server images, and metadata definitions from end users or -OpenStack Compute components. It also supports the storage of disk or server -images on various repository types, including OpenStack Object Storage. - -A number of periodic processes run on the OpenStack Image service to -support caching. Replication services ensure consistency and -availability through the cluster. Other periodic processes include -auditors, updaters, and reapers. - -The OpenStack Image service includes the following components: - -glance-api - Accepts Image API calls for image discovery, retrieval, and storage. - -glance-registry - Stores, processes, and retrieves metadata about images. Metadata - includes items such as size and type. - - .. warning:: - - The registry is a private internal service meant for use by - OpenStack Image service. Do not expose this service to users. - -Database - Stores image metadata and you can choose your database depending on - your preference. Most deployments use MySQL or SQLite. - -Storage repository for image files - Various repository types are supported including normal file - systems (or any filesystem mounted on the glance-api controller - node), Object Storage, RADOS block devices, VMware datastore, - and HTTP. Note that some repositories will only support read-only - usage. - -Metadata definition service - A common API for vendors, admins, services, and users to meaningfully - define their own custom metadata. This metadata can be used on - different types of resources like images, artifacts, volumes, - flavors, and aggregates. 
A definition includes the new property's key, - description, constraints, and the resource types which it can be - associated with. diff --git a/doc/common/get-started-logical-architecture.rst b/doc/common/get-started-logical-architecture.rst index 181d131b74..f3da2f2e48 100644 --- a/doc/common/get-started-logical-architecture.rst +++ b/doc/common/get-started-logical-architecture.rst @@ -24,7 +24,7 @@ several message broker and database solutions, such as RabbitMQ, MySQL, MariaDB, and SQLite. Users can access OpenStack via the web-based user interface implemented -by :doc:`Dashboard `, via `command-line +by the Horizon Dashboard, via `command-line clients `__ and by issuing API requests through tools like browser plug-ins or :command:`curl`. For applications, `several SDKs `__ diff --git a/doc/common/get-started-networking.rst b/doc/common/get-started-networking.rst deleted file mode 100644 index 61e0767ec3..0000000000 --- a/doc/common/get-started-networking.rst +++ /dev/null @@ -1,33 +0,0 @@ -=========================== -Networking service overview -=========================== - -OpenStack Networking (neutron) allows you to create and attach interface -devices managed by other OpenStack services to networks. Plug-ins can be -implemented to accommodate different networking equipment and software, -providing flexibility to OpenStack architecture and deployment. - -It includes the following components: - -neutron-server - Accepts and routes API requests to the appropriate OpenStack - Networking plug-in for action. - -OpenStack Networking plug-ins and agents - Plug and unplug ports, create networks or subnets, and provide - IP addressing. These plug-ins and agents differ depending on the - vendor and technologies used in the particular cloud. OpenStack - Networking ships with plug-ins and agents for Cisco virtual and - physical switches, NEC OpenFlow products, Open vSwitch, Linux - bridging, and the VMware NSX product. - - The common agents are L3 (layer 3), DHCP (dynamic host IP - addressing), and a plug-in agent. - -Messaging queue - Used by most OpenStack Networking installations to route information - between the neutron-server and various agents. Also acts as a database - to store networking state for particular plug-ins. - -OpenStack Networking mainly interacts with OpenStack Compute to provide -networks and connectivity for its instances. diff --git a/doc/common/get-started-object-storage.rst b/doc/common/get-started-object-storage.rst deleted file mode 100644 index d35a6a84d1..0000000000 --- a/doc/common/get-started-object-storage.rst +++ /dev/null @@ -1,53 +0,0 @@ -=============================== -Object Storage service overview -=============================== - -The OpenStack Object Storage is a multi-project object storage system. It -is highly scalable and can manage large amounts of unstructured data at -low cost through a RESTful HTTP API. - -It includes the following components: - -Proxy servers (swift-proxy-server) - Accepts OpenStack Object Storage API and raw HTTP requests to upload - files, modify metadata, and create containers. It also serves file - or container listings to web browsers. To improve performance, the - proxy server can use an optional cache that is usually deployed with - memcache. - -Account servers (swift-account-server) - Manages accounts defined with Object Storage. - -Container servers (swift-container-server) - Manages the mapping of containers or folders, within Object Storage. 
- -Object servers (swift-object-server) - Manages actual objects, such as files, on the storage nodes. - -Various periodic processes - Performs housekeeping tasks on the large data store. The replication - services ensure consistency and availability through the cluster. - Other periodic processes include auditors, updaters, and reapers. - -WSGI middleware - Handles authentication and is usually OpenStack Identity. - -swift client - Enables users to submit commands to the REST API through a - command-line client authorized as either a admin user, reseller - user, or swift user. - -swift-init - Script that initializes the building of the ring file, takes daemon - names as parameter and offers commands. Documented in - `Managing Services - `_. - -swift-recon - A cli tool used to retrieve various metrics and telemetry information - about a cluster that has been collected by the swift-recon middleware. - -swift-ring-builder - Storage ring build and rebalance utility. Documented in - `Managing the Rings - `_. diff --git a/doc/common/get-started-openstack-services.rst b/doc/common/get-started-openstack-services.rst deleted file mode 100644 index 130eddeefb..0000000000 --- a/doc/common/get-started-openstack-services.rst +++ /dev/null @@ -1,23 +0,0 @@ -================== -OpenStack services -================== - -This section describes OpenStack services in detail. - -.. toctree:: - :maxdepth: 2 - - - get-started-compute.rst - get-started-storage-concepts.rst - get-started-object-storage.rst - get-started-block-storage.rst - get-started-shared-file-systems.rst - get-started-networking.rst - get-started-dashboard.rst - get-started-identity.rst - get-started-image-service.rst - get-started-telemetry.rst - get-started-orchestration.rst - get-started-database-service.rst - get-started-data-processing.rst diff --git a/doc/common/get-started-orchestration.rst b/doc/common/get-started-orchestration.rst deleted file mode 100644 index 52721a1e99..0000000000 --- a/doc/common/get-started-orchestration.rst +++ /dev/null @@ -1,35 +0,0 @@ -============================== -Orchestration service overview -============================== - -The Orchestration service provides a template-based orchestration for -describing a cloud application by running OpenStack API calls to -generate running cloud applications. The software integrates other core -components of OpenStack into a one-file template system. The templates -allow you to create most OpenStack resource types such as instances, -floating IPs, volumes, security groups, and users. It also provides -advanced functionality such as instance high availability, instance -auto-scaling, and nested stacks. This enables OpenStack core projects to -receive a larger user base. - -The service enables deployers to integrate with the Orchestration service -directly or through custom plug-ins. - -The Orchestration service consists of the following components: - -``heat`` command-line client - A CLI that communicates with the ``heat-api`` to run AWS - CloudFormation APIs. End developers can directly use the Orchestration - REST API. - -``heat-api`` component - An OpenStack-native REST API that processes API requests by sending - them to the ``heat-engine`` over :term:`Remote Procedure Call (RPC)`. - -``heat-api-cfn`` component - An AWS Query API that is compatible with AWS CloudFormation. It - processes API requests by sending them to the ``heat-engine`` over RPC. - -``heat-engine`` - Orchestrates the launching of templates and provides events back to - the API consumer. 
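As a brief illustration of the template-driven workflow described above (a sketch only, not part of the original overview), a stack can be launched from a local Heat Orchestration Template (HOT) file with the ``openstack`` command-line client; ``my-template.yaml`` and ``my-stack`` are placeholder names:

.. code-block:: console

   $ openstack stack create --template my-template.yaml my-stack
   $ openstack stack list

The second command lists existing stacks so you can confirm the new stack reaches a completed status.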
diff --git a/doc/common/get-started-shared-file-systems.rst b/doc/common/get-started-shared-file-systems.rst deleted file mode 100644 index 1782360319..0000000000 --- a/doc/common/get-started-shared-file-systems.rst +++ /dev/null @@ -1,38 +0,0 @@ -==================================== -Shared File Systems service overview -==================================== - -The OpenStack Shared File Systems service (manila) provides file storage to a -virtual machine. The Shared File Systems service provides an infrastructure -for managing and provisioning of file shares. The service also enables -management of share types as well as share snapshots if a driver supports -them. - -The Shared File Systems service consists of the following components: - -manila-api - A WSGI app that authenticates and routes requests throughout the Shared File - Systems service. It supports the OpenStack APIs. - -manila-data - A standalone service whose purpose is to receive requests, process data - operations such as copying, share migration or backup, and send back a - response after an operation has been completed. - -manila-scheduler - Schedules and routes requests to the appropriate share service. The - scheduler uses configurable filters and weighers to route requests. The - Filter Scheduler is the default and enables filters on things like Capacity, - Availability Zone, Share Types, and Capabilities as well as custom filters. - -manila-share - Manages back-end devices that provide shared file systems. A manila-share - process can run in one of two modes, with or without handling of share - servers. Share servers export file shares via share networks. When share - servers are not used, the networking requirements are handled outside of - Manila. - -Messaging queue - Routes information between the Shared File Systems processes. - -For more information, see `OpenStack Configuration Reference `__. diff --git a/doc/common/get-started-storage-concepts.rst b/doc/common/get-started-storage-concepts.rst deleted file mode 100644 index 48916f6ca5..0000000000 --- a/doc/common/get-started-storage-concepts.rst +++ /dev/null @@ -1,61 +0,0 @@ -================ -Storage concepts -================ - -The OpenStack stack uses the following storage types: - -.. 
list-table:: Storage types - :header-rows: 1 - :widths: 30 30 30 30 - - * - On-instance / ephemeral - - Block storage (cinder) - - Object Storage (swift) - - File Storage (manila) - * - Runs operating systems and provides scratch space - - Used for adding additional persistent storage to a virtual machine (VM) - - Used for storing virtual machine images and data - - Used for providing file shares to a virtual machine - * - Persists until VM is terminated - - Persists until deleted - - Persists until deleted - - Persists until deleted - * - Access associated with a VM - - Access associated with a VM - - Available from anywhere - - Access can be provided to a VM - * - Implemented as a filesystem underlying OpenStack Compute - - Mounted via OpenStack Block Storage controlled protocol (for example, iSCSI) - - REST API - - Provides Shared File System service via nfs, cifs, glusterfs, or hdfs protocol - * - Encryption is available - - Encryption is available - - Work in progress - expected for the Mitaka release - - Encryption is not available yet - * - Administrator configures size setting, based on flavors - - Sizings based on need - - Easily scalable for future growth - - Sizing based on need - * - Example: 10 GB first disk, 30 GB/core second disk - - Example: 1 TB "extra hard drive" - - Example: 10s of TBs of data set storage - - Example: 1 TB of file share - -.. note:: - - - *You cannot use OpenStack Object Storage like a traditional hard - drive.* The Object Storage relaxes some of the constraints of a - POSIX-style file system to get other gains. You can access the - objects through an API which uses HTTP. Subsequently you don't have - to provide atomic operations (that is, relying on eventual - consistency), you can scale a storage system easily and avoid a - central point of failure. - - - *The OpenStack Image service is used to manage the virtual machine - images in an OpenStack cluster, not store them.* It provides an - abstraction to different methods for storage - a bridge to the - storage, not the storage itself. - - - *The OpenStack Object Storage can function on its own.* The Object - Storage (swift) product can be used independently of the Compute - (nova) product. diff --git a/doc/common/get-started-telemetry.rst b/doc/common/get-started-telemetry.rst deleted file mode 100644 index aa0ae2f9bc..0000000000 --- a/doc/common/get-started-telemetry.rst +++ /dev/null @@ -1,70 +0,0 @@ -========================== -Telemetry service overview -========================== - -Telemetry Data Collection service -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The Telemetry Data Collection services provide the following functions: - -* Efficiently polls metering data related to OpenStack services. - -* Collects event and metering data by monitoring notifications sent - from services. - -* Publishes collected data to various targets including data stores and - message queues. - -The Telemetry service consists of the following components: - -A compute agent (``ceilometer-agent-compute``) - Runs on each compute node and polls for resource utilization - statistics. There may be other types of agents in the future, but - for now our focus is creating the compute agent. - -A central agent (``ceilometer-agent-central``) - Runs on a central management server to poll for resource utilization - statistics for resources not tied to instances or compute nodes. - Multiple agents can be started to scale service horizontally. 
- -A notification agent (``ceilometer-agent-notification``) - Runs on a central management server(s) and consumes messages from - the message queue(s) to build event and metering data. - -A collector (``ceilometer-collector``) - Runs on central management server(s) and dispatches collected - telemetry data to a data store or external consumer without - modification. - -An API server (``ceilometer-api``) - Runs on one or more central management servers to provide data - access from the data store. - -Telemetry Alarming service -~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The Telemetry Alarming services trigger alarms when the collected metering -or event data break the defined rules. - -The Telemetry Alarming service consists of the following components: - -An API server (``aodh-api``) - Runs on one or more central management servers to provide access - to the alarm information stored in the data store. - -An alarm evaluator (``aodh-evaluator``) - Runs on one or more central management servers to determine when - alarms fire due to the associated statistic trend crossing a - threshold over a sliding time window. - -A notification listener (``aodh-listener``) - Runs on a central management server and determines when to fire alarms. - The alarms are generated based on defined rules against events, which are - captured by the Telemetry Data Collection service's notification agents. - -An alarm notifier (``aodh-notifier``) - Runs on one or more central management servers to allow alarms to be - set based on the threshold evaluation for a collection of samples. - -These services communicate by using the OpenStack messaging bus. Only -the collector and API server have access to the data store. diff --git a/doc/common/get-started-with-openstack.rst b/doc/common/get-started-with-openstack.rst index e726c8c71d..f859e492a4 100644 --- a/doc/common/get-started-with-openstack.rst +++ b/doc/common/get-started-with-openstack.rst @@ -86,4 +86,3 @@ OpenStack architecture: get-started-conceptual-architecture.rst get-started-logical-architecture.rst - get-started-openstack-services.rst diff --git a/doc/install-guide/source/additional-services.rst b/doc/install-guide/source/additional-services.rst deleted file mode 100644 index 41b4a4add2..0000000000 --- a/doc/install-guide/source/additional-services.rst +++ /dev/null @@ -1,137 +0,0 @@ -.. _additional-services: - -=================== -Additional services -=================== - -Installation and configuration of additional OpenStack services is documented -in separate, project-specific installation guides. - -Application Catalog service (murano) -==================================== - -The Application Catalog service (murano) combines an application catalog with -versatile tooling to simplify and accelerate packaging and deployment. - -Installation and configuration is documented in the -`Application Catalog installation guide -`_. - -Bare Metal service (ironic) -=========================== - -The Bare Metal service is a collection of components that provides -support to manage and provision physical machines. - -Installation and configuration is documented in the -`Bare Metal installation guide -`_. - -Container Infrastructure Management service (magnum) -==================================================== - -The Container Infrastructure Management service (magnum) is an OpenStack API -service making container orchestration engines (COE) such as Docker Swarm, -Kubernetes and Mesos available as first class resources in OpenStack. 
- -Installation and configuration is documented in the -`Container Infrastructure Management installation guide -`_. - -Database service (trove) -======================== - -The Database service (trove) provides cloud provisioning functionality for -database engines. - -Installation and configuration is documented in the -`Database installation guide -`_. - -DNS service (designate) -======================== - -The DNS service (designate) provides cloud provisioning functionality for -DNS Zones and Recordsets. - -Installation and configuration is documented in the -`DNS installation guide -`_. - -Key Manager service (barbican) -============================== - -The Key Manager service provides a RESTful API for the storage and provisioning -of secret data such as passphrases, encryption keys, and X.509 certificates. - -Installation and configuration is documented in the -`Key Manager installation guide -`_. - -Messaging service (zaqar) -========================= - -The Messaging service allows developers to share data between distributed -application components performing different tasks, without losing messages or -requiring each component to be always available. - -Installation and configuration is documented in the -`Messaging installation guide -`_. - -Object Storage services (swift) -=============================== - -The Object Storage services (swift) work together to provide object storage and -retrieval through a REST API. - -Installation and configuration is documented in the -`Object Storage installation guide -`_. - -Orchestration service (heat) -============================ - -The Orchestration service (heat) uses a -`Heat Orchestration Template (HOT) -`_ -to create and manage cloud resources. - -Installation and configuration is documented in the -`Orchestration installation guide -`_. - -Shared File Systems service (manila) -==================================== - -The Shared File Systems service (manila) provides coordinated access to shared -or distributed file systems. - -Installation and configuration is documented in the -`Shared File Systems installation guide -`_. - -Telemetry Alarming services (aodh) -================================== - -The Telemetry Alarming services trigger alarms when the collected metering or -event data break the defined rules. - -Installation and configuration is documented in the -`Telemetry Alarming installation guide -`_. - -Telemetry Data Collection service (ceilometer) -============================================== - -The Telemetry Data Collection services provide the following functions: - -* Efficiently polls metering data related to OpenStack services. -* Collects event and metering data by monitoring notifications sent from - services. -* Publishes collected data to various targets including data stores and message - queues. - -Installation and configuration is documented in the -`Telemetry Data Collection installation guide -`_. diff --git a/doc/install-guide/source/cinder-backup-install-debian.rst b/doc/install-guide/source/cinder-backup-install-debian.rst deleted file mode 100644 index 1bb115500c..0000000000 --- a/doc/install-guide/source/cinder-backup-install-debian.rst +++ /dev/null @@ -1,71 +0,0 @@ -:orphan: - -Install and configure the backup service -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Optionally, install and configure the backup service. For simplicity, -this configuration uses the Block Storage node and the Object Storage -(swift) driver, thus depending on the -`Object Storage service `_. - -.. 
note:: - - You must :ref:`install and configure a storage node ` prior - to installing and configuring the backup service. - -Install and configure components --------------------------------- - -.. note:: - - Perform these steps on the Block Storage node. - - - - -#. Install the packages: - - .. code-block:: console - - # apt install cinder-backup - - .. end - - -2. Edit the ``/etc/cinder/cinder.conf`` file - and complete the following actions: - - * In the ``[DEFAULT]`` section, configure backup options: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - backup_driver = cinder.backup.drivers.swift - backup_swift_url = SWIFT_URL - - .. end - - Replace ``SWIFT_URL`` with the URL of the Object Storage service. The - URL can be found by showing the object-store API endpoints: - - .. code-block:: console - - $ openstack catalog show object-store - - .. end - -Finalize installation ---------------------- - - - -Restart the Block Storage backup service: - -.. code-block:: console - - # service cinder-backup restart - -.. end - diff --git a/doc/install-guide/source/cinder-backup-install-obs.rst b/doc/install-guide/source/cinder-backup-install-obs.rst deleted file mode 100644 index b8c9191b8e..0000000000 --- a/doc/install-guide/source/cinder-backup-install-obs.rst +++ /dev/null @@ -1,73 +0,0 @@ -:orphan: - -Install and configure the backup service -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Optionally, install and configure the backup service. For simplicity, -this configuration uses the Block Storage node and the Object Storage -(swift) driver, thus depending on the -`Object Storage service `_. - -.. note:: - - You must :ref:`install and configure a storage node ` prior - to installing and configuring the backup service. - -Install and configure components --------------------------------- - -.. note:: - - Perform these steps on the Block Storage node. - - -#. Install the packages: - - .. code-block:: console - - # zypper install openstack-cinder-backup - - .. end - - - - -2. Edit the ``/etc/cinder/cinder.conf`` file - and complete the following actions: - - * In the ``[DEFAULT]`` section, configure backup options: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - backup_driver = cinder.backup.drivers.swift - backup_swift_url = SWIFT_URL - - .. end - - Replace ``SWIFT_URL`` with the URL of the Object Storage service. The - URL can be found by showing the object-store API endpoints: - - .. code-block:: console - - $ openstack catalog show object-store - - .. end - -Finalize installation ---------------------- - - -Start the Block Storage backup service and configure it to -start when the system boots: - -.. code-block:: console - - # systemctl enable openstack-cinder-backup.service - # systemctl start openstack-cinder-backup.service - -.. end - - diff --git a/doc/install-guide/source/cinder-backup-install-rdo.rst b/doc/install-guide/source/cinder-backup-install-rdo.rst deleted file mode 100644 index d7ccfc152f..0000000000 --- a/doc/install-guide/source/cinder-backup-install-rdo.rst +++ /dev/null @@ -1,73 +0,0 @@ -:orphan: - -Install and configure the backup service -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Optionally, install and configure the backup service. For simplicity, -this configuration uses the Block Storage node and the Object Storage -(swift) driver, thus depending on the -`Object Storage service `_. - -.. 
note:: - - You must :ref:`install and configure a storage node ` prior - to installing and configuring the backup service. - -Install and configure components --------------------------------- - -.. note:: - - Perform these steps on the Block Storage node. - - - -#. Install the packages: - - .. code-block:: console - - # yum install openstack-cinder - - .. end - - - -2. Edit the ``/etc/cinder/cinder.conf`` file - and complete the following actions: - - * In the ``[DEFAULT]`` section, configure backup options: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - backup_driver = cinder.backup.drivers.swift - backup_swift_url = SWIFT_URL - - .. end - - Replace ``SWIFT_URL`` with the URL of the Object Storage service. The - URL can be found by showing the object-store API endpoints: - - .. code-block:: console - - $ openstack catalog show object-store - - .. end - -Finalize installation ---------------------- - - -Start the Block Storage backup service and configure it to -start when the system boots: - -.. code-block:: console - - # systemctl enable openstack-cinder-backup.service - # systemctl start openstack-cinder-backup.service - -.. end - - diff --git a/doc/install-guide/source/cinder-backup-install-ubuntu.rst b/doc/install-guide/source/cinder-backup-install-ubuntu.rst deleted file mode 100644 index 1bb115500c..0000000000 --- a/doc/install-guide/source/cinder-backup-install-ubuntu.rst +++ /dev/null @@ -1,71 +0,0 @@ -:orphan: - -Install and configure the backup service -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Optionally, install and configure the backup service. For simplicity, -this configuration uses the Block Storage node and the Object Storage -(swift) driver, thus depending on the -`Object Storage service `_. - -.. note:: - - You must :ref:`install and configure a storage node ` prior - to installing and configuring the backup service. - -Install and configure components --------------------------------- - -.. note:: - - Perform these steps on the Block Storage node. - - - - -#. Install the packages: - - .. code-block:: console - - # apt install cinder-backup - - .. end - - -2. Edit the ``/etc/cinder/cinder.conf`` file - and complete the following actions: - - * In the ``[DEFAULT]`` section, configure backup options: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - backup_driver = cinder.backup.drivers.swift - backup_swift_url = SWIFT_URL - - .. end - - Replace ``SWIFT_URL`` with the URL of the Object Storage service. The - URL can be found by showing the object-store API endpoints: - - .. code-block:: console - - $ openstack catalog show object-store - - .. end - -Finalize installation ---------------------- - - - -Restart the Block Storage backup service: - -.. code-block:: console - - # service cinder-backup restart - -.. end - diff --git a/doc/install-guide/source/cinder-backup-install.rst b/doc/install-guide/source/cinder-backup-install.rst deleted file mode 100644 index 77de40223a..0000000000 --- a/doc/install-guide/source/cinder-backup-install.rst +++ /dev/null @@ -1,11 +0,0 @@ -:orphan: - -.. _cinder-backup-install: - -Install and configure the backup service -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. 
toctree:: - :glob: - - cinder-backup-install-* diff --git a/doc/install-guide/source/cinder-controller-install-debian.rst b/doc/install-guide/source/cinder-controller-install-debian.rst deleted file mode 100644 index d9e141a265..0000000000 --- a/doc/install-guide/source/cinder-controller-install-debian.rst +++ /dev/null @@ -1,394 +0,0 @@ -Install and configure controller node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Block -Storage service, code-named cinder, on the controller node. This -service requires at least one additional storage node that provides -volumes to instances. - -Prerequisites -------------- - -Before you install and configure the Block Storage service, you -must create a database, service credentials, and API endpoints. - -#. To create the database, complete these steps: - - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - $ mysql -u root -p - - .. end - - - * Create the ``cinder`` database: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE cinder; - - .. end - - * Grant proper access to the ``cinder`` database: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - - .. end - - Replace ``CINDER_DBPASS`` with a suitable password. - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to admin-only - CLI commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. To create the service credentials, complete these steps: - - * Create a ``cinder`` user: - - .. code-block:: console - - $ openstack user create --domain default --password-prompt cinder - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | 9d7e33de3e1a498390353819bc7d245d | - | name | cinder | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``cinder`` user: - - .. code-block:: console - - $ openstack role add --project service --user cinder admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``cinderv2`` and ``cinderv3`` service entities: - - .. code-block:: console - - $ openstack service create --name cinderv2 \ - --description "OpenStack Block Storage" volumev2 - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Block Storage | - | enabled | True | - | id | eb9fd245bdbc414695952e93f29fe3ac | - | name | cinderv2 | - | type | volumev2 | - +-------------+----------------------------------+ - - .. end - - .. code-block:: console - - $ openstack service create --name cinderv3 \ - --description "OpenStack Block Storage" volumev3 - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Block Storage | - | enabled | True | - | id | ab3bbbef780845a1a283490d281e7fda | - | name | cinderv3 | - | type | volumev3 | - +-------------+----------------------------------+ - - .. end - - .. 
note:: - - The Block Storage services require two service entities. - -#. Create the Block Storage service API endpoints: - - .. code-block:: console - - $ openstack endpoint create --region RegionOne \ - volumev2 public http://controller:8776/v2/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 513e73819e14460fb904163f41ef3759 | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | eb9fd245bdbc414695952e93f29fe3ac | - | service_name | cinderv2 | - | service_type | volumev2 | - | url | http://controller:8776/v2/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev2 internal http://controller:8776/v2/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 6436a8a23d014cfdb69c586eff146a32 | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | eb9fd245bdbc414695952e93f29fe3ac | - | service_name | cinderv2 | - | service_type | volumev2 | - | url | http://controller:8776/v2/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev2 admin http://controller:8776/v2/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | e652cf84dd334f359ae9b045a2c91d96 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | eb9fd245bdbc414695952e93f29fe3ac | - | service_name | cinderv2 | - | service_type | volumev2 | - | url | http://controller:8776/v2/%(project_id)s | - +--------------+------------------------------------------+ - - .. end - - .. 
code-block:: console - - $ openstack endpoint create --region RegionOne \ - volumev3 public http://controller:8776/v3/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 03fa2c90153546c295bf30ca86b1344b | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | ab3bbbef780845a1a283490d281e7fda | - | service_name | cinderv3 | - | service_type | volumev3 | - | url | http://controller:8776/v3/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev3 internal http://controller:8776/v3/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 94f684395d1b41068c70e4ecb11364b2 | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | ab3bbbef780845a1a283490d281e7fda | - | service_name | cinderv3 | - | service_type | volumev3 | - | url | http://controller:8776/v3/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev3 admin http://controller:8776/v3/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 4511c28a0f9840c78bacb25f10f62c98 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | ab3bbbef780845a1a283490d281e7fda | - | service_name | cinderv3 | - | service_type | volumev3 | - | url | http://controller:8776/v3/%(project_id)s | - +--------------+------------------------------------------+ - - .. end - - .. note:: - - The Block Storage services require endpoints for each service - entity. - -Install and configure components --------------------------------- - - - - -#. Install the packages: - - .. code-block:: console - - # apt install cinder-api cinder-scheduler - - .. end - - -2. Edit the ``/etc/cinder/cinder.conf`` file and complete the - following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - - .. end - - Replace ``CINDER_DBPASS`` with the password you chose for the - Block Storage database. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in ``RabbitMQ``. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = cinder - password = CINDER_PASS - - .. 
end - - Replace ``CINDER_PASS`` with the password you chose for - the ``cinder`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to - use the management interface IP address of the controller node: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = 10.0.0.11 - - .. end - - - -3. Populate the Block Storage database: - - .. code-block:: console - - # su -s /bin/sh -c "cinder-manage db sync" cinder - - .. end - - .. note:: - - Ignore any deprecation messages in this output. - - -Configure Compute to use Block Storage --------------------------------------- - -* Edit the ``/etc/nova/nova.conf`` file and add the following - to it: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [cinder] - os_region_name = RegionOne - - .. end - -Finalize installation ---------------------- - - - -#. Restart the Compute API service: - - .. code-block:: console - - # service nova-api restart - - .. end - -#. Restart the Block Storage services: - - .. code-block:: console - - # service cinder-scheduler restart - # service apache2 restart - - .. end - diff --git a/doc/install-guide/source/cinder-controller-install-obs.rst b/doc/install-guide/source/cinder-controller-install-obs.rst deleted file mode 100644 index 11cb0b8a2e..0000000000 --- a/doc/install-guide/source/cinder-controller-install-obs.rst +++ /dev/null @@ -1,394 +0,0 @@ -Install and configure controller node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Block -Storage service, code-named cinder, on the controller node. This -service requires at least one additional storage node that provides -volumes to instances. - -Prerequisites -------------- - -Before you install and configure the Block Storage service, you -must create a database, service credentials, and API endpoints. - -#. To create the database, complete these steps: - - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - $ mysql -u root -p - - .. end - - - * Create the ``cinder`` database: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE cinder; - - .. end - - * Grant proper access to the ``cinder`` database: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - - .. end - - Replace ``CINDER_DBPASS`` with a suitable password. - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to admin-only - CLI commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. To create the service credentials, complete these steps: - - * Create a ``cinder`` user: - - .. code-block:: console - - $ openstack user create --domain default --password-prompt cinder - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | 9d7e33de3e1a498390353819bc7d245d | - | name | cinder | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``cinder`` user: - - .. 
code-block:: console - - $ openstack role add --project service --user cinder admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``cinderv2`` and ``cinderv3`` service entities: - - .. code-block:: console - - $ openstack service create --name cinderv2 \ - --description "OpenStack Block Storage" volumev2 - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Block Storage | - | enabled | True | - | id | eb9fd245bdbc414695952e93f29fe3ac | - | name | cinderv2 | - | type | volumev2 | - +-------------+----------------------------------+ - - .. end - - .. code-block:: console - - $ openstack service create --name cinderv3 \ - --description "OpenStack Block Storage" volumev3 - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Block Storage | - | enabled | True | - | id | ab3bbbef780845a1a283490d281e7fda | - | name | cinderv3 | - | type | volumev3 | - +-------------+----------------------------------+ - - .. end - - .. note:: - - The Block Storage services require two service entities. - -#. Create the Block Storage service API endpoints: - - .. code-block:: console - - $ openstack endpoint create --region RegionOne \ - volumev2 public http://controller:8776/v2/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 513e73819e14460fb904163f41ef3759 | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | eb9fd245bdbc414695952e93f29fe3ac | - | service_name | cinderv2 | - | service_type | volumev2 | - | url | http://controller:8776/v2/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev2 internal http://controller:8776/v2/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 6436a8a23d014cfdb69c586eff146a32 | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | eb9fd245bdbc414695952e93f29fe3ac | - | service_name | cinderv2 | - | service_type | volumev2 | - | url | http://controller:8776/v2/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev2 admin http://controller:8776/v2/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | e652cf84dd334f359ae9b045a2c91d96 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | eb9fd245bdbc414695952e93f29fe3ac | - | service_name | cinderv2 | - | service_type | volumev2 | - | url | http://controller:8776/v2/%(project_id)s | - +--------------+------------------------------------------+ - - .. end - - .. 
code-block:: console - - $ openstack endpoint create --region RegionOne \ - volumev3 public http://controller:8776/v3/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 03fa2c90153546c295bf30ca86b1344b | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | ab3bbbef780845a1a283490d281e7fda | - | service_name | cinderv3 | - | service_type | volumev3 | - | url | http://controller:8776/v3/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev3 internal http://controller:8776/v3/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 94f684395d1b41068c70e4ecb11364b2 | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | ab3bbbef780845a1a283490d281e7fda | - | service_name | cinderv3 | - | service_type | volumev3 | - | url | http://controller:8776/v3/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev3 admin http://controller:8776/v3/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 4511c28a0f9840c78bacb25f10f62c98 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | ab3bbbef780845a1a283490d281e7fda | - | service_name | cinderv3 | - | service_type | volumev3 | - | url | http://controller:8776/v3/%(project_id)s | - +--------------+------------------------------------------+ - - .. end - - .. note:: - - The Block Storage services require endpoints for each service - entity. - -Install and configure components --------------------------------- - - -#. Install the packages: - - .. code-block:: console - - # zypper install openstack-cinder-api openstack-cinder-scheduler - - .. end - - - - -2. Edit the ``/etc/cinder/cinder.conf`` file and complete the - following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - - .. end - - Replace ``CINDER_DBPASS`` with the password you chose for the - Block Storage database. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in ``RabbitMQ``. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = cinder - password = CINDER_PASS - - .. 
end - - Replace ``CINDER_PASS`` with the password you chose for - the ``cinder`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to - use the management interface IP address of the controller node: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = 10.0.0.11 - - .. end - - -* In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/lib/cinder/tmp - - .. end - - - -Configure Compute to use Block Storage --------------------------------------- - -* Edit the ``/etc/nova/nova.conf`` file and add the following - to it: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [cinder] - os_region_name = RegionOne - - .. end - -Finalize installation ---------------------- - - -#. Restart the Compute API service: - - .. code-block:: console - - # systemctl restart openstack-nova-api.service - - .. end - -#. Start the Block Storage services and configure them to start when - the system boots: - - .. code-block:: console - - # systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service - # systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service - - .. end - - diff --git a/doc/install-guide/source/cinder-controller-install-rdo.rst b/doc/install-guide/source/cinder-controller-install-rdo.rst deleted file mode 100644 index 9ac1723e8c..0000000000 --- a/doc/install-guide/source/cinder-controller-install-rdo.rst +++ /dev/null @@ -1,407 +0,0 @@ -Install and configure controller node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Block -Storage service, code-named cinder, on the controller node. This -service requires at least one additional storage node that provides -volumes to instances. - -Prerequisites -------------- - -Before you install and configure the Block Storage service, you -must create a database, service credentials, and API endpoints. - -#. To create the database, complete these steps: - - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - $ mysql -u root -p - - .. end - - - * Create the ``cinder`` database: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE cinder; - - .. end - - * Grant proper access to the ``cinder`` database: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - - .. end - - Replace ``CINDER_DBPASS`` with a suitable password. - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to admin-only - CLI commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. To create the service credentials, complete these steps: - - * Create a ``cinder`` user: - - .. 
code-block:: console - - $ openstack user create --domain default --password-prompt cinder - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | 9d7e33de3e1a498390353819bc7d245d | - | name | cinder | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``cinder`` user: - - .. code-block:: console - - $ openstack role add --project service --user cinder admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``cinderv2`` and ``cinderv3`` service entities: - - .. code-block:: console - - $ openstack service create --name cinderv2 \ - --description "OpenStack Block Storage" volumev2 - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Block Storage | - | enabled | True | - | id | eb9fd245bdbc414695952e93f29fe3ac | - | name | cinderv2 | - | type | volumev2 | - +-------------+----------------------------------+ - - .. end - - .. code-block:: console - - $ openstack service create --name cinderv3 \ - --description "OpenStack Block Storage" volumev3 - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Block Storage | - | enabled | True | - | id | ab3bbbef780845a1a283490d281e7fda | - | name | cinderv3 | - | type | volumev3 | - +-------------+----------------------------------+ - - .. end - - .. note:: - - The Block Storage services require two service entities. - -#. Create the Block Storage service API endpoints: - - .. 
code-block:: console - - $ openstack endpoint create --region RegionOne \ - volumev2 public http://controller:8776/v2/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 513e73819e14460fb904163f41ef3759 | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | eb9fd245bdbc414695952e93f29fe3ac | - | service_name | cinderv2 | - | service_type | volumev2 | - | url | http://controller:8776/v2/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev2 internal http://controller:8776/v2/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 6436a8a23d014cfdb69c586eff146a32 | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | eb9fd245bdbc414695952e93f29fe3ac | - | service_name | cinderv2 | - | service_type | volumev2 | - | url | http://controller:8776/v2/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev2 admin http://controller:8776/v2/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | e652cf84dd334f359ae9b045a2c91d96 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | eb9fd245bdbc414695952e93f29fe3ac | - | service_name | cinderv2 | - | service_type | volumev2 | - | url | http://controller:8776/v2/%(project_id)s | - +--------------+------------------------------------------+ - - .. end - - .. 
code-block:: console - - $ openstack endpoint create --region RegionOne \ - volumev3 public http://controller:8776/v3/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 03fa2c90153546c295bf30ca86b1344b | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | ab3bbbef780845a1a283490d281e7fda | - | service_name | cinderv3 | - | service_type | volumev3 | - | url | http://controller:8776/v3/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev3 internal http://controller:8776/v3/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 94f684395d1b41068c70e4ecb11364b2 | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | ab3bbbef780845a1a283490d281e7fda | - | service_name | cinderv3 | - | service_type | volumev3 | - | url | http://controller:8776/v3/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev3 admin http://controller:8776/v3/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 4511c28a0f9840c78bacb25f10f62c98 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | ab3bbbef780845a1a283490d281e7fda | - | service_name | cinderv3 | - | service_type | volumev3 | - | url | http://controller:8776/v3/%(project_id)s | - +--------------+------------------------------------------+ - - .. end - - .. note:: - - The Block Storage services require endpoints for each service - entity. - -Install and configure components --------------------------------- - - - -#. Install the packages: - - .. code-block:: console - - # yum install openstack-cinder - - .. end - - - -2. Edit the ``/etc/cinder/cinder.conf`` file and complete the - following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - - .. end - - Replace ``CINDER_DBPASS`` with the password you chose for the - Block Storage database. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in ``RabbitMQ``. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = cinder - password = CINDER_PASS - - .. 
end - - Replace ``CINDER_PASS`` with the password you chose for - the ``cinder`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to - use the management interface IP address of the controller node: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = 10.0.0.11 - - .. end - - -* In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/lib/cinder/tmp - - .. end - - - -3. Populate the Block Storage database: - - .. code-block:: console - - # su -s /bin/sh -c "cinder-manage db sync" cinder - - .. end - - .. note:: - - Ignore any deprecation messages in this output. - - -Configure Compute to use Block Storage --------------------------------------- - -* Edit the ``/etc/nova/nova.conf`` file and add the following - to it: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [cinder] - os_region_name = RegionOne - - .. end - -Finalize installation ---------------------- - - -#. Restart the Compute API service: - - .. code-block:: console - - # systemctl restart openstack-nova-api.service - - .. end - -#. Start the Block Storage services and configure them to start when - the system boots: - - .. code-block:: console - - # systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service - # systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service - - .. end - - diff --git a/doc/install-guide/source/cinder-controller-install-ubuntu.rst b/doc/install-guide/source/cinder-controller-install-ubuntu.rst deleted file mode 100644 index 313b5ea10c..0000000000 --- a/doc/install-guide/source/cinder-controller-install-ubuntu.rst +++ /dev/null @@ -1,406 +0,0 @@ -Install and configure controller node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Block -Storage service, code-named cinder, on the controller node. This -service requires at least one additional storage node that provides -volumes to instances. - -Prerequisites -------------- - -Before you install and configure the Block Storage service, you -must create a database, service credentials, and API endpoints. - -#. To create the database, complete these steps: - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - # mysql - - .. end - - - - * Create the ``cinder`` database: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE cinder; - - .. end - - * Grant proper access to the ``cinder`` database: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \ - IDENTIFIED BY 'CINDER_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \ - IDENTIFIED BY 'CINDER_DBPASS'; - - .. end - - Replace ``CINDER_DBPASS`` with a suitable password. - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to admin-only - CLI commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. To create the service credentials, complete these steps: - - * Create a ``cinder`` user: - - .. 
code-block:: console - - $ openstack user create --domain default --password-prompt cinder - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | 9d7e33de3e1a498390353819bc7d245d | - | name | cinder | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``cinder`` user: - - .. code-block:: console - - $ openstack role add --project service --user cinder admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``cinderv2`` and ``cinderv3`` service entities: - - .. code-block:: console - - $ openstack service create --name cinderv2 \ - --description "OpenStack Block Storage" volumev2 - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Block Storage | - | enabled | True | - | id | eb9fd245bdbc414695952e93f29fe3ac | - | name | cinderv2 | - | type | volumev2 | - +-------------+----------------------------------+ - - .. end - - .. code-block:: console - - $ openstack service create --name cinderv3 \ - --description "OpenStack Block Storage" volumev3 - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Block Storage | - | enabled | True | - | id | ab3bbbef780845a1a283490d281e7fda | - | name | cinderv3 | - | type | volumev3 | - +-------------+----------------------------------+ - - .. end - - .. note:: - - The Block Storage services require two service entities. - -#. Create the Block Storage service API endpoints: - - .. 
code-block:: console - - $ openstack endpoint create --region RegionOne \ - volumev2 public http://controller:8776/v2/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 513e73819e14460fb904163f41ef3759 | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | eb9fd245bdbc414695952e93f29fe3ac | - | service_name | cinderv2 | - | service_type | volumev2 | - | url | http://controller:8776/v2/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev2 internal http://controller:8776/v2/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 6436a8a23d014cfdb69c586eff146a32 | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | eb9fd245bdbc414695952e93f29fe3ac | - | service_name | cinderv2 | - | service_type | volumev2 | - | url | http://controller:8776/v2/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev2 admin http://controller:8776/v2/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | e652cf84dd334f359ae9b045a2c91d96 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | eb9fd245bdbc414695952e93f29fe3ac | - | service_name | cinderv2 | - | service_type | volumev2 | - | url | http://controller:8776/v2/%(project_id)s | - +--------------+------------------------------------------+ - - .. end - - .. 
code-block:: console - - $ openstack endpoint create --region RegionOne \ - volumev3 public http://controller:8776/v3/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 03fa2c90153546c295bf30ca86b1344b | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | ab3bbbef780845a1a283490d281e7fda | - | service_name | cinderv3 | - | service_type | volumev3 | - | url | http://controller:8776/v3/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev3 internal http://controller:8776/v3/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 94f684395d1b41068c70e4ecb11364b2 | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | ab3bbbef780845a1a283490d281e7fda | - | service_name | cinderv3 | - | service_type | volumev3 | - | url | http://controller:8776/v3/%(project_id)s | - +--------------+------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - volumev3 admin http://controller:8776/v3/%\(project_id\)s - - +--------------+------------------------------------------+ - | Field | Value | - +--------------+------------------------------------------+ - | enabled | True | - | id | 4511c28a0f9840c78bacb25f10f62c98 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | ab3bbbef780845a1a283490d281e7fda | - | service_name | cinderv3 | - | service_type | volumev3 | - | url | http://controller:8776/v3/%(project_id)s | - +--------------+------------------------------------------+ - - .. end - - .. note:: - - The Block Storage services require endpoints for each service - entity. - -Install and configure components --------------------------------- - - - - -#. Install the packages: - - .. code-block:: console - - # apt install cinder-api cinder-scheduler - - .. end - - -2. Edit the ``/etc/cinder/cinder.conf`` file and complete the - following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - - .. end - - Replace ``CINDER_DBPASS`` with the password you chose for the - Block Storage database. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in ``RabbitMQ``. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = cinder - password = CINDER_PASS - - .. 
end - - Replace ``CINDER_PASS`` with the password you chose for - the ``cinder`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to - use the management interface IP address of the controller node: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = 10.0.0.11 - - .. end - - -* In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/lib/cinder/tmp - - .. end - - - -3. Populate the Block Storage database: - - .. code-block:: console - - # su -s /bin/sh -c "cinder-manage db sync" cinder - - .. end - - .. note:: - - Ignore any deprecation messages in this output. - - -Configure Compute to use Block Storage --------------------------------------- - -* Edit the ``/etc/nova/nova.conf`` file and add the following - to it: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [cinder] - os_region_name = RegionOne - - .. end - -Finalize installation ---------------------- - - - -#. Restart the Compute API service: - - .. code-block:: console - - # service nova-api restart - - .. end - -#. Restart the Block Storage services: - - .. code-block:: console - - # service cinder-scheduler restart - # service apache2 restart - - .. end - diff --git a/doc/install-guide/source/cinder-controller-install.rst b/doc/install-guide/source/cinder-controller-install.rst deleted file mode 100644 index 8b0abfd484..0000000000 --- a/doc/install-guide/source/cinder-controller-install.rst +++ /dev/null @@ -1,9 +0,0 @@ -.. _cinder-controller: - -Install and configure controller node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. toctree:: - :glob: - - cinder-controller-install-* diff --git a/doc/install-guide/source/cinder-next-steps.rst b/doc/install-guide/source/cinder-next-steps.rst deleted file mode 100644 index 828a58268e..0000000000 --- a/doc/install-guide/source/cinder-next-steps.rst +++ /dev/null @@ -1,9 +0,0 @@ -.. _cinder-next-steps: - -========== -Next steps -========== - -Your OpenStack environment now includes Block Storage. You can -:doc:`launch an instance ` or add more -services to your environment in the following chapters. diff --git a/doc/install-guide/source/cinder-storage-install-debian.rst b/doc/install-guide/source/cinder-storage-install-debian.rst deleted file mode 100644 index af64b986c5..0000000000 --- a/doc/install-guide/source/cinder-storage-install-debian.rst +++ /dev/null @@ -1,263 +0,0 @@ -Install and configure a storage node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure storage nodes -for the Block Storage service. For simplicity, this configuration -references one storage node with an empty local block storage device. -The instructions use ``/dev/sdb``, but you can substitute a different -value for your particular node. - -The service provisions logical volumes on this device using the -:term:`LVM ` driver and provides them -to instances via :term:`iSCSI ` transport. -You can follow these instructions with minor modifications to horizontally -scale your environment with additional storage nodes. - -Prerequisites -------------- - -Before you install and configure the Block Storage service on the -storage node, you must prepare the storage device. - -.. note:: - - Perform these steps on the storage node. - -#. 
Install the supporting utility packages: - - - - - .. note:: - - Some distributions include LVM by default. - -#. Create the LVM physical volume ``/dev/sdb``: - - .. code-block:: console - - # pvcreate /dev/sdb - - Physical volume "/dev/sdb" successfully created - - .. end - -#. Create the LVM volume group ``cinder-volumes``: - - .. code-block:: console - - # vgcreate cinder-volumes /dev/sdb - - Volume group "cinder-volumes" successfully created - - .. end - - The Block Storage service creates logical volumes in this volume group. - -#. Only instances can access Block Storage volumes. However, the - underlying operating system manages the devices associated with - the volumes. By default, the LVM volume scanning tool scans the - ``/dev`` directory for block storage devices that - contain volumes. If projects use LVM on their volumes, the scanning - tool detects these volumes and attempts to cache them which can cause - a variety of problems with both the underlying operating system - and project volumes. You must reconfigure LVM to scan only the devices - that contain the ``cinder-volumes`` volume group. Edit the - ``/etc/lvm/lvm.conf`` file and complete the following actions: - - * In the ``devices`` section, add a filter that accepts the - ``/dev/sdb`` device and rejects all other devices: - - .. path /etc/lvm/lvm.conf - .. code-block:: none - - devices { - ... - filter = [ "a/sdb/", "r/.*/"] - - .. end - - Each item in the filter array begins with ``a`` for **accept** or - ``r`` for **reject** and includes a regular expression for the - device name. The array must end with ``r/.*/`` to reject any - remaining devices. You can use the :command:`vgs -vvvv` command - to test filters. - - .. warning:: - - If your storage nodes use LVM on the operating system disk, you - must also add the associated device to the filter. For example, - if the ``/dev/sda`` device contains the operating system: - - .. ignore_path /etc/lvm/lvm.conf - .. code-block:: ini - - filter = [ "a/sda/", "a/sdb/", "r/.*/"] - - .. end - - Similarly, if your compute nodes use LVM on the operating - system disk, you must also modify the filter in the - ``/etc/lvm/lvm.conf`` file on those nodes to include only - the operating system disk. For example, if the ``/dev/sda`` - device contains the operating system: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: ini - - filter = [ "a/sda/", "r/.*/"] - - .. end - -Install and configure components --------------------------------- - - - - -#. Install the packages: - - .. code-block:: console - - # apt install cinder-volume - - .. end - - -2. Edit the ``/etc/cinder/cinder.conf`` file - and complete the following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - - .. end - - Replace ``CINDER_DBPASS`` with the password you chose for - the Block Storage database. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for - the ``openstack`` account in ``RabbitMQ``. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... 
- auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = cinder - password = CINDER_PASS - - .. end - - Replace ``CINDER_PASS`` with the password you chose for the - ``cinder`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` section, configure the ``my_ip`` option: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS - - .. end - - Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address - of the management network interface on your storage node, - typically 10.0.0.41 for the first node in the - :ref:`example architecture `. - - - - * In the ``[DEFAULT]`` section, enable the LVM back end: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - enabled_backends = lvm - - .. end - - .. note:: - - Back-end names are arbitrary. As an example, this guide - uses the name of the driver as the name of the back end. - - * In the ``[DEFAULT]`` section, configure the location of the - Image service API: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - glance_api_servers = http://controller:9292 - - .. end - - * In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/lib/cinder/tmp - - .. end - - -Finalize installation ---------------------- - - - - -#. Restart the Block Storage volume service including its dependencies: - - .. code-block:: console - - # service tgt restart - # service cinder-volume restart - - .. end - diff --git a/doc/install-guide/source/cinder-storage-install-obs.rst b/doc/install-guide/source/cinder-storage-install-obs.rst deleted file mode 100644 index 7ba542cfaa..0000000000 --- a/doc/install-guide/source/cinder-storage-install-obs.rst +++ /dev/null @@ -1,309 +0,0 @@ -Install and configure a storage node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure storage nodes -for the Block Storage service. For simplicity, this configuration -references one storage node with an empty local block storage device. -The instructions use ``/dev/sdb``, but you can substitute a different -value for your particular node. - -The service provisions logical volumes on this device using the -:term:`LVM ` driver and provides them -to instances via :term:`iSCSI ` transport. -You can follow these instructions with minor modifications to horizontally -scale your environment with additional storage nodes. - -Prerequisites -------------- - -Before you install and configure the Block Storage service on the -storage node, you must prepare the storage device. - -.. note:: - - Perform these steps on the storage node. - -#. Install the supporting utility packages: - - -* Install the LVM packages: - - .. code-block:: console - - # zypper install lvm2 - - .. end - -* (Optional) If you intend to use non-raw image types such as QCOW2 - and VMDK, install the QEMU package: - - .. code-block:: console - - # zypper install qemu - - .. end - - - - - .. note:: - - Some distributions include LVM by default. - -#. Create the LVM physical volume ``/dev/sdb``: - - .. 
code-block:: console - - # pvcreate /dev/sdb - - Physical volume "/dev/sdb" successfully created - - .. end - -#. Create the LVM volume group ``cinder-volumes``: - - .. code-block:: console - - # vgcreate cinder-volumes /dev/sdb - - Volume group "cinder-volumes" successfully created - - .. end - - The Block Storage service creates logical volumes in this volume group. - -#. Only instances can access Block Storage volumes. However, the - underlying operating system manages the devices associated with - the volumes. By default, the LVM volume scanning tool scans the - ``/dev`` directory for block storage devices that - contain volumes. If projects use LVM on their volumes, the scanning - tool detects these volumes and attempts to cache them which can cause - a variety of problems with both the underlying operating system - and project volumes. You must reconfigure LVM to scan only the devices - that contain the ``cinder-volumes`` volume group. Edit the - ``/etc/lvm/lvm.conf`` file and complete the following actions: - - * In the ``devices`` section, add a filter that accepts the - ``/dev/sdb`` device and rejects all other devices: - - .. path /etc/lvm/lvm.conf - .. code-block:: none - - devices { - ... - filter = [ "a/sdb/", "r/.*/"] - - .. end - - Each item in the filter array begins with ``a`` for **accept** or - ``r`` for **reject** and includes a regular expression for the - device name. The array must end with ``r/.*/`` to reject any - remaining devices. You can use the :command:`vgs -vvvv` command - to test filters. - - .. warning:: - - If your storage nodes use LVM on the operating system disk, you - must also add the associated device to the filter. For example, - if the ``/dev/sda`` device contains the operating system: - - .. ignore_path /etc/lvm/lvm.conf - .. code-block:: ini - - filter = [ "a/sda/", "a/sdb/", "r/.*/"] - - .. end - - Similarly, if your compute nodes use LVM on the operating - system disk, you must also modify the filter in the - ``/etc/lvm/lvm.conf`` file on those nodes to include only - the operating system disk. For example, if the ``/dev/sda`` - device contains the operating system: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: ini - - filter = [ "a/sda/", "r/.*/"] - - .. end - -Install and configure components --------------------------------- - - -#. Install the packages: - - .. code-block:: console - - # zypper install openstack-cinder-volume tgt - - .. end - - - - -2. Edit the ``/etc/cinder/cinder.conf`` file - and complete the following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - - .. end - - Replace ``CINDER_DBPASS`` with the password you chose for - the Block Storage database. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for - the ``openstack`` account in ``RabbitMQ``. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... 
- auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = cinder - password = CINDER_PASS - - .. end - - Replace ``CINDER_PASS`` with the password you chose for the - ``cinder`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` section, configure the ``my_ip`` option: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS - - .. end - - Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address - of the management network interface on your storage node, - typically 10.0.0.41 for the first node in the - :ref:`example architecture `. - - -* In the ``[lvm]`` section, configure the LVM back end with the - LVM driver, ``cinder-volumes`` volume group, iSCSI protocol, - and appropriate iSCSI service: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [lvm] - # ... - volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver - volume_group = cinder-volumes - iscsi_protocol = iscsi - iscsi_helper = tgtadm - - .. end - - - - * In the ``[DEFAULT]`` section, enable the LVM back end: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - enabled_backends = lvm - - .. end - - .. note:: - - Back-end names are arbitrary. As an example, this guide - uses the name of the driver as the name of the back end. - - * In the ``[DEFAULT]`` section, configure the location of the - Image service API: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - glance_api_servers = http://controller:9292 - - .. end - - * In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/lib/cinder/tmp - - .. end - - -3. Create the ``/etc/tgt/conf.d/cinder.conf`` file - with the following data: - - .. code-block:: shell - - include /var/lib/cinder/volumes/* - - .. end - - -Finalize installation ---------------------- - - -* Start the Block Storage volume service including its dependencies - and configure them to start when the system boots: - - .. code-block:: console - - # systemctl enable openstack-cinder-volume.service tgtd.service - # systemctl start openstack-cinder-volume.service tgtd.service - - .. end - - - diff --git a/doc/install-guide/source/cinder-storage-install-rdo.rst b/doc/install-guide/source/cinder-storage-install-rdo.rst deleted file mode 100644 index f79f7c6cc8..0000000000 --- a/doc/install-guide/source/cinder-storage-install-rdo.rst +++ /dev/null @@ -1,300 +0,0 @@ -Install and configure a storage node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure storage nodes -for the Block Storage service. For simplicity, this configuration -references one storage node with an empty local block storage device. -The instructions use ``/dev/sdb``, but you can substitute a different -value for your particular node. - -The service provisions logical volumes on this device using the -:term:`LVM ` driver and provides them -to instances via :term:`iSCSI ` transport. -You can follow these instructions with minor modifications to horizontally -scale your environment with additional storage nodes. 
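As a quick sanity check before preparing the device (an illustrative step, not part of the original guide), you can confirm that the example ``/dev/sdb`` device is visible and carries no partitions or file system; substitute whatever device name applies to your node:

.. code-block:: console

   # lsblk --fs /dev/sdb

.. end

If the output shows an existing file system or mount point, choose a different device: the LVM steps that follow will overwrite whatever the device contains.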
- -Prerequisites -------------- - -Before you install and configure the Block Storage service on the -storage node, you must prepare the storage device. - -.. note:: - - Perform these steps on the storage node. - -#. Install the supporting utility packages: - - - -* Install the LVM packages: - - .. code-block:: console - - # yum install lvm2 - - .. end - -* Start the LVM metadata service and configure it to start when the - system boots: - - .. code-block:: console - - # systemctl enable lvm2-lvmetad.service - # systemctl start lvm2-lvmetad.service - - .. end - - - - .. note:: - - Some distributions include LVM by default. - -#. Create the LVM physical volume ``/dev/sdb``: - - .. code-block:: console - - # pvcreate /dev/sdb - - Physical volume "/dev/sdb" successfully created - - .. end - -#. Create the LVM volume group ``cinder-volumes``: - - .. code-block:: console - - # vgcreate cinder-volumes /dev/sdb - - Volume group "cinder-volumes" successfully created - - .. end - - The Block Storage service creates logical volumes in this volume group. - -#. Only instances can access Block Storage volumes. However, the - underlying operating system manages the devices associated with - the volumes. By default, the LVM volume scanning tool scans the - ``/dev`` directory for block storage devices that - contain volumes. If projects use LVM on their volumes, the scanning - tool detects these volumes and attempts to cache them which can cause - a variety of problems with both the underlying operating system - and project volumes. You must reconfigure LVM to scan only the devices - that contain the ``cinder-volumes`` volume group. Edit the - ``/etc/lvm/lvm.conf`` file and complete the following actions: - - * In the ``devices`` section, add a filter that accepts the - ``/dev/sdb`` device and rejects all other devices: - - .. path /etc/lvm/lvm.conf - .. code-block:: none - - devices { - ... - filter = [ "a/sdb/", "r/.*/"] - - .. end - - Each item in the filter array begins with ``a`` for **accept** or - ``r`` for **reject** and includes a regular expression for the - device name. The array must end with ``r/.*/`` to reject any - remaining devices. You can use the :command:`vgs -vvvv` command - to test filters. - - .. warning:: - - If your storage nodes use LVM on the operating system disk, you - must also add the associated device to the filter. For example, - if the ``/dev/sda`` device contains the operating system: - - .. ignore_path /etc/lvm/lvm.conf - .. code-block:: ini - - filter = [ "a/sda/", "a/sdb/", "r/.*/"] - - .. end - - Similarly, if your compute nodes use LVM on the operating - system disk, you must also modify the filter in the - ``/etc/lvm/lvm.conf`` file on those nodes to include only - the operating system disk. For example, if the ``/dev/sda`` - device contains the operating system: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: ini - - filter = [ "a/sda/", "r/.*/"] - - .. end - -Install and configure components --------------------------------- - - - -#. Install the packages: - - .. code-block:: console - - # yum install openstack-cinder targetcli python-keystone - - .. end - - - -2. Edit the ``/etc/cinder/cinder.conf`` file - and complete the following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - - .. 
end - - Replace ``CINDER_DBPASS`` with the password you chose for - the Block Storage database. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for - the ``openstack`` account in ``RabbitMQ``. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = cinder - password = CINDER_PASS - - .. end - - Replace ``CINDER_PASS`` with the password you chose for the - ``cinder`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` section, configure the ``my_ip`` option: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS - - .. end - - Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address - of the management network interface on your storage node, - typically 10.0.0.41 for the first node in the - :ref:`example architecture `. - - - -* In the ``[lvm]`` section, configure the LVM back end with the - LVM driver, ``cinder-volumes`` volume group, iSCSI protocol, - and appropriate iSCSI service. If the ``[lvm]`` section does not exist, - create it: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [lvm] - volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver - volume_group = cinder-volumes - iscsi_protocol = iscsi - iscsi_helper = lioadm - - .. end - - - * In the ``[DEFAULT]`` section, enable the LVM back end: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - enabled_backends = lvm - - .. end - - .. note:: - - Back-end names are arbitrary. As an example, this guide - uses the name of the driver as the name of the back end. - - * In the ``[DEFAULT]`` section, configure the location of the - Image service API: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - glance_api_servers = http://controller:9292 - - .. end - - * In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/lib/cinder/tmp - - .. end - - -Finalize installation ---------------------- - - - -* Start the Block Storage volume service including its dependencies - and configure them to start when the system boots: - - .. code-block:: console - - # systemctl enable openstack-cinder-volume.service target.service - # systemctl start openstack-cinder-volume.service target.service - - .. 
end - - diff --git a/doc/install-guide/source/cinder-storage-install-ubuntu.rst b/doc/install-guide/source/cinder-storage-install-ubuntu.rst deleted file mode 100644 index 783d86f740..0000000000 --- a/doc/install-guide/source/cinder-storage-install-ubuntu.rst +++ /dev/null @@ -1,287 +0,0 @@ -Install and configure a storage node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure storage nodes -for the Block Storage service. For simplicity, this configuration -references one storage node with an empty local block storage device. -The instructions use ``/dev/sdb``, but you can substitute a different -value for your particular node. - -The service provisions logical volumes on this device using the -:term:`LVM ` driver and provides them -to instances via :term:`iSCSI ` transport. -You can follow these instructions with minor modifications to horizontally -scale your environment with additional storage nodes. - -Prerequisites -------------- - -Before you install and configure the Block Storage service on the -storage node, you must prepare the storage device. - -.. note:: - - Perform these steps on the storage node. - -#. Install the supporting utility packages: - - - - -.. code-block:: console - - # apt install lvm2 - -.. end - - - .. note:: - - Some distributions include LVM by default. - -#. Create the LVM physical volume ``/dev/sdb``: - - .. code-block:: console - - # pvcreate /dev/sdb - - Physical volume "/dev/sdb" successfully created - - .. end - -#. Create the LVM volume group ``cinder-volumes``: - - .. code-block:: console - - # vgcreate cinder-volumes /dev/sdb - - Volume group "cinder-volumes" successfully created - - .. end - - The Block Storage service creates logical volumes in this volume group. - -#. Only instances can access Block Storage volumes. However, the - underlying operating system manages the devices associated with - the volumes. By default, the LVM volume scanning tool scans the - ``/dev`` directory for block storage devices that - contain volumes. If projects use LVM on their volumes, the scanning - tool detects these volumes and attempts to cache them which can cause - a variety of problems with both the underlying operating system - and project volumes. You must reconfigure LVM to scan only the devices - that contain the ``cinder-volumes`` volume group. Edit the - ``/etc/lvm/lvm.conf`` file and complete the following actions: - - * In the ``devices`` section, add a filter that accepts the - ``/dev/sdb`` device and rejects all other devices: - - .. path /etc/lvm/lvm.conf - .. code-block:: none - - devices { - ... - filter = [ "a/sdb/", "r/.*/"] - - .. end - - Each item in the filter array begins with ``a`` for **accept** or - ``r`` for **reject** and includes a regular expression for the - device name. The array must end with ``r/.*/`` to reject any - remaining devices. You can use the :command:`vgs -vvvv` command - to test filters. - - .. warning:: - - If your storage nodes use LVM on the operating system disk, you - must also add the associated device to the filter. For example, - if the ``/dev/sda`` device contains the operating system: - - .. ignore_path /etc/lvm/lvm.conf - .. code-block:: ini - - filter = [ "a/sda/", "a/sdb/", "r/.*/"] - - .. end - - Similarly, if your compute nodes use LVM on the operating - system disk, you must also modify the filter in the - ``/etc/lvm/lvm.conf`` file on those nodes to include only - the operating system disk. 
For example, if the ``/dev/sda`` - device contains the operating system: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: ini - - filter = [ "a/sda/", "r/.*/"] - - .. end - -Install and configure components --------------------------------- - - - - -#. Install the packages: - - .. code-block:: console - - # apt install cinder-volume - - .. end - - -2. Edit the ``/etc/cinder/cinder.conf`` file - and complete the following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder - - .. end - - Replace ``CINDER_DBPASS`` with the password you chose for - the Block Storage database. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for - the ``openstack`` account in ``RabbitMQ``. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = cinder - password = CINDER_PASS - - .. end - - Replace ``CINDER_PASS`` with the password you chose for the - ``cinder`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` section, configure the ``my_ip`` option: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS - - .. end - - Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address - of the management network interface on your storage node, - typically 10.0.0.41 for the first node in the - :ref:`example architecture `. - - -* In the ``[lvm]`` section, configure the LVM back end with the - LVM driver, ``cinder-volumes`` volume group, iSCSI protocol, - and appropriate iSCSI service: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [lvm] - # ... - volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver - volume_group = cinder-volumes - iscsi_protocol = iscsi - iscsi_helper = tgtadm - - .. end - - - - * In the ``[DEFAULT]`` section, enable the LVM back end: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - enabled_backends = lvm - - .. end - - .. note:: - - Back-end names are arbitrary. As an example, this guide - uses the name of the driver as the name of the back end. - - * In the ``[DEFAULT]`` section, configure the location of the - Image service API: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [DEFAULT] - # ... - glance_api_servers = http://controller:9292 - - .. end - - * In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/cinder/cinder.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/lib/cinder/tmp - - .. end - - -Finalize installation ---------------------- - - - - -#. Restart the Block Storage volume service including its dependencies: - - .. 
code-block:: console - - # service tgt restart - # service cinder-volume restart - - .. end - diff --git a/doc/install-guide/source/cinder-storage-install.rst b/doc/install-guide/source/cinder-storage-install.rst deleted file mode 100644 index 539c25ed8b..0000000000 --- a/doc/install-guide/source/cinder-storage-install.rst +++ /dev/null @@ -1,9 +0,0 @@ -.. _cinder-storage: - -Install and configure a storage node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. toctree:: - :glob: - - cinder-storage-install-* diff --git a/doc/install-guide/source/cinder-verify.rst b/doc/install-guide/source/cinder-verify.rst deleted file mode 100644 index 9c41da06e9..0000000000 --- a/doc/install-guide/source/cinder-verify.rst +++ /dev/null @@ -1,35 +0,0 @@ -.. _cinder-verify: - -Verify operation -~~~~~~~~~~~~~~~~ - -Verify operation of the Block Storage service. - -.. note:: - - Perform these commands on the controller node. - -#. Source the ``admin`` credentials to gain access to - admin-only CLI commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. List service components to verify successful launch of each process: - - .. code-block:: console - - $ openstack volume service list - - +------------------+------------+------+---------+-------+----------------------------+ - | Binary | Host | Zone | Status | State | Updated_at | - +------------------+------------+------+---------+-------+----------------------------+ - | cinder-scheduler | controller | nova | enabled | up | 2016-09-30T02:27:41.000000 | - | cinder-volume | block@lvm | nova | enabled | up | 2016-09-30T02:27:46.000000 | - +------------------+------------+------+---------+-------+----------------------------+ - - - .. end diff --git a/doc/install-guide/source/cinder.rst b/doc/install-guide/source/cinder.rst deleted file mode 100644 index 752946971a..0000000000 --- a/doc/install-guide/source/cinder.rst +++ /dev/null @@ -1,26 +0,0 @@ -.. _cinder: - -===================== -Block Storage service -===================== - -.. toctree:: - - common/get-started-block-storage.rst - cinder-controller-install.rst - cinder-storage-install.rst - cinder-verify.rst - cinder-next-steps.rst - -The Block Storage service (cinder) provides block storage devices -to guest instances. The method in which the storage is provisioned and -consumed is determined by the Block Storage driver, or drivers -in the case of a multi-backend configuration. There are a variety of -drivers that are available: NAS/SAN, NFS, iSCSI, Ceph, and more. - -The Block Storage API and scheduler services typically run on the controller -nodes. Depending upon the drivers used, the volume service can run -on controller nodes, compute nodes, or standalone storage nodes. - -For more information, see the -`Configuration Reference `_. diff --git a/doc/install-guide/source/conf.py b/doc/install-guide/source/conf.py index 0f3f4c4cf3..33efbf8881 100644 --- a/doc/install-guide/source/conf.py +++ b/doc/install-guide/source/conf.py @@ -80,19 +80,12 @@ release = '15.0.0' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. 
-exclude_patterns = ['common/cli*', 'common/nova*', - 'common/get-started-with-openstack.rst', - 'common/get-started-openstack-services.rst', - 'common/get-started-logical-architecture.rst', - 'common/get-started-dashboard.rst', - 'common/get-started-storage-concepts.rst', - 'common/get-started-database-service.rst', - 'common/get-started-data-processing.rst', - 'common/get-started-object-storage.rst', - 'common/get-started-orchestration.rst', - 'common/get-started-shared-file-systems.rst', - 'common/get-started-telemetry.rst', - 'shared/note_configuration_vary_by_distribution.rst'] +exclude_patterns = [ + 'common/cli*', + 'common/nova*', + 'common/get-started-*.rst', + 'shared/note_configuration_vary_by_distribution.rst', +] # The reST default role (used for this markup: `text`) to use for all # documents. diff --git a/doc/install-guide/source/glance-install-debian.rst b/doc/install-guide/source/glance-install-debian.rst deleted file mode 100644 index eaf691267a..0000000000 --- a/doc/install-guide/source/glance-install-debian.rst +++ /dev/null @@ -1,329 +0,0 @@ -Install and configure -~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Image service, -code-named glance, on the controller node. For simplicity, this -configuration stores images on the local file system. - -Prerequisites -------------- - -Before you install and configure the Image service, you must -create a database, service credentials, and API endpoints. - -#. To create the database, complete these steps: - - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - $ mysql -u root -p - - .. end - - - * Create the ``glance`` database: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE glance; - - .. end - - * Grant proper access to the ``glance`` database: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - - .. end - - Replace ``GLANCE_DBPASS`` with a suitable password. - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to - admin-only CLI commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. To create the service credentials, complete these steps: - - * Create the ``glance`` user: - - .. code-block:: console - - $ openstack user create --domain default --password-prompt glance - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | 3f4e777c4062483ab8d9edd7dff829df | - | name | glance | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``glance`` user and - ``service`` project: - - .. code-block:: console - - $ openstack role add --project service --user glance admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``glance`` service entity: - - .. 
code-block:: console - - $ openstack service create --name glance \ - --description "OpenStack Image" image - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Image | - | enabled | True | - | id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | name | glance | - | type | image | - +-------------+----------------------------------+ - - .. end - -#. Create the Image service API endpoints: - - .. code-block:: console - - $ openstack endpoint create --region RegionOne \ - image public http://controller:9292 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 340be3625e9b4239a6415d034e98aace | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | service_name | glance | - | service_type | image | - | url | http://controller:9292 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - image internal http://controller:9292 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | a6e4b153c2ae4c919eccfdbb7dceb5d2 | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | service_name | glance | - | service_type | image | - | url | http://controller:9292 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - image admin http://controller:9292 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 0c37ed58103f4300a84ff125a539032d | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | service_name | glance | - | service_type | image | - | url | http://controller:9292 | - +--------------+----------------------------------+ - - .. end - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - - - - -#. Install the packages: - - .. code-block:: console - - # apt install glance - - .. end - - -2. Edit the ``/etc/glance/glance-api.conf`` file and complete the - following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/glance/glance.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - .. end - - Replace ``GLANCE_DBPASS`` with the password you chose for the - Image service database. - - * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, - configure Identity service access: - - .. path /etc/glance/glance.conf - .. code-block:: ini - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - # ... - flavor = keystone - - .. end - - Replace ``GLANCE_PASS`` with the password you chose for the - ``glance`` user in the Identity service. - - .. 
note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[glance_store]`` section, configure the local file - system store and location of image files: - - .. path /etc/glance/glance.conf - .. code-block:: ini - - [glance_store] - # ... - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - - .. end - -3. Edit the ``/etc/glance/glance-registry.conf`` file and complete - the following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/glance/glance-registry.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - .. end - - Replace ``GLANCE_DBPASS`` with the password you chose for the - Image service database. - - * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, - configure Identity service access: - - .. path /etc/glance/glance-registry.conf - .. code-block:: ini - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - # ... - flavor = keystone - - .. end - - Replace ``GLANCE_PASS`` with the password you chose for the - ``glance`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - -4. Populate the Image service database: - - .. code-block:: console - - # su -s /bin/sh -c "glance-manage db_sync" glance - - .. end - - .. note:: - - Ignore any deprecation messages in this output. - - -Finalize installation ---------------------- - - - -#. Restart the Image services: - - .. code-block:: console - - # service glance-registry restart - # service glance-api restart - - .. end - diff --git a/doc/install-guide/source/glance-install-obs.rst b/doc/install-guide/source/glance-install-obs.rst deleted file mode 100644 index 6d42fea396..0000000000 --- a/doc/install-guide/source/glance-install-obs.rst +++ /dev/null @@ -1,333 +0,0 @@ -Install and configure -~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Image service, -code-named glance, on the controller node. For simplicity, this -configuration stores images on the local file system. - -Prerequisites -------------- - -Before you install and configure the Image service, you must -create a database, service credentials, and API endpoints. - -#. To create the database, complete these steps: - - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - $ mysql -u root -p - - .. end - - - * Create the ``glance`` database: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE glance; - - .. end - - * Grant proper access to the ``glance`` database: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - - .. end - - Replace ``GLANCE_DBPASS`` with a suitable password. - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to - admin-only CLI commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. 
To create the service credentials, complete these steps: - - * Create the ``glance`` user: - - .. code-block:: console - - $ openstack user create --domain default --password-prompt glance - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | 3f4e777c4062483ab8d9edd7dff829df | - | name | glance | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``glance`` user and - ``service`` project: - - .. code-block:: console - - $ openstack role add --project service --user glance admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``glance`` service entity: - - .. code-block:: console - - $ openstack service create --name glance \ - --description "OpenStack Image" image - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Image | - | enabled | True | - | id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | name | glance | - | type | image | - +-------------+----------------------------------+ - - .. end - -#. Create the Image service API endpoints: - - .. code-block:: console - - $ openstack endpoint create --region RegionOne \ - image public http://controller:9292 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 340be3625e9b4239a6415d034e98aace | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | service_name | glance | - | service_type | image | - | url | http://controller:9292 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - image internal http://controller:9292 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | a6e4b153c2ae4c919eccfdbb7dceb5d2 | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | service_name | glance | - | service_type | image | - | url | http://controller:9292 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - image admin http://controller:9292 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 0c37ed58103f4300a84ff125a539032d | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | service_name | glance | - | service_type | image | - | url | http://controller:9292 | - +--------------+----------------------------------+ - - .. end - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - -.. note:: - - Starting with the Newton release, SUSE OpenStack packages are shipping - with the upstream default configuration files. 
For example - ``/etc/glance/glance-api.conf`` or - ``/etc/glance/glance-registry.conf``, with customizations in - ``/etc/glance/glance-api.conf.d/`` or - ``/etc/glance/glance-registry.conf.d/``. While the following - instructions modify the default configuration files, adding new files - in ``/etc/glance/glance-api.conf.d`` or - ``/etc/glance/glance-registry.conf.d`` achieves the same result. - - - -#. Install the packages: - - .. code-block:: console - - # zypper install openstack-glance \ - openstack-glance-api openstack-glance-registry - - .. end - - - - -2. Edit the ``/etc/glance/glance-api.conf`` file and complete the - following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/glance/glance.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - .. end - - Replace ``GLANCE_DBPASS`` with the password you chose for the - Image service database. - - * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, - configure Identity service access: - - .. path /etc/glance/glance.conf - .. code-block:: ini - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - # ... - flavor = keystone - - .. end - - Replace ``GLANCE_PASS`` with the password you chose for the - ``glance`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[glance_store]`` section, configure the local file - system store and location of image files: - - .. path /etc/glance/glance.conf - .. code-block:: ini - - [glance_store] - # ... - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - - .. end - -3. Edit the ``/etc/glance/glance-registry.conf`` file and complete - the following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/glance/glance-registry.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - .. end - - Replace ``GLANCE_DBPASS`` with the password you chose for the - Image service database. - - * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, - configure Identity service access: - - .. path /etc/glance/glance-registry.conf - .. code-block:: ini - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - # ... - flavor = keystone - - .. end - - Replace ``GLANCE_PASS`` with the password you chose for the - ``glance`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - -Finalize installation ---------------------- - - -* Start the Image services and configure them to start when - the system boots: - - .. code-block:: console - - # systemctl enable openstack-glance-api.service \ - openstack-glance-registry.service - # systemctl start openstack-glance-api.service \ - openstack-glance-registry.service - - .. 
end - - diff --git a/doc/install-guide/source/glance-install-rdo.rst b/doc/install-guide/source/glance-install-rdo.rst deleted file mode 100644 index 7e9e28951b..0000000000 --- a/doc/install-guide/source/glance-install-rdo.rst +++ /dev/null @@ -1,332 +0,0 @@ -Install and configure -~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Image service, -code-named glance, on the controller node. For simplicity, this -configuration stores images on the local file system. - -Prerequisites -------------- - -Before you install and configure the Image service, you must -create a database, service credentials, and API endpoints. - -#. To create the database, complete these steps: - - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - $ mysql -u root -p - - .. end - - - * Create the ``glance`` database: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE glance; - - .. end - - * Grant proper access to the ``glance`` database: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - - .. end - - Replace ``GLANCE_DBPASS`` with a suitable password. - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to - admin-only CLI commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. To create the service credentials, complete these steps: - - * Create the ``glance`` user: - - .. code-block:: console - - $ openstack user create --domain default --password-prompt glance - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | 3f4e777c4062483ab8d9edd7dff829df | - | name | glance | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``glance`` user and - ``service`` project: - - .. code-block:: console - - $ openstack role add --project service --user glance admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``glance`` service entity: - - .. code-block:: console - - $ openstack service create --name glance \ - --description "OpenStack Image" image - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Image | - | enabled | True | - | id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | name | glance | - | type | image | - +-------------+----------------------------------+ - - .. end - -#. Create the Image service API endpoints: - - .. 
code-block:: console - - $ openstack endpoint create --region RegionOne \ - image public http://controller:9292 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 340be3625e9b4239a6415d034e98aace | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | service_name | glance | - | service_type | image | - | url | http://controller:9292 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - image internal http://controller:9292 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | a6e4b153c2ae4c919eccfdbb7dceb5d2 | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | service_name | glance | - | service_type | image | - | url | http://controller:9292 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - image admin http://controller:9292 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 0c37ed58103f4300a84ff125a539032d | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | service_name | glance | - | service_type | image | - | url | http://controller:9292 | - +--------------+----------------------------------+ - - .. end - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - - - -#. Install the packages: - - .. code-block:: console - - # yum install openstack-glance - - .. end - - - -2. Edit the ``/etc/glance/glance-api.conf`` file and complete the - following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/glance/glance.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - .. end - - Replace ``GLANCE_DBPASS`` with the password you chose for the - Image service database. - - * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, - configure Identity service access: - - .. path /etc/glance/glance.conf - .. code-block:: ini - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - # ... - flavor = keystone - - .. end - - Replace ``GLANCE_PASS`` with the password you chose for the - ``glance`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[glance_store]`` section, configure the local file - system store and location of image files: - - .. path /etc/glance/glance.conf - .. code-block:: ini - - [glance_store] - # ... - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - - .. end - -3. 
Edit the ``/etc/glance/glance-registry.conf`` file and complete - the following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/glance/glance-registry.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - .. end - - Replace ``GLANCE_DBPASS`` with the password you chose for the - Image service database. - - * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, - configure Identity service access: - - .. path /etc/glance/glance-registry.conf - .. code-block:: ini - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - # ... - flavor = keystone - - .. end - - Replace ``GLANCE_PASS`` with the password you chose for the - ``glance`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - -4. Populate the Image service database: - - .. code-block:: console - - # su -s /bin/sh -c "glance-manage db_sync" glance - - .. end - - .. note:: - - Ignore any deprecation messages in this output. - - -Finalize installation ---------------------- - - -* Start the Image services and configure them to start when - the system boots: - - .. code-block:: console - - # systemctl enable openstack-glance-api.service \ - openstack-glance-registry.service - # systemctl start openstack-glance-api.service \ - openstack-glance-registry.service - - .. end - - diff --git a/doc/install-guide/source/glance-install-ubuntu.rst b/doc/install-guide/source/glance-install-ubuntu.rst deleted file mode 100644 index c84a51dc83..0000000000 --- a/doc/install-guide/source/glance-install-ubuntu.rst +++ /dev/null @@ -1,329 +0,0 @@ -Install and configure -~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Image service, -code-named glance, on the controller node. For simplicity, this -configuration stores images on the local file system. - -Prerequisites -------------- - -Before you install and configure the Image service, you must -create a database, service credentials, and API endpoints. - -#. To create the database, complete these steps: - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - # mysql - - .. end - - - - * Create the ``glance`` database: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE glance; - - .. end - - * Grant proper access to the ``glance`` database: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \ - IDENTIFIED BY 'GLANCE_DBPASS'; - - .. end - - Replace ``GLANCE_DBPASS`` with a suitable password. - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to - admin-only CLI commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. To create the service credentials, complete these steps: - - * Create the ``glance`` user: - - .. 
code-block:: console - - $ openstack user create --domain default --password-prompt glance - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | 3f4e777c4062483ab8d9edd7dff829df | - | name | glance | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``glance`` user and - ``service`` project: - - .. code-block:: console - - $ openstack role add --project service --user glance admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``glance`` service entity: - - .. code-block:: console - - $ openstack service create --name glance \ - --description "OpenStack Image" image - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Image | - | enabled | True | - | id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | name | glance | - | type | image | - +-------------+----------------------------------+ - - .. end - -#. Create the Image service API endpoints: - - .. code-block:: console - - $ openstack endpoint create --region RegionOne \ - image public http://controller:9292 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 340be3625e9b4239a6415d034e98aace | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | service_name | glance | - | service_type | image | - | url | http://controller:9292 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - image internal http://controller:9292 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | a6e4b153c2ae4c919eccfdbb7dceb5d2 | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | service_name | glance | - | service_type | image | - | url | http://controller:9292 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - image admin http://controller:9292 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 0c37ed58103f4300a84ff125a539032d | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 | - | service_name | glance | - | service_type | image | - | url | http://controller:9292 | - +--------------+----------------------------------+ - - .. end - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - - - - -#. Install the packages: - - .. code-block:: console - - # apt install glance - - .. end - - -2. Edit the ``/etc/glance/glance-api.conf`` file and complete the - following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/glance/glance.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - .. 
end - - Replace ``GLANCE_DBPASS`` with the password you chose for the - Image service database. - - * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, - configure Identity service access: - - .. path /etc/glance/glance.conf - .. code-block:: ini - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - # ... - flavor = keystone - - .. end - - Replace ``GLANCE_PASS`` with the password you chose for the - ``glance`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[glance_store]`` section, configure the local file - system store and location of image files: - - .. path /etc/glance/glance.conf - .. code-block:: ini - - [glance_store] - # ... - stores = file,http - default_store = file - filesystem_store_datadir = /var/lib/glance/images/ - - .. end - -3. Edit the ``/etc/glance/glance-registry.conf`` file and complete - the following actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/glance/glance-registry.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance - - .. end - - Replace ``GLANCE_DBPASS`` with the password you chose for the - Image service database. - - * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections, - configure Identity service access: - - .. path /etc/glance/glance-registry.conf - .. code-block:: ini - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = glance - password = GLANCE_PASS - - [paste_deploy] - # ... - flavor = keystone - - .. end - - Replace ``GLANCE_PASS`` with the password you chose for the - ``glance`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - -4. Populate the Image service database: - - .. code-block:: console - - # su -s /bin/sh -c "glance-manage db_sync" glance - - .. end - - .. note:: - - Ignore any deprecation messages in this output. - - -Finalize installation ---------------------- - - - -#. Restart the Image services: - - .. code-block:: console - - # service glance-registry restart - # service glance-api restart - - .. end - diff --git a/doc/install-guide/source/glance-install.rst b/doc/install-guide/source/glance-install.rst deleted file mode 100644 index 9fa396ffb8..0000000000 --- a/doc/install-guide/source/glance-install.rst +++ /dev/null @@ -1,11 +0,0 @@ -Install and configure -~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Image service, -code-named glance, on the controller node. For simplicity, this -configuration stores images on the local file system. - -.. 
toctree:: - :glob: - - glance-install-* diff --git a/doc/install-guide/source/glance-verify.rst b/doc/install-guide/source/glance-verify.rst deleted file mode 100644 index 686e279ddc..0000000000 --- a/doc/install-guide/source/glance-verify.rst +++ /dev/null @@ -1,103 +0,0 @@ -Verify operation -~~~~~~~~~~~~~~~~ - -Verify operation of the Image service using -`CirrOS `__, a small -Linux image that helps you test your OpenStack deployment. - -For more information about how to download and build images, see -`OpenStack Virtual Machine Image Guide -`__. -For information about how to manage images, see the -`OpenStack End User Guide -`__. - -.. note:: - - Perform these commands on the controller node. - -#. Source the ``admin`` credentials to gain access to - admin-only CLI commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. Download the source image: - - .. code-block:: console - - $ wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img - - .. end - - .. note:: - - Install ``wget`` if your distribution does not include it. - -#. Upload the image to the Image service using the - :term:`QCOW2 ` disk format, :term:`bare` - container format, and public visibility so all projects can access it: - - .. code-block:: console - - $ openstack image create "cirros" \ - --file cirros-0.3.5-x86_64-disk.img \ - --disk-format qcow2 --container-format bare \ - --public - - +------------------+------------------------------------------------------+ - | Field | Value | - +------------------+------------------------------------------------------+ - | checksum | 133eae9fb1c98f45894a4e60d8736619 | - | container_format | bare | - | created_at | 2015-03-26T16:52:10Z | - | disk_format | qcow2 | - | file | /v2/images/cc5c6982-4910-471e-b864-1098015901b5/file | - | id | cc5c6982-4910-471e-b864-1098015901b5 | - | min_disk | 0 | - | min_ram | 0 | - | name | cirros | - | owner | ae7a98326b9c455588edd2656d723b9d | - | protected | False | - | schema | /v2/schemas/image | - | size | 13200896 | - | status | active | - | tags | | - | updated_at | 2015-03-26T16:52:10Z | - | virtual_size | None | - | visibility | public | - +------------------+------------------------------------------------------+ - - .. end - - For information about the :command:`openstack image create` parameters, - see `Create or update an image (glance) - `__ - in the ``OpenStack User Guide``. - - For information about disk and container formats for images, see - `Disk and container formats for images - `__ - in the ``OpenStack Virtual Machine Image Guide``. - - .. note:: - - OpenStack generates IDs dynamically, so you will see - different values in the example command output. - -#. Confirm upload of the image and validate attributes: - - .. code-block:: console - - $ openstack image list - - +--------------------------------------+--------+--------+ - | ID | Name | Status | - +--------------------------------------+--------+--------+ - | 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active | - +--------------------------------------+--------+--------+ - - .. end diff --git a/doc/install-guide/source/glance.rst b/doc/install-guide/source/glance.rst deleted file mode 100644 index 8729a34704..0000000000 --- a/doc/install-guide/source/glance.rst +++ /dev/null @@ -1,9 +0,0 @@ -============= -Image service -============= - -.. 
toctree:: - - common/get-started-image-service.rst - glance-install.rst - glance-verify.rst diff --git a/doc/install-guide/source/horizon-install-debian.rst b/doc/install-guide/source/horizon-install-debian.rst deleted file mode 100644 index 1c6cc80c66..0000000000 --- a/doc/install-guide/source/horizon-install-debian.rst +++ /dev/null @@ -1,212 +0,0 @@ -Install and configure -~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the dashboard -on the controller node. - -The only core service required by the dashboard is the Identity service. -You can use the dashboard in combination with other services, such as -Image service, Compute, and Networking. You can also use the dashboard -in environments with stand-alone services such as Object Storage. - -.. note:: - - This section assumes proper installation, configuration, and operation - of the Identity service using the Apache HTTP server and Memcached - service as described in the :ref:`Install and configure the Identity - service ` section. - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - - - - -1. Install the packages: - - .. code-block:: console - - # apt install openstack-dashboard-apache - - .. end - -2. Respond to prompts for web server configuration. - - .. note:: - - The automatic configuration process generates a self-signed - SSL certificate. Consider obtaining an official certificate - for production environments. - - .. note:: - - There are two modes of installation. One using ``/horizon`` as the URL, - keeping your default vhost and only adding an Alias directive: this is - the default. The other mode will remove the default Apache vhost and install - the dashboard on the webroot. It was the only available option - before the Liberty release. If you prefer to set the Apache configuration - manually, install the ``openstack-dashboard`` package instead of - ``openstack-dashboard-apache``. - - - - - -2. Edit the - ``/etc/openstack-dashboard/local_settings.py`` - file and complete the following actions: - - * Configure the dashboard to use OpenStack services on the - ``controller`` node: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - OPENSTACK_HOST = "controller" - - .. end - - * In the Dashboard configuration section, allow your hosts to access - Dashboard: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - ALLOWED_HOSTS = ['one.example.com', 'two.example.com'] - - .. end - - .. note:: - - - Do not edit the ``ALLOWED_HOSTS`` parameter under the Ubuntu - configuration section. - - ``ALLOWED_HOSTS`` can also be ``['*']`` to accept all hosts. This - may be useful for development work, but is potentially insecure - and should not be used in production. See the - `Django documentation - `_ - for further information. - - * Configure the ``memcached`` session storage service: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - SESSION_ENGINE = 'django.contrib.sessions.backends.cache' - - CACHES = { - 'default': { - 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', - 'LOCATION': 'controller:11211', - } - } - - .. end - - .. note:: - - Comment out any other session storage configuration. - - * Enable the Identity API version 3: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST - - .. 
end - - * Enable support for domains: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True - - .. end - - * Configure API versions: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - OPENSTACK_API_VERSIONS = { - "identity": 3, - "image": 2, - "volume": 2, - } - - .. end - - * Configure ``Default`` as the default domain for users that you create - via the dashboard: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default" - - .. end - - * Configure ``user`` as the default role for - users that you create via the dashboard: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" - - .. end - - * If you chose networking option 1, disable support for layer-3 - networking services: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - OPENSTACK_NEUTRON_NETWORK = { - ... - 'enable_router': False, - 'enable_quotas': False, - 'enable_ipv6': False, - 'enable_distributed_router': False, - 'enable_ha_router': False, - 'enable_lb': False, - 'enable_firewall': False, - 'enable_vpn': False, - 'enable_fip_topology_check': False, - } - - .. end - - * Optionally, configure the time zone: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - TIME_ZONE = "TIME_ZONE" - - .. end - - Replace ``TIME_ZONE`` with an appropriate time zone identifier. - For more information, see the `list of time zones - `__. - - -Finalize installation ---------------------- - - -* Reload the web server configuration: - - .. code-block:: console - - # service apache2 reload - - .. end - - - diff --git a/doc/install-guide/source/horizon-install-obs.rst b/doc/install-guide/source/horizon-install-obs.rst deleted file mode 100644 index c8d26a7516..0000000000 --- a/doc/install-guide/source/horizon-install-obs.rst +++ /dev/null @@ -1,204 +0,0 @@ -Install and configure -~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the dashboard -on the controller node. - -The only core service required by the dashboard is the Identity service. -You can use the dashboard in combination with other services, such as -Image service, Compute, and Networking. You can also use the dashboard -in environments with stand-alone services such as Object Storage. - -.. note:: - - This section assumes proper installation, configuration, and operation - of the Identity service using the Apache HTTP server and Memcached - service as described in the :ref:`Install and configure the Identity - service ` section. - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - -1. Install the packages: - - .. code-block:: console - - # zypper install openstack-dashboard - - .. end - - - - - - -2. Configure the web server: - - .. code-block:: console - - # cp /etc/apache2/conf.d/openstack-dashboard.conf.sample \ - /etc/apache2/conf.d/openstack-dashboard.conf - # a2enmod rewrite - - .. end - -3. Edit the - ``/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py`` - file and complete the following actions: - - * Configure the dashboard to use OpenStack services on the - ``controller`` node: - - .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py - .. code-block:: python - - OPENSTACK_HOST = "controller" - - .. 
end - - * Allow your hosts to access the dashboard: - - .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py - .. code-block:: python - - ALLOWED_HOSTS = ['one.example.com', 'two.example.com'] - - .. end - - .. note:: - - ``ALLOWED_HOSTS`` can also be ``['*']`` to accept all hosts. This may be - useful for development work, but is potentially insecure and should - not be used in production. See `Django documentation - `_ - for further information. - - * Configure the ``memcached`` session storage service: - - .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py - .. code-block:: python - - SESSION_ENGINE = 'django.contrib.sessions.backends.cache' - - CACHES = { - 'default': { - 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', - 'LOCATION': 'controller:11211', - } - } - - .. end - - .. note:: - - Comment out any other session storage configuration. - - * Enable the Identity API version 3: - - .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py - .. code-block:: python - - OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST - - .. end - - * Enable support for domains: - - .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py - .. code-block:: python - - OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True - - .. end - - * Configure API versions: - - .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py - .. code-block:: python - - OPENSTACK_API_VERSIONS = { - "identity": 3, - "image": 2, - "volume": 2, - } - - .. end - - * Configure ``Default`` as the default domain for users that you create - via the dashboard: - - .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py - .. code-block:: python - - OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default" - - .. end - - * Configure ``user`` as the default role for - users that you create via the dashboard: - - .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py - .. code-block:: python - - OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" - - .. end - - * If you chose networking option 1, disable support for layer-3 - networking services: - - .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py - .. code-block:: python - - OPENSTACK_NEUTRON_NETWORK = { - ... - 'enable_router': False, - 'enable_quotas': False, - 'enable_distributed_router': False, - 'enable_ha_router': False, - 'enable_lb': False, - 'enable_firewall': False, - 'enable_vpn': False, - 'enable_fip_topology_check': False, - } - - .. end - - * Optionally, configure the time zone: - - .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py - .. code-block:: python - - TIME_ZONE = "TIME_ZONE" - - .. end - - Replace ``TIME_ZONE`` with an appropriate time zone identifier. - For more information, see the `list of time zones - `__. - - - - -Finalize installation ---------------------- - - - -* Restart the web server and session storage service: - - .. code-block:: console - - # systemctl restart apache2.service memcached.service - - .. end - - .. note:: - - The ``systemctl restart`` command starts each service if - not currently running. 
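If you also want the dashboard to come back automatically after a reboot, you can
additionally enable both services. This is only a suggested extra step and a generic
systemd command, not something specific to the dashboard packages:

.. code-block:: console

   # systemctl enable apache2.service memcached.service

.. end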
- - diff --git a/doc/install-guide/source/horizon-install-rdo.rst b/doc/install-guide/source/horizon-install-rdo.rst deleted file mode 100644 index c6c8961b69..0000000000 --- a/doc/install-guide/source/horizon-install-rdo.rst +++ /dev/null @@ -1,194 +0,0 @@ -Install and configure -~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the dashboard -on the controller node. - -The only core service required by the dashboard is the Identity service. -You can use the dashboard in combination with other services, such as -Image service, Compute, and Networking. You can also use the dashboard -in environments with stand-alone services such as Object Storage. - -.. note:: - - This section assumes proper installation, configuration, and operation - of the Identity service using the Apache HTTP server and Memcached - service as described in the :ref:`Install and configure the Identity - service ` section. - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - - -1. Install the packages: - - .. code-block:: console - - # yum install openstack-dashboard - - .. end - - - - - - -2. Edit the - ``/etc/openstack-dashboard/local_settings`` - file and complete the following actions: - - * Configure the dashboard to use OpenStack services on the - ``controller`` node: - - .. path /etc/openstack-dashboard/local_settings - .. code-block:: python - - OPENSTACK_HOST = "controller" - - .. end - - * Allow your hosts to access the dashboard: - - .. path /etc/openstack-dashboard/local_settings - .. code-block:: python - - ALLOWED_HOSTS = ['one.example.com', 'two.example.com'] - - .. end - - .. note:: - - ALLOWED_HOSTS can also be ['*'] to accept all hosts. This may be - useful for development work, but is potentially insecure and should - not be used in production. See - https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts - for further information. - - * Configure the ``memcached`` session storage service: - - .. path /etc/openstack-dashboard/local_settings - .. code-block:: python - - SESSION_ENGINE = 'django.contrib.sessions.backends.cache' - - CACHES = { - 'default': { - 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', - 'LOCATION': 'controller:11211', - } - } - - .. end - - .. note:: - - Comment out any other session storage configuration. - - * Enable the Identity API version 3: - - .. path /etc/openstack-dashboard/local_settings - .. code-block:: python - - OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST - - .. end - - * Enable support for domains: - - .. path /etc/openstack-dashboard/local_settings - .. code-block:: python - - OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True - - .. end - - * Configure API versions: - - .. path /etc/openstack-dashboard/local_settings - .. code-block:: python - - OPENSTACK_API_VERSIONS = { - "identity": 3, - "image": 2, - "volume": 2, - } - - .. end - - * Configure ``Default`` as the default domain for users that you create - via the dashboard: - - .. path /etc/openstack-dashboard/local_settings - .. code-block:: python - - OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default" - - .. end - - * Configure ``user`` as the default role for - users that you create via the dashboard: - - .. path /etc/openstack-dashboard/local_settings - .. code-block:: python - - OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" - - .. end - - * If you chose networking option 1, disable support for layer-3 - networking services: - - .. 
path /etc/openstack-dashboard/local_settings - .. code-block:: python - - OPENSTACK_NEUTRON_NETWORK = { - ... - 'enable_router': False, - 'enable_quotas': False, - 'enable_distributed_router': False, - 'enable_ha_router': False, - 'enable_lb': False, - 'enable_firewall': False, - 'enable_vpn': False, - 'enable_fip_topology_check': False, - } - - .. end - - * Optionally, configure the time zone: - - .. path /etc/openstack-dashboard/local_settings - .. code-block:: python - - TIME_ZONE = "TIME_ZONE" - - .. end - - Replace ``TIME_ZONE`` with an appropriate time zone identifier. - For more information, see the `list of time zones - `__. - - - -Finalize installation ---------------------- - - - - -* Restart the web server and session storage service: - - .. code-block:: console - - # systemctl restart httpd.service memcached.service - - .. end - - .. note:: - - The ``systemctl restart`` command starts each service if - not currently running. - diff --git a/doc/install-guide/source/horizon-install-ubuntu.rst b/doc/install-guide/source/horizon-install-ubuntu.rst deleted file mode 100644 index 03dee2e59a..0000000000 --- a/doc/install-guide/source/horizon-install-ubuntu.rst +++ /dev/null @@ -1,194 +0,0 @@ -Install and configure -~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the dashboard -on the controller node. - -The only core service required by the dashboard is the Identity service. -You can use the dashboard in combination with other services, such as -Image service, Compute, and Networking. You can also use the dashboard -in environments with stand-alone services such as Object Storage. - -.. note:: - - This section assumes proper installation, configuration, and operation - of the Identity service using the Apache HTTP server and Memcached - service as described in the :ref:`Install and configure the Identity - service ` section. - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - - - -1. Install the packages: - - .. code-block:: console - - # apt install openstack-dashboard - - .. end - - - - - - -2. Edit the - ``/etc/openstack-dashboard/local_settings.py`` - file and complete the following actions: - - * Configure the dashboard to use OpenStack services on the - ``controller`` node: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - OPENSTACK_HOST = "controller" - - .. end - - * In the Dashboard configuration section, allow your hosts to access - Dashboard: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - ALLOWED_HOSTS = ['one.example.com', 'two.example.com'] - - .. end - - .. note:: - - - Do not edit the ``ALLOWED_HOSTS`` parameter under the Ubuntu - configuration section. - - ``ALLOWED_HOSTS`` can also be ``['*']`` to accept all hosts. This - may be useful for development work, but is potentially insecure - and should not be used in production. See the - `Django documentation - `_ - for further information. - - * Configure the ``memcached`` session storage service: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - SESSION_ENGINE = 'django.contrib.sessions.backends.cache' - - CACHES = { - 'default': { - 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache', - 'LOCATION': 'controller:11211', - } - } - - .. end - - .. note:: - - Comment out any other session storage configuration. - - * Enable the Identity API version 3: - - .. 
path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST - - .. end - - * Enable support for domains: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True - - .. end - - * Configure API versions: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - OPENSTACK_API_VERSIONS = { - "identity": 3, - "image": 2, - "volume": 2, - } - - .. end - - * Configure ``Default`` as the default domain for users that you create - via the dashboard: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default" - - .. end - - * Configure ``user`` as the default role for - users that you create via the dashboard: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" - - .. end - - * If you chose networking option 1, disable support for layer-3 - networking services: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - OPENSTACK_NEUTRON_NETWORK = { - ... - 'enable_router': False, - 'enable_quotas': False, - 'enable_ipv6': False, - 'enable_distributed_router': False, - 'enable_ha_router': False, - 'enable_lb': False, - 'enable_firewall': False, - 'enable_vpn': False, - 'enable_fip_topology_check': False, - } - - .. end - - * Optionally, configure the time zone: - - .. path /etc/openstack-dashboard/local_settings.py - .. code-block:: python - - TIME_ZONE = "TIME_ZONE" - - .. end - - Replace ``TIME_ZONE`` with an appropriate time zone identifier. - For more information, see the `list of time zones - `__. - - -Finalize installation ---------------------- - - -* Reload the web server configuration: - - .. code-block:: console - - # service apache2 reload - - .. end - - - diff --git a/doc/install-guide/source/horizon-install.rst b/doc/install-guide/source/horizon-install.rst deleted file mode 100644 index ac2ca61a6b..0000000000 --- a/doc/install-guide/source/horizon-install.rst +++ /dev/null @@ -1,22 +0,0 @@ -Install and configure -~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the dashboard -on the controller node. - -The only core service required by the dashboard is the Identity service. -You can use the dashboard in combination with other services, such as -Image service, Compute, and Networking. You can also use the dashboard -in environments with stand-alone services such as Object Storage. - -.. note:: - - This section assumes proper installation, configuration, and operation - of the Identity service using the Apache HTTP server and Memcached - service as described in the :ref:`Install and configure the Identity - service ` section. - -.. toctree:: - :glob: - - horizon-install-* diff --git a/doc/install-guide/source/horizon-next-steps.rst b/doc/install-guide/source/horizon-next-steps.rst deleted file mode 100644 index c5638a1a9a..0000000000 --- a/doc/install-guide/source/horizon-next-steps.rst +++ /dev/null @@ -1,31 +0,0 @@ -========== -Next steps -========== - -Your OpenStack environment now includes the dashboard. You can -:ref:`launch-instance` or add more services to your environment. - -After you install and configure the dashboard, you can -complete the following tasks: - -* Provide users with a public IP address, a username, and a password - so they can access the dashboard through a web browser. 
In case of - any SSL certificate connection problems, point the server - IP address to a domain name, and give users access. - -* Customize your dashboard. See section - `Customize and configure the Dashboard - `__. - -* Set up session storage. See - `Set up session storage for the dashboard - `__. - -* To use the VNC client with the dashboard, the browser - must support HTML5 Canvas and HTML5 WebSockets. - - For details about browsers that support noVNC, see - `README - `__ - and `browser support - `__. diff --git a/doc/install-guide/source/horizon-verify-debian.rst b/doc/install-guide/source/horizon-verify-debian.rst deleted file mode 100644 index 536abcbdf1..0000000000 --- a/doc/install-guide/source/horizon-verify-debian.rst +++ /dev/null @@ -1,14 +0,0 @@ -Verify operation -~~~~~~~~~~~~~~~~ - -Verify operation of the dashboard. - - -Access the dashboard using a web browser at -``http://controller/``. - - - - -Authenticate using ``admin`` or ``demo`` user -and ``default`` domain credentials. diff --git a/doc/install-guide/source/horizon-verify-obs.rst b/doc/install-guide/source/horizon-verify-obs.rst deleted file mode 100644 index 536abcbdf1..0000000000 --- a/doc/install-guide/source/horizon-verify-obs.rst +++ /dev/null @@ -1,14 +0,0 @@ -Verify operation -~~~~~~~~~~~~~~~~ - -Verify operation of the dashboard. - - -Access the dashboard using a web browser at -``http://controller/``. - - - - -Authenticate using ``admin`` or ``demo`` user -and ``default`` domain credentials. diff --git a/doc/install-guide/source/horizon-verify-rdo.rst b/doc/install-guide/source/horizon-verify-rdo.rst deleted file mode 100644 index 43394ed166..0000000000 --- a/doc/install-guide/source/horizon-verify-rdo.rst +++ /dev/null @@ -1,14 +0,0 @@ -Verify operation -~~~~~~~~~~~~~~~~ - -Verify operation of the dashboard. - - - -Access the dashboard using a web browser at -``http://controller/dashboard``. - - - -Authenticate using ``admin`` or ``demo`` user -and ``default`` domain credentials. diff --git a/doc/install-guide/source/horizon-verify-ubuntu.rst b/doc/install-guide/source/horizon-verify-ubuntu.rst deleted file mode 100644 index aad1ccb661..0000000000 --- a/doc/install-guide/source/horizon-verify-ubuntu.rst +++ /dev/null @@ -1,14 +0,0 @@ -Verify operation -~~~~~~~~~~~~~~~~ - -Verify operation of the dashboard. - - - - -Access the dashboard using a web browser at -``http://controller/horizon``. - - -Authenticate using ``admin`` or ``demo`` user -and ``default`` domain credentials. diff --git a/doc/install-guide/source/horizon-verify.rst b/doc/install-guide/source/horizon-verify.rst deleted file mode 100644 index e9586fe064..0000000000 --- a/doc/install-guide/source/horizon-verify.rst +++ /dev/null @@ -1,7 +0,0 @@ -Verify operation -~~~~~~~~~~~~~~~~ - -.. toctree:: - :glob: - - horizon-verify-* diff --git a/doc/install-guide/source/horizon.rst b/doc/install-guide/source/horizon.rst deleted file mode 100644 index 9fb09e483f..0000000000 --- a/doc/install-guide/source/horizon.rst +++ /dev/null @@ -1,15 +0,0 @@ -========= -Dashboard -========= - -.. toctree:: - - horizon-install.rst - horizon-verify.rst - horizon-next-steps.rst - -The Dashboard (horizon) is a web interface that enables cloud -administrators and users to manage various OpenStack resources -and services. - -This example deployment uses an Apache web server. 
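The verification pages above only describe a manual browser check. As an optional extra, the following sketch (not part of the original guide) uses the third-party ``requests`` library to confirm that the dashboard answers over HTTP; the hostname and URL path are assumptions and must match your deployment (``/``, ``/dashboard``, or ``/horizon`` depending on the distribution).

.. code-block:: python

   # Optional reachability check for the dashboard login page.
   # Assumes the ``requests`` library is installed and that the
   # ``controller`` hostname resolves from where the script runs.
   import requests

   # Adjust the path to match your distribution:
   #   Debian/openSUSE: '/', RDO: '/dashboard', Ubuntu: '/horizon'
   DASHBOARD_URL = "http://controller/horizon"

   response = requests.get(DASHBOARD_URL, timeout=10)
   response.raise_for_status()  # raise if the server returned 4xx/5xx

   if "login" in response.text.lower():
       print("Dashboard login page reachable at", DASHBOARD_URL)
   else:
       print("HTTP", response.status_code, "returned, but no login form found")

.. end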
diff --git a/doc/install-guide/source/index-debian.rst b/doc/install-guide/source/index-debian.rst index dbc6780173..9cd254af26 100644 --- a/doc/install-guide/source/index-debian.rst +++ b/doc/install-guide/source/index-debian.rst @@ -65,13 +65,6 @@ Contents common/conventions.rst overview.rst environment.rst - keystone.rst - glance.rst - nova.rst - neutron.rst - horizon.rst - cinder.rst - additional-services.rst launch-instance.rst common/appendix.rst diff --git a/doc/install-guide/source/index-obs.rst b/doc/install-guide/source/index-obs.rst index 643cefcb38..dded9dc09c 100644 --- a/doc/install-guide/source/index-obs.rst +++ b/doc/install-guide/source/index-obs.rst @@ -53,13 +53,6 @@ Contents common/conventions.rst overview.rst environment.rst - keystone.rst - glance.rst - nova.rst - neutron.rst - horizon.rst - cinder.rst - additional-services.rst launch-instance.rst common/appendix.rst diff --git a/doc/install-guide/source/index-rdo.rst b/doc/install-guide/source/index-rdo.rst index ae0e447c48..e5a2a2b52c 100644 --- a/doc/install-guide/source/index-rdo.rst +++ b/doc/install-guide/source/index-rdo.rst @@ -54,13 +54,6 @@ Contents common/conventions.rst overview.rst environment.rst - keystone.rst - glance.rst - nova.rst - neutron.rst - horizon.rst - cinder.rst - additional-services.rst launch-instance.rst common/appendix.rst diff --git a/doc/install-guide/source/index-ubuntu.rst b/doc/install-guide/source/index-ubuntu.rst index 100f88e8f9..70cc3da003 100644 --- a/doc/install-guide/source/index-ubuntu.rst +++ b/doc/install-guide/source/index-ubuntu.rst @@ -52,13 +52,6 @@ Contents common/conventions.rst overview.rst environment.rst - keystone.rst - glance.rst - nova.rst - neutron.rst - horizon.rst - cinder.rst - additional-services.rst launch-instance.rst common/appendix.rst diff --git a/doc/install-guide/source/keystone-install-debian.rst b/doc/install-guide/source/keystone-install-debian.rst deleted file mode 100644 index cf0a79e96f..0000000000 --- a/doc/install-guide/source/keystone-install-debian.rst +++ /dev/null @@ -1,197 +0,0 @@ -Install and configure -~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the OpenStack -Identity service, code-named keystone, on the controller node. For -scalability purposes, this configuration deploys Fernet tokens and -the Apache HTTP server to handle requests. - -Prerequisites -------------- - -Before you install and configure the Identity service, you must -create a database. - - - - -#. Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - $ mysql -u root -p - - .. end - - -2. Create the ``keystone`` database: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE keystone; - - .. end - -#. Grant proper access to the ``keystone`` database: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - - .. end - - Replace ``KEYSTONE_DBPASS`` with a suitable password. - -#. Exit the database access client. - -.. _keystone-install-configure-debian: - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - - - -.. note:: - - This guide uses the Apache HTTP server with ``mod_wsgi`` to serve - Identity service requests on ports 5000 and 35357. 
By default, the - keystone service still listens on these ports. The package handles - all of the Apache configuration for you (including the activation of - the ``mod_wsgi`` apache2 module and keystone configuration in Apache). - -#. Run the following command to install the packages: - - .. code-block:: console - - # apt install keystone - - .. end - - - - - -2. Edit the ``/etc/keystone/keystone.conf`` file and complete the following - actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/keystone/keystone.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone - - .. end - - Replace ``KEYSTONE_DBPASS`` with the password you chose for the database. - - .. note:: - - Comment out or remove any other ``connection`` options in the - ``[database]`` section. - - * In the ``[token]`` section, configure the Fernet token provider: - - .. path /etc/keystone/keystone.conf - .. code-block:: ini - - [token] - # ... - provider = fernet - - .. end - -3. Populate the Identity service database: - - .. code-block:: console - - # su -s /bin/sh -c "keystone-manage db_sync" keystone - - .. end - -4. Initialize Fernet key repositories: - - .. code-block:: console - - # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone - # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone - - .. end - -5. Bootstrap the Identity service: - - .. code-block:: console - - # keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ - --bootstrap-admin-url http://controller:35357/v3/ \ - --bootstrap-internal-url http://controller:5000/v3/ \ - --bootstrap-public-url http://controller:5000/v3/ \ - --bootstrap-region-id RegionOne - - .. end - - Replace ``ADMIN_PASS`` with a suitable password for an administrative user. - -Configure the Apache HTTP server --------------------------------- - - - -#. Edit the ``/etc/apache2/apache2.conf`` file and configure the - ``ServerName`` option to reference the controller node: - - .. path /etc/apache2/apache2.conf - .. code-block:: apache - - ServerName controller - - .. end - - - -.. note:: - - The Debian package will perform the below operations for you: - - .. code-block:: console - - # a2enmod wsgi - # a2ensite wsgi-keystone.conf - # invoke-rc.d apache2 restart - - .. end - - - - -Finalize the installation -------------------------- - - - - -2. Configure the administrative account - - .. code-block:: console - - $ export OS_USERNAME=admin - $ export OS_PASSWORD=ADMIN_PASS - $ export OS_PROJECT_NAME=admin - $ export OS_USER_DOMAIN_NAME=Default - $ export OS_PROJECT_DOMAIN_NAME=Default - $ export OS_AUTH_URL=http://controller:35357/v3 - $ export OS_IDENTITY_API_VERSION=3 - - .. end - - Replace ``ADMIN_PASS`` with the password used in the - ``keystone-manage bootstrap`` command in `keystone-install-configure-debian`_. diff --git a/doc/install-guide/source/keystone-install-obs.rst b/doc/install-guide/source/keystone-install-obs.rst deleted file mode 100644 index eb31bedfde..0000000000 --- a/doc/install-guide/source/keystone-install-obs.rst +++ /dev/null @@ -1,261 +0,0 @@ -Install and configure -~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the OpenStack -Identity service, code-named keystone, on the controller node. For -scalability purposes, this configuration deploys Fernet tokens and -the Apache HTTP server to handle requests. 
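As background on the Fernet tokens mentioned above: keystone's Fernet provider builds on the Fernet primitive from the third-party ``cryptography`` package. The sketch below only illustrates that primitive (symmetric, keyed, non-persisted tokens); it is not how keystone itself issues tokens, and the payload is invented for the example.

.. code-block:: python

   # Illustration of the Fernet primitive underlying keystone's token
   # provider. Keystone manages real keys in its own repository (set up
   # later with ``keystone-manage fernet_setup``); this example only
   # generates a throwaway key.
   from cryptography.fernet import Fernet

   key = Fernet.generate_key()
   f = Fernet(key)

   token = f.encrypt(b"example payload")  # opaque token, nothing stored server-side
   print(f.decrypt(token))                # only holders of the key can read it back

.. end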
- -Prerequisites -------------- - -Before you install and configure the Identity service, you must -create a database. - - -.. note:: - - Before you begin, ensure you have the most recent version of - ``python-pyasn1`` `installed `_. - - - - -#. Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - $ mysql -u root -p - - .. end - - -2. Create the ``keystone`` database: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE keystone; - - .. end - -#. Grant proper access to the ``keystone`` database: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - - .. end - - Replace ``KEYSTONE_DBPASS`` with a suitable password. - -#. Exit the database access client. - -.. _keystone-install-configure-obs: - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - -.. note:: - - This guide uses the Apache HTTP server with ``mod_wsgi`` to serve - Identity service requests on ports 5000 and 35357. By default, the - keystone service still listens on these ports. Therefore, this guide - manually disables the keystone service. - - - -.. note:: - - Starting with the Newton release, SUSE OpenStack packages are shipping - with the upstream default configuration files. For example - ``/etc/keystone/keystone.conf``, with customizations in - ``/etc/keystone/keystone.conf.d/010-keystone.conf``. While the - following instructions modify the default configuration file, adding a - new file in ``/etc/keystone/keystone.conf.d`` achieves the same - result. - - - - - - -#. Run the following command to install the packages: - - .. code-block:: console - - # zypper install openstack-keystone apache2-mod_wsgi - - .. end - - -2. Edit the ``/etc/keystone/keystone.conf`` file and complete the following - actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/keystone/keystone.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone - - .. end - - Replace ``KEYSTONE_DBPASS`` with the password you chose for the database. - - .. note:: - - Comment out or remove any other ``connection`` options in the - ``[database]`` section. - - * In the ``[token]`` section, configure the Fernet token provider: - - .. path /etc/keystone/keystone.conf - .. code-block:: ini - - [token] - # ... - provider = fernet - - .. end - -3. Populate the Identity service database: - - .. code-block:: console - - # su -s /bin/sh -c "keystone-manage db_sync" keystone - - .. end - -4. Initialize Fernet key repositories: - - .. code-block:: console - - # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone - # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone - - .. end - -5. Bootstrap the Identity service: - - .. code-block:: console - - # keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ - --bootstrap-admin-url http://controller:35357/v3/ \ - --bootstrap-internal-url http://controller:5000/v3/ \ - --bootstrap-public-url http://controller:5000/v3/ \ - --bootstrap-region-id RegionOne - - .. end - - Replace ``ADMIN_PASS`` with a suitable password for an administrative user. 
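Before moving on to the web server, an optional way to double-check the two ``keystone.conf`` edits made above is to read them back with Python's standard ``configparser`` module. The file path used here is the upstream default; on SUSE the same options may instead live under ``/etc/keystone/keystone.conf.d/`` as noted earlier.

.. code-block:: python

   # Optional sanity check of the [database] and [token] settings edited
   # above. The path is the upstream default; adjust it if the overrides
   # were placed in /etc/keystone/keystone.conf.d/ instead.
   import configparser

   conf = configparser.ConfigParser(interpolation=None)
   conf.read("/etc/keystone/keystone.conf")

   connection = conf.get("database", "connection", fallback="")
   provider = conf.get("token", "provider", fallback="")

   assert connection.startswith("mysql+pymysql://keystone:"), connection
   assert provider == "fernet", provider
   print("keystone.conf: database connection and Fernet provider look correct")

.. end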
- -Configure the Apache HTTP server --------------------------------- - - - - - -#. Edit the ``/etc/sysconfig/apache2`` file and configure the - ``APACHE_SERVERNAME`` option to reference the controller node: - - .. path /etc/sysconfig/apache2 - .. code-block:: shell - - APACHE_SERVERNAME="controller" - - .. end - -#. Create the ``/etc/apache2/conf.d/wsgi-keystone.conf`` file - with the following content: - - .. path /etc/apache2/conf.d/wsgi-keystone.conf - .. code-block:: apache - - Listen 5000 - Listen 35357 - - - WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP} - WSGIProcessGroup keystone-public - WSGIScriptAlias / /usr/bin/keystone-wsgi-public - WSGIApplicationGroup %{GLOBAL} - WSGIPassAuthorization On - ErrorLogFormat "%{cu}t %M" - ErrorLog /var/log/apache2/keystone.log - CustomLog /var/log/apache2/keystone_access.log combined - - - Require all granted - - - - - WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP} - WSGIProcessGroup keystone-admin - WSGIScriptAlias / /usr/bin/keystone-wsgi-admin - WSGIApplicationGroup %{GLOBAL} - WSGIPassAuthorization On - ErrorLogFormat "%{cu}t %M" - ErrorLog /var/log/apache2/keystone.log - CustomLog /var/log/apache2/keystone_access.log combined - - - Require all granted - - - - .. end - -#. Recursively change the ownership of the ``/etc/keystone`` directory: - - .. code-block:: console - - # chown -R keystone:keystone /etc/keystone - - .. end - - - -Finalize the installation -------------------------- - - - - -#. Start the Apache HTTP service and configure it to start when the system - boots: - - .. code-block:: console - - # systemctl enable apache2.service - # systemctl start apache2.service - - .. end - - -2. Configure the administrative account - - .. code-block:: console - - $ export OS_USERNAME=admin - $ export OS_PASSWORD=ADMIN_PASS - $ export OS_PROJECT_NAME=admin - $ export OS_USER_DOMAIN_NAME=Default - $ export OS_PROJECT_DOMAIN_NAME=Default - $ export OS_AUTH_URL=http://controller:35357/v3 - $ export OS_IDENTITY_API_VERSION=3 - - .. end - - Replace ``ADMIN_PASS`` with the password used in the - ``keystone-manage bootstrap`` command in `keystone-install-configure-obs`_. diff --git a/doc/install-guide/source/keystone-install-rdo.rst b/doc/install-guide/source/keystone-install-rdo.rst deleted file mode 100644 index b68f116e59..0000000000 --- a/doc/install-guide/source/keystone-install-rdo.rst +++ /dev/null @@ -1,203 +0,0 @@ -Install and configure -~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the OpenStack -Identity service, code-named keystone, on the controller node. For -scalability purposes, this configuration deploys Fernet tokens and -the Apache HTTP server to handle requests. - -Prerequisites -------------- - -Before you install and configure the Identity service, you must -create a database. - - - - -#. Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - $ mysql -u root -p - - .. end - - -2. Create the ``keystone`` database: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE keystone; - - .. end - -#. Grant proper access to the ``keystone`` database: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - - .. 
end - - Replace ``KEYSTONE_DBPASS`` with a suitable password. - -#. Exit the database access client. - -.. _keystone-install-configure-rdo: - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - -.. note:: - - This guide uses the Apache HTTP server with ``mod_wsgi`` to serve - Identity service requests on ports 5000 and 35357. By default, the - keystone service still listens on these ports. Therefore, this guide - manually disables the keystone service. - - - - - - -#. Run the following command to install the packages: - - .. code-block:: console - - # yum install openstack-keystone httpd mod_wsgi - - .. end - - - -2. Edit the ``/etc/keystone/keystone.conf`` file and complete the following - actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/keystone/keystone.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone - - .. end - - Replace ``KEYSTONE_DBPASS`` with the password you chose for the database. - - .. note:: - - Comment out or remove any other ``connection`` options in the - ``[database]`` section. - - * In the ``[token]`` section, configure the Fernet token provider: - - .. path /etc/keystone/keystone.conf - .. code-block:: ini - - [token] - # ... - provider = fernet - - .. end - -3. Populate the Identity service database: - - .. code-block:: console - - # su -s /bin/sh -c "keystone-manage db_sync" keystone - - .. end - -4. Initialize Fernet key repositories: - - .. code-block:: console - - # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone - # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone - - .. end - -5. Bootstrap the Identity service: - - .. code-block:: console - - # keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ - --bootstrap-admin-url http://controller:35357/v3/ \ - --bootstrap-internal-url http://controller:5000/v3/ \ - --bootstrap-public-url http://controller:5000/v3/ \ - --bootstrap-region-id RegionOne - - .. end - - Replace ``ADMIN_PASS`` with a suitable password for an administrative user. - -Configure the Apache HTTP server --------------------------------- - - -#. Edit the ``/etc/httpd/conf/httpd.conf`` file and configure the - ``ServerName`` option to reference the controller node: - - .. path /etc/httpd/conf/httpd - .. code-block:: apache - - ServerName controller - - .. end - -#. Create a link to the ``/usr/share/keystone/wsgi-keystone.conf`` file: - - .. code-block:: console - - # ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ - - .. end - - - - - - -Finalize the installation -------------------------- - - - -#. Start the Apache HTTP service and configure it to start when the system - boots: - - .. code-block:: console - - # systemctl enable httpd.service - # systemctl start httpd.service - - .. end - - - -2. Configure the administrative account - - .. code-block:: console - - $ export OS_USERNAME=admin - $ export OS_PASSWORD=ADMIN_PASS - $ export OS_PROJECT_NAME=admin - $ export OS_USER_DOMAIN_NAME=Default - $ export OS_PROJECT_DOMAIN_NAME=Default - $ export OS_AUTH_URL=http://controller:35357/v3 - $ export OS_IDENTITY_API_VERSION=3 - - .. end - - Replace ``ADMIN_PASS`` with the password used in the - ``keystone-manage bootstrap`` command in `keystone-install-configure-rdo`_. 
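The exports above configure the shell for the ``openstack`` client. For completeness, the following sketch does the equivalent from Python with the ``keystoneauth1`` library (the same library the client uses). ``ADMIN_PASS`` is the bootstrap password and must be replaced; hard-coding it in a script is only acceptable for a quick test.

.. code-block:: python

   # Request an admin token directly with keystoneauth1, mirroring the
   # environment variables exported above. Replace ADMIN_PASS; do not
   # keep real passwords in scripts beyond a quick test.
   from keystoneauth1.identity import v3
   from keystoneauth1 import session

   auth = v3.Password(
       auth_url="http://controller:35357/v3",
       username="admin",
       password="ADMIN_PASS",
       project_name="admin",
       user_domain_name="Default",
       project_domain_name="Default",
   )
   sess = session.Session(auth=auth)
   print(sess.get_token())  # prints a Fernet token if authentication succeeds

.. end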
diff --git a/doc/install-guide/source/keystone-install-ubuntu.rst b/doc/install-guide/source/keystone-install-ubuntu.rst deleted file mode 100644 index 5692a00683..0000000000 --- a/doc/install-guide/source/keystone-install-ubuntu.rst +++ /dev/null @@ -1,193 +0,0 @@ -Install and configure -~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the OpenStack -Identity service, code-named keystone, on the controller node. For -scalability purposes, this configuration deploys Fernet tokens and -the Apache HTTP server to handle requests. - -Prerequisites -------------- - -Before you install and configure the Identity service, you must -create a database. - - - -#. Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - # mysql - - .. end - - - -2. Create the ``keystone`` database: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE keystone; - - .. end - -#. Grant proper access to the ``keystone`` database: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \ - IDENTIFIED BY 'KEYSTONE_DBPASS'; - - .. end - - Replace ``KEYSTONE_DBPASS`` with a suitable password. - -#. Exit the database access client. - -.. _keystone-install-configure-ubuntu: - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - - - -.. note:: - - This guide uses the Apache HTTP server with ``mod_wsgi`` to serve - Identity service requests on ports 5000 and 35357. By default, the - keystone service still listens on these ports. The package handles - all of the Apache configuration for you (including the activation of - the ``mod_wsgi`` apache2 module and keystone configuration in Apache). - -#. Run the following command to install the packages: - - .. code-block:: console - - # apt install keystone - - .. end - - - - - -2. Edit the ``/etc/keystone/keystone.conf`` file and complete the following - actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/keystone/keystone.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone - - .. end - - Replace ``KEYSTONE_DBPASS`` with the password you chose for the database. - - .. note:: - - Comment out or remove any other ``connection`` options in the - ``[database]`` section. - - * In the ``[token]`` section, configure the Fernet token provider: - - .. path /etc/keystone/keystone.conf - .. code-block:: ini - - [token] - # ... - provider = fernet - - .. end - -3. Populate the Identity service database: - - .. code-block:: console - - # su -s /bin/sh -c "keystone-manage db_sync" keystone - - .. end - -4. Initialize Fernet key repositories: - - .. code-block:: console - - # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone - # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone - - .. end - -5. Bootstrap the Identity service: - - .. code-block:: console - - # keystone-manage bootstrap --bootstrap-password ADMIN_PASS \ - --bootstrap-admin-url http://controller:35357/v3/ \ - --bootstrap-internal-url http://controller:5000/v3/ \ - --bootstrap-public-url http://controller:5000/v3/ \ - --bootstrap-region-id RegionOne - - .. 
end - - Replace ``ADMIN_PASS`` with a suitable password for an administrative user. - -Configure the Apache HTTP server --------------------------------- - - - -#. Edit the ``/etc/apache2/apache2.conf`` file and configure the - ``ServerName`` option to reference the controller node: - - .. path /etc/apache2/apache2.conf - .. code-block:: apache - - ServerName controller - - .. end - - - - - -Finalize the installation -------------------------- - - -#. Restart the Apache service: - - .. code-block:: console - - # service apache2 restart - - .. end - - - - -2. Configure the administrative account - - .. code-block:: console - - $ export OS_USERNAME=admin - $ export OS_PASSWORD=ADMIN_PASS - $ export OS_PROJECT_NAME=admin - $ export OS_USER_DOMAIN_NAME=Default - $ export OS_PROJECT_DOMAIN_NAME=Default - $ export OS_AUTH_URL=http://controller:35357/v3 - $ export OS_IDENTITY_API_VERSION=3 - - .. end - - Replace ``ADMIN_PASS`` with the password used in the - ``keystone-manage bootstrap`` command in `keystone-install-configure-ubuntu`_. diff --git a/doc/install-guide/source/keystone-install.rst b/doc/install-guide/source/keystone-install.rst deleted file mode 100644 index ad47da9089..0000000000 --- a/doc/install-guide/source/keystone-install.rst +++ /dev/null @@ -1,14 +0,0 @@ -.. _keystone-install: - -Install and configure -~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the OpenStack -Identity service, code-named keystone, on the controller node. For -scalability purposes, this configuration deploys Fernet tokens and -the Apache HTTP server to handle requests. - -.. toctree:: - :glob: - - keystone-install-* diff --git a/doc/install-guide/source/keystone-openrc.rst b/doc/install-guide/source/keystone-openrc.rst deleted file mode 100644 index da57c1884d..0000000000 --- a/doc/install-guide/source/keystone-openrc.rst +++ /dev/null @@ -1,96 +0,0 @@ -Create OpenStack client environment scripts -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The previous section used a combination of environment variables and -command options to interact with the Identity service via the -``openstack`` client. To increase efficiency of client operations, -OpenStack supports simple client environment scripts also known as -OpenRC files. These scripts typically contain common options for -all clients, but also support unique options. For more information, see the -`OpenStack End User Guide `_. - -Creating the scripts --------------------- - -Create client environment scripts for the ``admin`` and ``demo`` -projects and users. Future portions of this guide reference these -scripts to load appropriate credentials for client operations. - -#. Create and edit the ``admin-openrc`` file and add the following content: - - .. note:: - - The OpenStack client also supports using a ``clouds.yaml`` file. - For more information, see - the `os-client-config `_. - - .. code-block:: bash - - export OS_PROJECT_DOMAIN_NAME=Default - export OS_USER_DOMAIN_NAME=Default - export OS_PROJECT_NAME=admin - export OS_USERNAME=admin - export OS_PASSWORD=ADMIN_PASS - export OS_AUTH_URL=http://controller:35357/v3 - export OS_IDENTITY_API_VERSION=3 - export OS_IMAGE_API_VERSION=2 - - .. end - - Replace ``ADMIN_PASS`` with the password you chose - for the ``admin`` user in the Identity service. - -#. Create and edit the ``demo-openrc`` file and add the following content: - - .. 
code-block:: bash - - export OS_PROJECT_DOMAIN_NAME=Default - export OS_USER_DOMAIN_NAME=Default - export OS_PROJECT_NAME=demo - export OS_USERNAME=demo - export OS_PASSWORD=DEMO_PASS - export OS_AUTH_URL=http://controller:5000/v3 - export OS_IDENTITY_API_VERSION=3 - export OS_IMAGE_API_VERSION=2 - - .. end - - Replace ``DEMO_PASS`` with the password you chose - for the ``demo`` user in the Identity service. - -Using the scripts ------------------ - -To run clients as a specific project and user, you can simply load -the associated client environment script prior to running them. -For example: - -#. Load the ``admin-openrc`` file to populate - environment variables with the location of the Identity service - and the ``admin`` project and user credentials: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. Request an authentication token: - - .. code-block:: console - - $ openstack token issue - - +------------+-----------------------------------------------------------------+ - | Field | Value | - +------------+-----------------------------------------------------------------+ - | expires | 2016-02-12T20:44:35.659723Z | - | id | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h3fIagi5NoEmh21U72SrRv2trl | - | | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e | - | | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E | - | project_id | 343d245e850143a096806dfaefa9afdc | - | user_id | ac3377633149401296f6c0d92d79dc16 | - +------------+-----------------------------------------------------------------+ - - .. end diff --git a/doc/install-guide/source/keystone-users.rst b/doc/install-guide/source/keystone-users.rst deleted file mode 100644 index d0fa09a9f4..0000000000 --- a/doc/install-guide/source/keystone-users.rst +++ /dev/null @@ -1,114 +0,0 @@ -Create a domain, projects, users, and roles -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The Identity service provides authentication services for each OpenStack -service. The authentication service uses a combination of :term:`domains -`, :term:`projects`, :term:`users`, and -:term:`roles`. - -#. This guide uses a service project that contains a unique user for each - service that you add to your environment. Create the ``service`` - project: - - .. code-block:: console - - $ openstack project create --domain default \ - --description "Service Project" service - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | Service Project | - | domain_id | default | - | enabled | True | - | id | 24ac7f19cd944f4cba1d77469b2a73ed | - | is_domain | False | - | name | service | - | parent_id | default | - +-------------+----------------------------------+ - - .. end - -#. Regular (non-admin) tasks should use an unprivileged project and user. - As an example, this guide creates the ``demo`` project and user. - - * Create the ``demo`` project: - - .. code-block:: console - - $ openstack project create --domain default \ - --description "Demo Project" demo - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | Demo Project | - | domain_id | default | - | enabled | True | - | id | 231ad6e7ebba47d6a1e57e1cc07ae446 | - | is_domain | False | - | name | demo | - | parent_id | default | - +-------------+----------------------------------+ - - .. end - - .. note:: - - Do not repeat this step when creating additional users for this - project. 
- - * Create the ``demo`` user: - - .. code-block:: console - - $ openstack user create --domain default \ - --password-prompt demo - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | aeda23aa78f44e859900e22c24817832 | - | name | demo | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Create the ``user`` role: - - .. code-block:: console - - $ openstack role create user - - +-----------+----------------------------------+ - | Field | Value | - +-----------+----------------------------------+ - | domain_id | None | - | id | 997ce8d05fc143ac97d83fdfb5998552 | - | name | user | - +-----------+----------------------------------+ - - .. end - - * Add the ``user`` role to the ``demo`` project and user: - - .. code-block:: console - - $ openstack role add --project demo --user demo user - - .. end - - .. note:: - - This command provides no output. - -.. note:: - - You can repeat this procedure to create additional projects and - users. diff --git a/doc/install-guide/source/keystone-verify-debian.rst b/doc/install-guide/source/keystone-verify-debian.rst deleted file mode 100644 index 545e18f4c1..0000000000 --- a/doc/install-guide/source/keystone-verify-debian.rst +++ /dev/null @@ -1,74 +0,0 @@ -Verify operation -~~~~~~~~~~~~~~~~ - -Verify operation of the Identity service before installing other -services. - -.. note:: - - Perform these commands on the controller node. - - - -2. Unset the temporary ``OS_AUTH_URL`` and ``OS_PASSWORD`` - environment variable: - - .. code-block:: console - - $ unset OS_AUTH_URL OS_PASSWORD - - .. end - -3. As the ``admin`` user, request an authentication token: - - .. code-block:: console - - $ openstack --os-auth-url http://controller:35357/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name admin --os-username admin token issue - - Password: - +------------+-----------------------------------------------------------------+ - | Field | Value | - +------------+-----------------------------------------------------------------+ - | expires | 2016-02-12T20:14:07.056119Z | - | id | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv | - | | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 | - | | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws | - | project_id | 343d245e850143a096806dfaefa9afdc | - | user_id | ac3377633149401296f6c0d92d79dc16 | - +------------+-----------------------------------------------------------------+ - - .. end - - .. note:: - - This command uses the password for the ``admin`` user. - -4. As the ``demo`` user, request an authentication token: - - .. 
code-block:: console - - $ openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name demo --os-username demo token issue - - Password: - +------------+-----------------------------------------------------------------+ - | Field | Value | - +------------+-----------------------------------------------------------------+ - | expires | 2016-02-12T20:15:39.014479Z | - | id | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW | - | | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ | - | | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U | - | project_id | ed0b60bf607743088218b0a533d5943f | - | user_id | 58126687cbcc4888bfa9ab73a2256f27 | - +------------+-----------------------------------------------------------------+ - - .. end - - .. note:: - - This command uses the password for the ``demo`` - user and API port 5000 which only allows regular (non-admin) - access to the Identity service API. diff --git a/doc/install-guide/source/keystone-verify-obs.rst b/doc/install-guide/source/keystone-verify-obs.rst deleted file mode 100644 index 8989a99c2b..0000000000 --- a/doc/install-guide/source/keystone-verify-obs.rst +++ /dev/null @@ -1,83 +0,0 @@ -Verify operation -~~~~~~~~~~~~~~~~ - -Verify operation of the Identity service before installing other -services. - -.. note:: - - Perform these commands on the controller node. - - -#. For security reasons, disable the temporary authentication - token mechanism: - - Edit the ``/etc/keystone/keystone-paste.ini`` - file and remove ``admin_token_auth`` from the - ``[pipeline:public_api]``, ``[pipeline:admin_api]``, - and ``[pipeline:api_v3]`` sections. - - - -2. Unset the temporary ``OS_AUTH_URL`` and ``OS_PASSWORD`` - environment variable: - - .. code-block:: console - - $ unset OS_AUTH_URL OS_PASSWORD - - .. end - -3. As the ``admin`` user, request an authentication token: - - .. code-block:: console - - $ openstack --os-auth-url http://controller:35357/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name admin --os-username admin token issue - - Password: - +------------+-----------------------------------------------------------------+ - | Field | Value | - +------------+-----------------------------------------------------------------+ - | expires | 2016-02-12T20:14:07.056119Z | - | id | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv | - | | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 | - | | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws | - | project_id | 343d245e850143a096806dfaefa9afdc | - | user_id | ac3377633149401296f6c0d92d79dc16 | - +------------+-----------------------------------------------------------------+ - - .. end - - .. note:: - - This command uses the password for the ``admin`` user. - -4. As the ``demo`` user, request an authentication token: - - .. 
code-block:: console - - $ openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name demo --os-username demo token issue - - Password: - +------------+-----------------------------------------------------------------+ - | Field | Value | - +------------+-----------------------------------------------------------------+ - | expires | 2016-02-12T20:15:39.014479Z | - | id | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW | - | | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ | - | | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U | - | project_id | ed0b60bf607743088218b0a533d5943f | - | user_id | 58126687cbcc4888bfa9ab73a2256f27 | - +------------+-----------------------------------------------------------------+ - - .. end - - .. note:: - - This command uses the password for the ``demo`` - user and API port 5000 which only allows regular (non-admin) - access to the Identity service API. diff --git a/doc/install-guide/source/keystone-verify-rdo.rst b/doc/install-guide/source/keystone-verify-rdo.rst deleted file mode 100644 index 943e13ebbd..0000000000 --- a/doc/install-guide/source/keystone-verify-rdo.rst +++ /dev/null @@ -1,83 +0,0 @@ -Verify operation -~~~~~~~~~~~~~~~~ - -Verify operation of the Identity service before installing other -services. - -.. note:: - - Perform these commands on the controller node. - - - -#. For security reasons, disable the temporary authentication - token mechanism: - - Edit the ``/etc/keystone/keystone-paste.ini`` - file and remove ``admin_token_auth`` from the - ``[pipeline:public_api]``, ``[pipeline:admin_api]``, - and ``[pipeline:api_v3]`` sections. - - -2. Unset the temporary ``OS_AUTH_URL`` and ``OS_PASSWORD`` - environment variable: - - .. code-block:: console - - $ unset OS_AUTH_URL OS_PASSWORD - - .. end - -3. As the ``admin`` user, request an authentication token: - - .. code-block:: console - - $ openstack --os-auth-url http://controller:35357/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name admin --os-username admin token issue - - Password: - +------------+-----------------------------------------------------------------+ - | Field | Value | - +------------+-----------------------------------------------------------------+ - | expires | 2016-02-12T20:14:07.056119Z | - | id | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv | - | | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 | - | | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws | - | project_id | 343d245e850143a096806dfaefa9afdc | - | user_id | ac3377633149401296f6c0d92d79dc16 | - +------------+-----------------------------------------------------------------+ - - .. end - - .. note:: - - This command uses the password for the ``admin`` user. - -4. As the ``demo`` user, request an authentication token: - - .. 
code-block:: console - - $ openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name demo --os-username demo token issue - - Password: - +------------+-----------------------------------------------------------------+ - | Field | Value | - +------------+-----------------------------------------------------------------+ - | expires | 2016-02-12T20:15:39.014479Z | - | id | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW | - | | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ | - | | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U | - | project_id | ed0b60bf607743088218b0a533d5943f | - | user_id | 58126687cbcc4888bfa9ab73a2256f27 | - +------------+-----------------------------------------------------------------+ - - .. end - - .. note:: - - This command uses the password for the ``demo`` - user and API port 5000 which only allows regular (non-admin) - access to the Identity service API. diff --git a/doc/install-guide/source/keystone-verify-ubuntu.rst b/doc/install-guide/source/keystone-verify-ubuntu.rst deleted file mode 100644 index 8989a99c2b..0000000000 --- a/doc/install-guide/source/keystone-verify-ubuntu.rst +++ /dev/null @@ -1,83 +0,0 @@ -Verify operation -~~~~~~~~~~~~~~~~ - -Verify operation of the Identity service before installing other -services. - -.. note:: - - Perform these commands on the controller node. - - -#. For security reasons, disable the temporary authentication - token mechanism: - - Edit the ``/etc/keystone/keystone-paste.ini`` - file and remove ``admin_token_auth`` from the - ``[pipeline:public_api]``, ``[pipeline:admin_api]``, - and ``[pipeline:api_v3]`` sections. - - - -2. Unset the temporary ``OS_AUTH_URL`` and ``OS_PASSWORD`` - environment variable: - - .. code-block:: console - - $ unset OS_AUTH_URL OS_PASSWORD - - .. end - -3. As the ``admin`` user, request an authentication token: - - .. code-block:: console - - $ openstack --os-auth-url http://controller:35357/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name admin --os-username admin token issue - - Password: - +------------+-----------------------------------------------------------------+ - | Field | Value | - +------------+-----------------------------------------------------------------+ - | expires | 2016-02-12T20:14:07.056119Z | - | id | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv | - | | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 | - | | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws | - | project_id | 343d245e850143a096806dfaefa9afdc | - | user_id | ac3377633149401296f6c0d92d79dc16 | - +------------+-----------------------------------------------------------------+ - - .. end - - .. note:: - - This command uses the password for the ``admin`` user. - -4. As the ``demo`` user, request an authentication token: - - .. 
code-block:: console - - $ openstack --os-auth-url http://controller:5000/v3 \ - --os-project-domain-name Default --os-user-domain-name Default \ - --os-project-name demo --os-username demo token issue - - Password: - +------------+-----------------------------------------------------------------+ - | Field | Value | - +------------+-----------------------------------------------------------------+ - | expires | 2016-02-12T20:15:39.014479Z | - | id | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW | - | | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ | - | | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U | - | project_id | ed0b60bf607743088218b0a533d5943f | - | user_id | 58126687cbcc4888bfa9ab73a2256f27 | - +------------+-----------------------------------------------------------------+ - - .. end - - .. note:: - - This command uses the password for the ``demo`` - user and API port 5000 which only allows regular (non-admin) - access to the Identity service API. diff --git a/doc/install-guide/source/keystone-verify.rst b/doc/install-guide/source/keystone-verify.rst deleted file mode 100644 index f81466ab0a..0000000000 --- a/doc/install-guide/source/keystone-verify.rst +++ /dev/null @@ -1,14 +0,0 @@ -Verify operation -~~~~~~~~~~~~~~~~ - -Verify operation of the Identity service before installing other -services. - -.. note:: - - Perform these commands on the controller node. - -.. toctree:: - :glob: - - keystone-verify-* diff --git a/doc/install-guide/source/keystone.rst b/doc/install-guide/source/keystone.rst deleted file mode 100644 index e76dd46d0c..0000000000 --- a/doc/install-guide/source/keystone.rst +++ /dev/null @@ -1,11 +0,0 @@ -================ -Identity service -================ - -.. toctree:: - - common/get-started-identity.rst - keystone-install.rst - keystone-users.rst - keystone-verify.rst - keystone-openrc.rst diff --git a/doc/install-guide/source/launch-instance.rst b/doc/install-guide/source/launch-instance.rst index 49a7da8df0..e68cbfcdb8 100644 --- a/doc/install-guide/source/launch-instance.rst +++ b/doc/install-guide/source/launch-instance.rst @@ -25,7 +25,7 @@ Create virtual networks ----------------------- Create virtual networks for the networking option that you chose -in :ref:`networking`. If you chose option 1, create only the provider +when configuring Neutron. If you chose option 1, create only the provider network. If you chose option 2, create the provider and self-service networks. diff --git a/doc/install-guide/source/neutron-compute-install-debian.rst b/doc/install-guide/source/neutron-compute-install-debian.rst deleted file mode 100644 index f7b91b28bc..0000000000 --- a/doc/install-guide/source/neutron-compute-install-debian.rst +++ /dev/null @@ -1,146 +0,0 @@ -Install and configure compute node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The compute node handles connectivity and :term:`security groups ` for instances. - - -Install the components ----------------------- - -.. code-block:: console - - # apt install neutron-linuxbridge-agent - -.. end - - - - -Configure the common component ------------------------------- - -The Networking common component configuration includes the -authentication mechanism, message queue, and plug-in. - -.. 
include:: shared/note_configuration_vary_by_distribution.rst - -* Edit the ``/etc/neutron/neutron.conf`` file and complete the following - actions: - - * In the ``[database]`` section, comment out any ``connection`` options - because compute nodes do not directly access the database. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` - account in RabbitMQ. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure - Identity service access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - - -Configure networking options ----------------------------- - -Choose the same networking option that you chose for the controller node to -configure services specific to it. Afterwards, return here and proceed to -:ref:`neutron-compute-compute-debian`. - -.. toctree:: - :maxdepth: 1 - - neutron-compute-install-option1.rst - neutron-compute-install-option2.rst - -.. _neutron-compute-compute-debian: - -Configure the Compute service to use the Networking service ------------------------------------------------------------ - -* Edit the ``/etc/nova/nova.conf`` file and complete the following actions: - - * In the ``[neutron]`` section, configure access parameters: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [neutron] - # ... - url = http://controller:9696 - auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - -Finalize installation ---------------------- - - - - -#. Restart the Compute service: - - .. code-block:: console - - # service nova-compute restart - - .. end - -#. Restart the Linux bridge agent: - - .. code-block:: console - - # service neutron-linuxbridge-agent restart - - .. end - diff --git a/doc/install-guide/source/neutron-compute-install-obs.rst b/doc/install-guide/source/neutron-compute-install-obs.rst deleted file mode 100644 index 2cc9b8ab96..0000000000 --- a/doc/install-guide/source/neutron-compute-install-obs.rst +++ /dev/null @@ -1,161 +0,0 @@ -Install and configure compute node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The compute node handles connectivity and :term:`security groups ` for instances. - - - - -Install the components ----------------------- - -.. code-block:: console - - # zypper install --no-recommends \ - openstack-neutron-linuxbridge-agent bridge-utils - -.. 
end - - -Configure the common component ------------------------------- - -The Networking common component configuration includes the -authentication mechanism, message queue, and plug-in. - -.. include:: shared/note_configuration_vary_by_distribution.rst - -* Edit the ``/etc/neutron/neutron.conf`` file and complete the following - actions: - - * In the ``[database]`` section, comment out any ``connection`` options - because compute nodes do not directly access the database. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` - account in RabbitMQ. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure - Identity service access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - - -Configure networking options ----------------------------- - -Choose the same networking option that you chose for the controller node to -configure services specific to it. Afterwards, return here and proceed to -:ref:`neutron-compute-compute-obs`. - -.. toctree:: - :maxdepth: 1 - - neutron-compute-install-option1.rst - neutron-compute-install-option2.rst - -.. _neutron-compute-compute-obs: - -Configure the Compute service to use the Networking service ------------------------------------------------------------ - -* Edit the ``/etc/nova/nova.conf`` file and complete the following actions: - - * In the ``[neutron]`` section, configure access parameters: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [neutron] - # ... - url = http://controller:9696 - auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - -Finalize installation ---------------------- - - - -#. The Networking service initialization scripts expect the variable - ``NEUTRON_PLUGIN_CONF`` in the ``/etc/sysconfig/neutron`` file to - reference the ML2 plug-in configuration file. Ensure that the - ``/etc/sysconfig/neutron`` file contains the following: - - .. path /etc/sysconfig/neutron - .. code-block:: ini - - NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini" - - .. end - -#. Restart the Compute service: - - .. code-block:: console - - # systemctl restart openstack-nova-compute.service - - .. end - -#. Start the Linux Bridge agent and configure it to start when the - system boots: - - .. code-block:: console - - # systemctl enable openstack-neutron-linuxbridge-agent.service - # systemctl start openstack-neutron-linuxbridge-agent.service - - .. 
end - - diff --git a/doc/install-guide/source/neutron-compute-install-option1.rst b/doc/install-guide/source/neutron-compute-install-option1.rst deleted file mode 100644 index 068f083120..0000000000 --- a/doc/install-guide/source/neutron-compute-install-option1.rst +++ /dev/null @@ -1,53 +0,0 @@ -Networking Option 1: Provider networks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Configure the Networking components on a *compute* node. - -Configure the Linux bridge agent --------------------------------- - -The Linux bridge agent builds layer-2 (bridging and switching) virtual -networking infrastructure for instances and handles security groups. - -* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and - complete the following actions: - - * In the ``[linux_bridge]`` section, map the provider virtual network to the - provider physical network interface: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - .. end - - Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying - provider physical network interface. See :ref:`environment-networking` - for more information. - - * In the ``[vxlan]`` section, disable VXLAN overlay networks: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [vxlan] - enable_vxlan = false - - .. end - - * In the ``[securitygroup]`` section, enable security groups and - configure the Linux bridge :term:`iptables` firewall driver: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - - .. end - -Return to *Networking compute node configuration* diff --git a/doc/install-guide/source/neutron-compute-install-option2.rst b/doc/install-guide/source/neutron-compute-install-option2.rst deleted file mode 100644 index 8d711b10de..0000000000 --- a/doc/install-guide/source/neutron-compute-install-option2.rst +++ /dev/null @@ -1,64 +0,0 @@ -Networking Option 2: Self-service networks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Configure the Networking components on a *compute* node. - -Configure the Linux bridge agent --------------------------------- - -The Linux bridge agent builds layer-2 (bridging and switching) virtual -networking infrastructure for instances and handles security groups. - -* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and - complete the following actions: - - * In the ``[linux_bridge]`` section, map the provider virtual network to the - provider physical network interface: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - .. end - - Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying - provider physical network interface. See :ref:`environment-networking` - for more information. - - * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the - IP address of the physical network interface that handles overlay - networks, and enable layer-2 population: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [vxlan] - enable_vxlan = true - local_ip = OVERLAY_INTERFACE_IP_ADDRESS - l2_population = true - - .. 
end - - Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the - underlying physical network interface that handles overlay networks. The - example architecture uses the management interface to tunnel traffic to - the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with - the management IP address of the compute node. See - :ref:`environment-networking` for more information. - - * In the ``[securitygroup]`` section, enable security groups and - configure the Linux bridge :term:`iptables` firewall driver: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - - .. end - -Return to *Networking compute node configuration*. diff --git a/doc/install-guide/source/neutron-compute-install-rdo.rst b/doc/install-guide/source/neutron-compute-install-rdo.rst deleted file mode 100644 index b11ee9b2ee..0000000000 --- a/doc/install-guide/source/neutron-compute-install-rdo.rst +++ /dev/null @@ -1,164 +0,0 @@ -Install and configure compute node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The compute node handles connectivity and :term:`security groups ` for instances. - - - -Install the components ----------------------- - -.. todo: - - https://bugzilla.redhat.com/show_bug.cgi?id=1334626 - -.. code-block:: console - - # yum install openstack-neutron-linuxbridge ebtables ipset - -.. end - - - -Configure the common component ------------------------------- - -The Networking common component configuration includes the -authentication mechanism, message queue, and plug-in. - -.. include:: shared/note_configuration_vary_by_distribution.rst - -* Edit the ``/etc/neutron/neutron.conf`` file and complete the following - actions: - - * In the ``[database]`` section, comment out any ``connection`` options - because compute nodes do not directly access the database. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` - account in RabbitMQ. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure - Identity service access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - -* In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/lib/neutron/tmp - - .. end - - - -Configure networking options ----------------------------- - -Choose the same networking option that you chose for the controller node to -configure services specific to it. Afterwards, return here and proceed to -:ref:`neutron-compute-compute-rdo`. - -.. 
toctree:: - :maxdepth: 1 - - neutron-compute-install-option1.rst - neutron-compute-install-option2.rst - -.. _neutron-compute-compute-rdo: - -Configure the Compute service to use the Networking service ------------------------------------------------------------ - -* Edit the ``/etc/nova/nova.conf`` file and complete the following actions: - - * In the ``[neutron]`` section, configure access parameters: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [neutron] - # ... - url = http://controller:9696 - auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - -Finalize installation ---------------------- - - -#. Restart the Compute service: - - .. code-block:: console - - # systemctl restart openstack-nova-compute.service - - .. end - -#. Start the Linux bridge agent and configure it to start when the - system boots: - - .. code-block:: console - - # systemctl enable neutron-linuxbridge-agent.service - # systemctl start neutron-linuxbridge-agent.service - - .. end - - - diff --git a/doc/install-guide/source/neutron-compute-install-ubuntu.rst b/doc/install-guide/source/neutron-compute-install-ubuntu.rst deleted file mode 100644 index 0dec38d2e5..0000000000 --- a/doc/install-guide/source/neutron-compute-install-ubuntu.rst +++ /dev/null @@ -1,146 +0,0 @@ -Install and configure compute node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -The compute node handles connectivity and :term:`security groups ` for instances. - - -Install the components ----------------------- - -.. code-block:: console - - # apt install neutron-linuxbridge-agent - -.. end - - - - -Configure the common component ------------------------------- - -The Networking common component configuration includes the -authentication mechanism, message queue, and plug-in. - -.. include:: shared/note_configuration_vary_by_distribution.rst - -* Edit the ``/etc/neutron/neutron.conf`` file and complete the following - actions: - - * In the ``[database]`` section, comment out any ``connection`` options - because compute nodes do not directly access the database. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the ``openstack`` - account in RabbitMQ. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure - Identity service access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. 
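
The RDO instructions earlier on this page also set a lock path in the
``[oslo_concurrency]`` section of ``/etc/neutron/neutron.conf``. The option is
generic oslo configuration rather than distribution specific, so if you want
the equivalent setting on Ubuntu, a minimal sketch follows; the path shown is
the value used in the RDO steps and should be treated as an assumption here:

.. path /etc/neutron/neutron.conf
.. code-block:: ini

   [oslo_concurrency]
   # ...
   # Directory for lock files used by the Networking agents (same value as in
   # the RDO steps above).
   lock_path = /var/lib/neutron/tmp

.. end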
- - - -Configure networking options ----------------------------- - -Choose the same networking option that you chose for the controller node to -configure services specific to it. Afterwards, return here and proceed to -:ref:`neutron-compute-compute-ubuntu`. - -.. toctree:: - :maxdepth: 1 - - neutron-compute-install-option1.rst - neutron-compute-install-option2.rst - -.. _neutron-compute-compute-ubuntu: - -Configure the Compute service to use the Networking service ------------------------------------------------------------ - -* Edit the ``/etc/nova/nova.conf`` file and complete the following actions: - - * In the ``[neutron]`` section, configure access parameters: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [neutron] - # ... - url = http://controller:9696 - auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - -Finalize installation ---------------------- - - - - -#. Restart the Compute service: - - .. code-block:: console - - # service nova-compute restart - - .. end - -#. Restart the Linux bridge agent: - - .. code-block:: console - - # service neutron-linuxbridge-agent restart - - .. end - diff --git a/doc/install-guide/source/neutron-compute-install.rst b/doc/install-guide/source/neutron-compute-install.rst deleted file mode 100644 index a33a92bb43..0000000000 --- a/doc/install-guide/source/neutron-compute-install.rst +++ /dev/null @@ -1,9 +0,0 @@ -Install and configure compute node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. toctree:: - - neutron-compute-install-debian - neutron-compute-install-obs - neutron-compute-install-rdo - neutron-compute-install-ubuntu diff --git a/doc/install-guide/source/neutron-concepts.rst b/doc/install-guide/source/neutron-concepts.rst deleted file mode 100644 index 950e5dc718..0000000000 --- a/doc/install-guide/source/neutron-concepts.rst +++ /dev/null @@ -1,54 +0,0 @@ -Networking (neutron) concepts -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -OpenStack Networking (neutron) manages all networking facets for the -Virtual Networking Infrastructure (VNI) and the access layer aspects -of the Physical Networking Infrastructure (PNI) in your OpenStack -environment. OpenStack Networking enables projects to create advanced -virtual network topologies which may include services such as a -:term:`firewall`, a :term:`load balancer`, and a -:term:`virtual private network (VPN)`. - -Networking provides networks, subnets, and routers as object abstractions. -Each abstraction has functionality that mimics its physical counterpart: -networks contain subnets, and routers route traffic between different -subnets and networks. - -Any given Networking set up has at least one external network. Unlike -the other networks, the external network is not merely a virtually -defined network. Instead, it represents a view into a slice of the -physical, external network accessible outside the OpenStack -installation. IP addresses on the external network are accessible by -anybody physically on the outside network. - -In addition to external networks, any Networking set up has one or more -internal networks. These software-defined networks connect directly to -the VMs. 
Only the VMs on any given internal network, or those on subnets -connected through interfaces to a similar router, can access VMs connected -to that network directly. - -For the outside network to access VMs, and vice versa, routers between -the networks are needed. Each router has one gateway that is connected -to an external network and one or more interfaces connected to internal -networks. Like a physical router, subnets can access machines on other -subnets that are connected to the same router, and machines can access the -outside network through the gateway for the router. - -Additionally, you can allocate IP addresses on external networks to -ports on the internal network. Whenever something is connected to a -subnet, that connection is called a port. You can associate external -network IP addresses with ports to VMs. This way, entities on the -outside network can access VMs. - -Networking also supports *security groups*. Security groups enable -administrators to define firewall rules in groups. A VM can belong to -one or more security groups, and Networking applies the rules in those -security groups to block or unblock ports, port ranges, or traffic types -for that VM. - -Each plug-in that Networking uses has its own concepts. While not vital -to operating the VNI and OpenStack environment, understanding these -concepts can help you set up Networking. All Networking installations -use a core plug-in and a security group plug-in (or just the No-Op -security group plug-in). Additionally, Firewall-as-a-Service (FWaaS) and -Load-Balancer-as-a-Service (LBaaS) plug-ins are available. diff --git a/doc/install-guide/source/neutron-controller-install-debian.rst b/doc/install-guide/source/neutron-controller-install-debian.rst deleted file mode 100644 index b44094c1f6..0000000000 --- a/doc/install-guide/source/neutron-controller-install-debian.rst +++ /dev/null @@ -1,314 +0,0 @@ -Install and configure controller node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Prerequisites -------------- - -Before you configure the OpenStack Networking (neutron) service, you -must create a database, service credentials, and API endpoints. - -#. To create the database, complete these steps: - - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - $ mysql -u root -p - - .. end - - - * Create the ``neutron`` database: - - .. code-block:: console - - MariaDB [(none)] CREATE DATABASE neutron; - - .. end - - * Grant proper access to the ``neutron`` database, replacing - ``NEUTRON_DBPASS`` with a suitable password: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - - .. end - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to admin-only CLI - commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. To create the service credentials, complete these steps: - - * Create the ``neutron`` user: - - .. 
code-block:: console - - $ openstack user create --domain default --password-prompt neutron - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | fdb0f541e28141719b6a43c8944bf1fb | - | name | neutron | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``neutron`` user: - - .. code-block:: console - - $ openstack role add --project service --user neutron admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``neutron`` service entity: - - .. code-block:: console - - $ openstack service create --name neutron \ - --description "OpenStack Networking" network - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Networking | - | enabled | True | - | id | f71529314dab4a4d8eca427e701d209e | - | name | neutron | - | type | network | - +-------------+----------------------------------+ - - .. end - -#. Create the Networking service API endpoints: - - .. code-block:: console - - $ openstack endpoint create --region RegionOne \ - network public http://controller:9696 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 85d80a6d02fc4b7683f611d7fc1493a3 | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | f71529314dab4a4d8eca427e701d209e | - | service_name | neutron | - | service_type | network | - | url | http://controller:9696 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - network internal http://controller:9696 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 09753b537ac74422a68d2d791cf3714f | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | f71529314dab4a4d8eca427e701d209e | - | service_name | neutron | - | service_type | network | - | url | http://controller:9696 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - network admin http://controller:9696 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 1ee14289c9374dffb5db92a5c112fc4e | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | f71529314dab4a4d8eca427e701d209e | - | service_name | neutron | - | service_type | network | - | url | http://controller:9696 | - +--------------+----------------------------------+ - - .. end - -Configure networking options ----------------------------- - -You can deploy the Networking service using one of two architectures -represented by options 1 and 2. - -Option 1 deploys the simplest possible architecture that only supports -attaching instances to provider (external) networks. No self-service (private) -networks, routers, or floating IP addresses. Only the ``admin`` or other -privileged user can manage provider networks. 
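
As an illustration of what option 1 enables, the ``admin`` user can later
create the provider network with commands along these lines; the flat type and
the ``provider`` physical network name mirror the ML2 settings configured
below, and the actual network creation happens later in the guide when
launching an instance, so treat this as an example rather than a required
step here:

.. code-block:: console

   $ openstack network create --share --external \
     --provider-physical-network provider \
     --provider-network-type flat provider

.. end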
- -Option 2 augments option 1 with layer-3 services that support attaching -instances to self-service networks. The ``demo`` or other unprivileged -user can manage self-service networks including routers that provide -connectivity between self-service and provider networks. Additionally, -floating IP addresses provide connectivity to instances using self-service -networks from external networks such as the Internet. - -Self-service networks typically use overlay networks. Overlay network -protocols such as VXLAN include additional headers that increase overhead -and decrease space available for the payload or user data. Without knowledge -of the virtual network infrastructure, instances attempt to send packets -using the default Ethernet :term:`maximum transmission unit (MTU)` of 1500 -bytes. The Networking service automatically provides the correct MTU value -to instances via DHCP. However, some cloud images do not use DHCP or ignore -the DHCP MTU option and require configuration using metadata or a script. - -.. note:: - - Option 2 also supports attaching instances to provider networks. - -Choose one of the following networking options to configure services -specific to it. Afterwards, return here and proceed to -:ref:`neutron-controller-metadata-agent-debian`. - -.. toctree:: - :maxdepth: 1 - - neutron-controller-install-option1.rst - neutron-controller-install-option2.rst - -.. _neutron-controller-metadata-agent-debian: - -Configure the metadata agent ----------------------------- - -The :term:`metadata agent ` provides configuration information -such as credentials to instances. - -* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the metadata host and shared - secret: - - .. path /etc/neutron/metadata_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - nova_metadata_ip = controller - metadata_proxy_shared_secret = METADATA_SECRET - - .. end - - Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy. - -Configure the Compute service to use the Networking service ------------------------------------------------------------ - -* Edit the ``/etc/nova/nova.conf`` file and perform the following actions: - - * In the ``[neutron]`` section, configure access parameters, enable the - metadata proxy, and configure the secret: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [neutron] - # ... - url = http://controller:9696 - auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true - metadata_proxy_shared_secret = METADATA_SECRET - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - Replace ``METADATA_SECRET`` with the secret you chose for the metadata - proxy. - -Finalize installation ---------------------- - - - - -#. Populate the database: - - .. code-block:: console - - # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron - - .. end - - .. note:: - - Database population occurs later for Networking because the script - requires complete server and plug-in configuration files. - -#. Restart the Compute API service: - - .. code-block:: console - - # service nova-api restart - - .. end - -#. 
Restart the Networking services. - - For both networking options: - - .. code-block:: console - - # service neutron-server restart - # service neutron-linuxbridge-agent restart - # service neutron-dhcp-agent restart - # service neutron-metadata-agent restart - - .. end - - For networking option 2, also restart the layer-3 service: - - .. code-block:: console - - # service neutron-l3-agent restart - - .. end - diff --git a/doc/install-guide/source/neutron-controller-install-obs.rst b/doc/install-guide/source/neutron-controller-install-obs.rst deleted file mode 100644 index e5a0571685..0000000000 --- a/doc/install-guide/source/neutron-controller-install-obs.rst +++ /dev/null @@ -1,319 +0,0 @@ -Install and configure controller node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Prerequisites -------------- - -Before you configure the OpenStack Networking (neutron) service, you -must create a database, service credentials, and API endpoints. - -#. To create the database, complete these steps: - - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - $ mysql -u root -p - - .. end - - - * Create the ``neutron`` database: - - .. code-block:: console - - MariaDB [(none)] CREATE DATABASE neutron; - - .. end - - * Grant proper access to the ``neutron`` database, replacing - ``NEUTRON_DBPASS`` with a suitable password: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - - .. end - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to admin-only CLI - commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. To create the service credentials, complete these steps: - - * Create the ``neutron`` user: - - .. code-block:: console - - $ openstack user create --domain default --password-prompt neutron - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | fdb0f541e28141719b6a43c8944bf1fb | - | name | neutron | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``neutron`` user: - - .. code-block:: console - - $ openstack role add --project service --user neutron admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``neutron`` service entity: - - .. code-block:: console - - $ openstack service create --name neutron \ - --description "OpenStack Networking" network - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Networking | - | enabled | True | - | id | f71529314dab4a4d8eca427e701d209e | - | name | neutron | - | type | network | - +-------------+----------------------------------+ - - .. end - -#. Create the Networking service API endpoints: - - .. 
code-block:: console - - $ openstack endpoint create --region RegionOne \ - network public http://controller:9696 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 85d80a6d02fc4b7683f611d7fc1493a3 | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | f71529314dab4a4d8eca427e701d209e | - | service_name | neutron | - | service_type | network | - | url | http://controller:9696 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - network internal http://controller:9696 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 09753b537ac74422a68d2d791cf3714f | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | f71529314dab4a4d8eca427e701d209e | - | service_name | neutron | - | service_type | network | - | url | http://controller:9696 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - network admin http://controller:9696 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 1ee14289c9374dffb5db92a5c112fc4e | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | f71529314dab4a4d8eca427e701d209e | - | service_name | neutron | - | service_type | network | - | url | http://controller:9696 | - +--------------+----------------------------------+ - - .. end - -Configure networking options ----------------------------- - -You can deploy the Networking service using one of two architectures -represented by options 1 and 2. - -Option 1 deploys the simplest possible architecture that only supports -attaching instances to provider (external) networks. No self-service (private) -networks, routers, or floating IP addresses. Only the ``admin`` or other -privileged user can manage provider networks. - -Option 2 augments option 1 with layer-3 services that support attaching -instances to self-service networks. The ``demo`` or other unprivileged -user can manage self-service networks including routers that provide -connectivity between self-service and provider networks. Additionally, -floating IP addresses provide connectivity to instances using self-service -networks from external networks such as the Internet. - -Self-service networks typically use overlay networks. Overlay network -protocols such as VXLAN include additional headers that increase overhead -and decrease space available for the payload or user data. Without knowledge -of the virtual network infrastructure, instances attempt to send packets -using the default Ethernet :term:`maximum transmission unit (MTU)` of 1500 -bytes. The Networking service automatically provides the correct MTU value -to instances via DHCP. However, some cloud images do not use DHCP or ignore -the DHCP MTU option and require configuration using metadata or a script. - -.. note:: - - Option 2 also supports attaching instances to provider networks. - -Choose one of the following networking options to configure services -specific to it. Afterwards, return here and proceed to -:ref:`neutron-controller-metadata-agent-obs`. - -.. 
toctree:: - :maxdepth: 1 - - neutron-controller-install-option1.rst - neutron-controller-install-option2.rst - -.. _neutron-controller-metadata-agent-obs: - -Configure the metadata agent ----------------------------- - -The :term:`metadata agent ` provides configuration information -such as credentials to instances. - -* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the metadata host and shared - secret: - - .. path /etc/neutron/metadata_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - nova_metadata_ip = controller - metadata_proxy_shared_secret = METADATA_SECRET - - .. end - - Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy. - -Configure the Compute service to use the Networking service ------------------------------------------------------------ - -* Edit the ``/etc/nova/nova.conf`` file and perform the following actions: - - * In the ``[neutron]`` section, configure access parameters, enable the - metadata proxy, and configure the secret: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [neutron] - # ... - url = http://controller:9696 - auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true - metadata_proxy_shared_secret = METADATA_SECRET - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - Replace ``METADATA_SECRET`` with the secret you chose for the metadata - proxy. - -Finalize installation ---------------------- - - - -.. note:: - - SLES enables apparmor by default and restricts dnsmasq. You need to - either completely disable apparmor or disable only the dnsmasq - profile: - - .. code-block:: console - - # ln -s /etc/apparmor.d/usr.sbin.dnsmasq /etc/apparmor.d/disable/ - # systemctl restart apparmor - - .. end - -#. Restart the Compute API service: - - .. code-block:: console - - # systemctl restart openstack-nova-api.service - - .. end - -#. Start the Networking services and configure them to start when the system - boots. - - For both networking options: - - .. code-block:: console - - # systemctl enable openstack-neutron.service \ - openstack-neutron-linuxbridge-agent.service \ - openstack-neutron-dhcp-agent.service \ - openstack-neutron-metadata-agent.service - # systemctl start openstack-neutron.service \ - openstack-neutron-linuxbridge-agent.service \ - openstack-neutron-dhcp-agent.service \ - openstack-neutron-metadata-agent.service - - .. end - - For networking option 2, also enable and start the layer-3 service: - - .. code-block:: console - - # systemctl enable openstack-neutron-l3-agent.service - # systemctl start openstack-neutron-l3-agent.service - - .. end - - diff --git a/doc/install-guide/source/neutron-controller-install-option1-debian.rst b/doc/install-guide/source/neutron-controller-install-option1-debian.rst deleted file mode 100644 index 23d958ba4f..0000000000 --- a/doc/install-guide/source/neutron-controller-install-option1-debian.rst +++ /dev/null @@ -1,287 +0,0 @@ -Networking Option 1: Provider networks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Install and configure the Networking components on the *controller* node. - -Install the components ----------------------- - - -.. 
code-block:: console - - # apt install neutron-server neutron-linuxbridge-agent \ - neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent - -.. end - -Configure the server component ------------------------------- - -The Networking server component configuration includes the database, -authentication mechanism, message queue, topology change notifications, -and plug-in. - -.. include:: shared/note_configuration_vary_by_distribution.rst - -* Edit the ``/etc/neutron/neutron.conf`` file and complete the following - actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron - - .. end - - Replace ``NEUTRON_DBPASS`` with the password you chose for the - database. - - .. note:: - - Comment out or remove any other ``connection`` options in the - ``[database]`` section. - - * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) - plug-in and disable additional plug-ins: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - core_plugin = ml2 - service_plugins = - - .. end - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in RabbitMQ. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure - Identity service access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to - notify Compute of network topology changes: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - notify_nova_on_port_status_changes = true - notify_nova_on_port_data_changes = true - - [nova] - # ... - auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the ``nova`` - user in the Identity service. - -Configure the Modular Layer 2 (ML2) plug-in -------------------------------------------- - -The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging -and switching) virtual networking infrastructure for instances. - -* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the - following actions: - - * In the ``[ml2]`` section, enable flat and VLAN networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - type_drivers = flat,vlan - - .. end - - * In the ``[ml2]`` section, disable self-service networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. 
code-block:: ini - - [ml2] - # ... - tenant_network_types = - - .. end - - * In the ``[ml2]`` section, enable the Linux bridge mechanism: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - mechanism_drivers = linuxbridge - - .. end - - .. warning:: - - After you configure the ML2 plug-in, removing values in the - ``type_drivers`` option can lead to database inconsistency. - - * In the ``[ml2]`` section, enable the port security extension driver: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - extension_drivers = port_security - - .. end - - * In the ``[ml2_type_flat]`` section, configure the provider virtual - network as a flat network: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2_type_flat] - # ... - flat_networks = provider - - .. end - - * In the ``[securitygroup]`` section, enable :term:`ipset` to increase - efficiency of security group rules: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_ipset = true - - .. end - -Configure the Linux bridge agent --------------------------------- - -The Linux bridge agent builds layer-2 (bridging and switching) virtual -networking infrastructure for instances and handles security groups. - -* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and - complete the following actions: - - * In the ``[linux_bridge]`` section, map the provider virtual network to the - provider physical network interface: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - .. end - - Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying - provider physical network interface. See :ref:`environment-networking` - for more information. - - * In the ``[vxlan]`` section, disable VXLAN overlay networks: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [vxlan] - enable_vxlan = false - - .. end - - * In the ``[securitygroup]`` section, enable security groups and - configure the Linux bridge :term:`iptables` firewall driver: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - - .. end - -Configure the DHCP agent ------------------------- - -The :term:`DHCP agent` provides DHCP services for virtual networks. - -* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver, - Dnsmasq DHCP driver, and enable isolated metadata so instances on provider - networks can access metadata over the network: - - .. path /etc/neutron/dhcp_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - - .. end - -Return to *Networking controller node configuration*. 
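
After the Networking services are started during the finalize steps on the
controller node, an optional way to confirm that the agents configured above
registered correctly is to list them; the Linux bridge, DHCP, and metadata
agents should report as alive:

.. code-block:: console

   $ openstack network agent list

.. end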
diff --git a/doc/install-guide/source/neutron-controller-install-option1-obs.rst b/doc/install-guide/source/neutron-controller-install-option1-obs.rst deleted file mode 100644 index d5489eee48..0000000000 --- a/doc/install-guide/source/neutron-controller-install-option1-obs.rst +++ /dev/null @@ -1,289 +0,0 @@ -Networking Option 1: Provider networks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Install and configure the Networking components on the *controller* node. - -Install the components ----------------------- - - -.. code-block:: console - - # zypper install --no-recommends openstack-neutron \ - openstack-neutron-server openstack-neutron-linuxbridge-agent \ - openstack-neutron-dhcp-agent openstack-neutron-metadata-agent \ - bridge-utils - -.. end - -Configure the server component ------------------------------- - -The Networking server component configuration includes the database, -authentication mechanism, message queue, topology change notifications, -and plug-in. - -.. include:: shared/note_configuration_vary_by_distribution.rst - -* Edit the ``/etc/neutron/neutron.conf`` file and complete the following - actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron - - .. end - - Replace ``NEUTRON_DBPASS`` with the password you chose for the - database. - - .. note:: - - Comment out or remove any other ``connection`` options in the - ``[database]`` section. - - * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) - plug-in and disable additional plug-ins: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - core_plugin = ml2 - service_plugins = - - .. end - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in RabbitMQ. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure - Identity service access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to - notify Compute of network topology changes: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - notify_nova_on_port_status_changes = true - notify_nova_on_port_data_changes = true - - [nova] - # ... - auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the ``nova`` - user in the Identity service. 
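
Taken together, the ``[DEFAULT]`` edits above amount to something like the
following; this is only a recap of the values already described, not an
additional step, and the placeholder passwords stay as placeholders:

.. path /etc/neutron/neutron.conf
.. code-block:: ini

   [DEFAULT]
   # ML2 as the core plug-in, with no additional service plug-ins
   # (provider networks only)
   core_plugin = ml2
   service_plugins =
   # RabbitMQ transport and Identity-based authentication
   transport_url = rabbit://openstack:RABBIT_PASS@controller
   auth_strategy = keystone
   # Notify Compute of network topology changes
   notify_nova_on_port_status_changes = true
   notify_nova_on_port_data_changes = true

.. end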
- -Configure the Modular Layer 2 (ML2) plug-in -------------------------------------------- - -The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging -and switching) virtual networking infrastructure for instances. - -* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the - following actions: - - * In the ``[ml2]`` section, enable flat and VLAN networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - type_drivers = flat,vlan - - .. end - - * In the ``[ml2]`` section, disable self-service networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - tenant_network_types = - - .. end - - * In the ``[ml2]`` section, enable the Linux bridge mechanism: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - mechanism_drivers = linuxbridge - - .. end - - .. warning:: - - After you configure the ML2 plug-in, removing values in the - ``type_drivers`` option can lead to database inconsistency. - - * In the ``[ml2]`` section, enable the port security extension driver: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - extension_drivers = port_security - - .. end - - * In the ``[ml2_type_flat]`` section, configure the provider virtual - network as a flat network: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2_type_flat] - # ... - flat_networks = provider - - .. end - - * In the ``[securitygroup]`` section, enable :term:`ipset` to increase - efficiency of security group rules: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_ipset = true - - .. end - -Configure the Linux bridge agent --------------------------------- - -The Linux bridge agent builds layer-2 (bridging and switching) virtual -networking infrastructure for instances and handles security groups. - -* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and - complete the following actions: - - * In the ``[linux_bridge]`` section, map the provider virtual network to the - provider physical network interface: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - .. end - - Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying - provider physical network interface. See :ref:`environment-networking` - for more information. - - * In the ``[vxlan]`` section, disable VXLAN overlay networks: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [vxlan] - enable_vxlan = false - - .. end - - * In the ``[securitygroup]`` section, enable security groups and - configure the Linux bridge :term:`iptables` firewall driver: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - - .. end - -Configure the DHCP agent ------------------------- - -The :term:`DHCP agent` provides DHCP services for virtual networks. - -* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver, - Dnsmasq DHCP driver, and enable isolated metadata so instances on provider - networks can access metadata over the network: - - .. 
path /etc/neutron/dhcp_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - - .. end - -Return to *Networking controller node configuration*. diff --git a/doc/install-guide/source/neutron-controller-install-option1-rdo.rst b/doc/install-guide/source/neutron-controller-install-option1-rdo.rst deleted file mode 100644 index dc67e942b6..0000000000 --- a/doc/install-guide/source/neutron-controller-install-option1-rdo.rst +++ /dev/null @@ -1,299 +0,0 @@ -Networking Option 1: Provider networks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Install and configure the Networking components on the *controller* node. - -Install the components ----------------------- - - -.. code-block:: console - - # yum install openstack-neutron openstack-neutron-ml2 \ - openstack-neutron-linuxbridge ebtables - -.. end - -Configure the server component ------------------------------- - -The Networking server component configuration includes the database, -authentication mechanism, message queue, topology change notifications, -and plug-in. - -.. include:: shared/note_configuration_vary_by_distribution.rst - -* Edit the ``/etc/neutron/neutron.conf`` file and complete the following - actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron - - .. end - - Replace ``NEUTRON_DBPASS`` with the password you chose for the - database. - - .. note:: - - Comment out or remove any other ``connection`` options in the - ``[database]`` section. - - * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) - plug-in and disable additional plug-ins: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - core_plugin = ml2 - service_plugins = - - .. end - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in RabbitMQ. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure - Identity service access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to - notify Compute of network topology changes: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - notify_nova_on_port_status_changes = true - notify_nova_on_port_data_changes = true - - [nova] - # ... 
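   # Credentials of the nova service user created during the Compute service
   # installation; Networking uses them to send the notifications enabled above.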
- auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the ``nova`` - user in the Identity service. - - -* In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/lib/neutron/tmp - - .. end - -Configure the Modular Layer 2 (ML2) plug-in -------------------------------------------- - -The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging -and switching) virtual networking infrastructure for instances. - -* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the - following actions: - - * In the ``[ml2]`` section, enable flat and VLAN networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - type_drivers = flat,vlan - - .. end - - * In the ``[ml2]`` section, disable self-service networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - tenant_network_types = - - .. end - - * In the ``[ml2]`` section, enable the Linux bridge mechanism: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - mechanism_drivers = linuxbridge - - .. end - - .. warning:: - - After you configure the ML2 plug-in, removing values in the - ``type_drivers`` option can lead to database inconsistency. - - * In the ``[ml2]`` section, enable the port security extension driver: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - extension_drivers = port_security - - .. end - - * In the ``[ml2_type_flat]`` section, configure the provider virtual - network as a flat network: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2_type_flat] - # ... - flat_networks = provider - - .. end - - * In the ``[securitygroup]`` section, enable :term:`ipset` to increase - efficiency of security group rules: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_ipset = true - - .. end - -Configure the Linux bridge agent --------------------------------- - -The Linux bridge agent builds layer-2 (bridging and switching) virtual -networking infrastructure for instances and handles security groups. - -* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and - complete the following actions: - - * In the ``[linux_bridge]`` section, map the provider virtual network to the - provider physical network interface: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - .. end - - Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying - provider physical network interface. See :ref:`environment-networking` - for more information. - - * In the ``[vxlan]`` section, disable VXLAN overlay networks: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [vxlan] - enable_vxlan = false - - .. end - - * In the ``[securitygroup]`` section, enable security groups and - configure the Linux bridge :term:`iptables` firewall driver: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [securitygroup] - # ... 
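   # Enable security groups and select the Linux bridge iptables firewall driver.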
- enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - - .. end - -Configure the DHCP agent ------------------------- - -The :term:`DHCP agent` provides DHCP services for virtual networks. - -* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver, - Dnsmasq DHCP driver, and enable isolated metadata so instances on provider - networks can access metadata over the network: - - .. path /etc/neutron/dhcp_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - - .. end - -Return to *Networking controller node configuration*. diff --git a/doc/install-guide/source/neutron-controller-install-option1-ubuntu.rst b/doc/install-guide/source/neutron-controller-install-option1-ubuntu.rst deleted file mode 100644 index dba2650232..0000000000 --- a/doc/install-guide/source/neutron-controller-install-option1-ubuntu.rst +++ /dev/null @@ -1,288 +0,0 @@ -Networking Option 1: Provider networks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Install and configure the Networking components on the *controller* node. - -Install the components ----------------------- - - -.. code-block:: console - - # apt install neutron-server neutron-plugin-ml2 \ - neutron-linuxbridge-agent neutron-dhcp-agent \ - neutron-metadata-agent - -.. end - -Configure the server component ------------------------------- - -The Networking server component configuration includes the database, -authentication mechanism, message queue, topology change notifications, -and plug-in. - -.. include:: shared/note_configuration_vary_by_distribution.rst - -* Edit the ``/etc/neutron/neutron.conf`` file and complete the following - actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron - - .. end - - Replace ``NEUTRON_DBPASS`` with the password you chose for the - database. - - .. note:: - - Comment out or remove any other ``connection`` options in the - ``[database]`` section. - - * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) - plug-in and disable additional plug-ins: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - core_plugin = ml2 - service_plugins = - - .. end - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in RabbitMQ. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure - Identity service access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - .. 
note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to - notify Compute of network topology changes: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - notify_nova_on_port_status_changes = true - notify_nova_on_port_data_changes = true - - [nova] - # ... - auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the ``nova`` - user in the Identity service. - -Configure the Modular Layer 2 (ML2) plug-in -------------------------------------------- - -The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging -and switching) virtual networking infrastructure for instances. - -* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the - following actions: - - * In the ``[ml2]`` section, enable flat and VLAN networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - type_drivers = flat,vlan - - .. end - - * In the ``[ml2]`` section, disable self-service networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - tenant_network_types = - - .. end - - * In the ``[ml2]`` section, enable the Linux bridge mechanism: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - mechanism_drivers = linuxbridge - - .. end - - .. warning:: - - After you configure the ML2 plug-in, removing values in the - ``type_drivers`` option can lead to database inconsistency. - - * In the ``[ml2]`` section, enable the port security extension driver: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - extension_drivers = port_security - - .. end - - * In the ``[ml2_type_flat]`` section, configure the provider virtual - network as a flat network: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2_type_flat] - # ... - flat_networks = provider - - .. end - - * In the ``[securitygroup]`` section, enable :term:`ipset` to increase - efficiency of security group rules: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_ipset = true - - .. end - -Configure the Linux bridge agent --------------------------------- - -The Linux bridge agent builds layer-2 (bridging and switching) virtual -networking infrastructure for instances and handles security groups. - -* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and - complete the following actions: - - * In the ``[linux_bridge]`` section, map the provider virtual network to the - provider physical network interface: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - .. end - - Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying - provider physical network interface. See :ref:`environment-networking` - for more information. - - * In the ``[vxlan]`` section, disable VXLAN overlay networks: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [vxlan] - enable_vxlan = false - - .. 
end - - * In the ``[securitygroup]`` section, enable security groups and - configure the Linux bridge :term:`iptables` firewall driver: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - - .. end - -Configure the DHCP agent ------------------------- - -The :term:`DHCP agent` provides DHCP services for virtual networks. - -* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver, - Dnsmasq DHCP driver, and enable isolated metadata so instances on provider - networks can access metadata over the network: - - .. path /etc/neutron/dhcp_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - - .. end - -Return to *Networking controller node configuration*. diff --git a/doc/install-guide/source/neutron-controller-install-option1.rst b/doc/install-guide/source/neutron-controller-install-option1.rst deleted file mode 100644 index 47983e7667..0000000000 --- a/doc/install-guide/source/neutron-controller-install-option1.rst +++ /dev/null @@ -1,9 +0,0 @@ -Networking Option 1: Provider networks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Install and configure the Networking components on the *controller* node. - -.. toctree:: - :glob: - - neutron-controller-install-option1-* diff --git a/doc/install-guide/source/neutron-controller-install-option2-debian.rst b/doc/install-guide/source/neutron-controller-install-option2-debian.rst deleted file mode 100644 index 7286cf8e36..0000000000 --- a/doc/install-guide/source/neutron-controller-install-option2-debian.rst +++ /dev/null @@ -1,335 +0,0 @@ -Networking Option 2: Self-service networks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Install and configure the Networking components on the *controller* node. - -Install the components ----------------------- - - - - - -#. .. code-block:: console - - # apt install neutron-server neutron-linuxbridge-agent \ - neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent - - .. end - - -Configure the server component ------------------------------- - -* Edit the ``/etc/neutron/neutron.conf`` file and complete the following - actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron - - .. end - - Replace ``NEUTRON_DBPASS`` with the password you chose for the - database. - - .. note:: - - Comment out or remove any other ``connection`` options in the - ``[database]`` section. - - * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) - plug-in, router service, and overlapping IP addresses: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - core_plugin = ml2 - service_plugins = router - allow_overlapping_ips = true - - .. end - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in RabbitMQ. 
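
    If you want to confirm that the ``openstack`` RabbitMQ account referenced
    here exists before continuing, an optional check on the controller node
    (not part of the original steps) is:

    .. code-block:: console

       # rabbitmqctl list_users

    .. end

    The ``openstack`` user should appear in the resulting list.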
- - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure - Identity service access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to - notify Compute of network topology changes: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - notify_nova_on_port_status_changes = true - notify_nova_on_port_data_changes = true - - [nova] - # ... - auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the ``nova`` - user in the Identity service. - -Configure the Modular Layer 2 (ML2) plug-in -------------------------------------------- - -The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging -and switching) virtual networking infrastructure for instances. - -* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the - following actions: - - * In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - type_drivers = flat,vlan,vxlan - - .. end - - * In the ``[ml2]`` section, enable VXLAN self-service networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - tenant_network_types = vxlan - - .. end - - * In the ``[ml2]`` section, enable the Linux bridge and layer-2 population - mechanisms: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - mechanism_drivers = linuxbridge,l2population - - .. end - - .. warning:: - - After you configure the ML2 plug-in, removing values in the - ``type_drivers`` option can lead to database inconsistency. - - .. note:: - - The Linux bridge agent only supports VXLAN overlay networks. - - * In the ``[ml2]`` section, enable the port security extension driver: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - extension_drivers = port_security - - .. end - - * In the ``[ml2_type_flat]`` section, configure the provider virtual - network as a flat network: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2_type_flat] - # ... - flat_networks = provider - - .. end - - * In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier - range for self-service networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2_type_vxlan] - # ... - vni_ranges = 1:1000 - - .. end - - * In the ``[securitygroup]`` section, enable :term:`ipset` to increase - efficiency of security group rules: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_ipset = true - - .. 
end - -Configure the Linux bridge agent --------------------------------- - -The Linux bridge agent builds layer-2 (bridging and switching) virtual -networking infrastructure for instances and handles security groups. - -* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and - complete the following actions: - - * In the ``[linux_bridge]`` section, map the provider virtual network to the - provider physical network interface: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - .. end - - Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying - provider physical network interface. See :ref:`environment-networking` - for more information. - - * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the - IP address of the physical network interface that handles overlay - networks, and enable layer-2 population: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [vxlan] - enable_vxlan = true - local_ip = OVERLAY_INTERFACE_IP_ADDRESS - l2_population = true - - .. end - - Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the - underlying physical network interface that handles overlay networks. The - example architecture uses the management interface to tunnel traffic to - the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with - the management IP address of the controller node. See - :ref:`environment-networking` for more information. - - * In the ``[securitygroup]`` section, enable security groups and - configure the Linux bridge :term:`iptables` firewall driver: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - - .. end - -Configure the layer-3 agent ---------------------------- - -The :term:`Layer-3 (L3) agent` provides routing and NAT services for -self-service virtual networks. - -* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver - and external network bridge: - - .. path /etc/neutron/l3_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - interface_driver = linuxbridge - - .. end - -Configure the DHCP agent ------------------------- - -The :term:`DHCP agent` provides DHCP services for virtual networks. - -* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver, - Dnsmasq DHCP driver, and enable isolated metadata so instances on provider - networks can access metadata over the network: - - .. path /etc/neutron/dhcp_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - - .. end - -Return to *Networking controller node configuration*. 
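Optionally, before you return, you can run a quick sanity check against the
files edited above. One way to do this (a sketch only; the checks performed
depend on the installed neutron version) is the ``neutron-sanity-check``
client that the verification chapter also mentions:

.. code-block:: console

   # neutron-sanity-check --config-file /etc/neutron/neutron.conf \
     --config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
     --config-file /etc/neutron/plugins/ml2/linuxbridge_agent.ini

.. end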
diff --git a/doc/install-guide/source/neutron-controller-install-option2-obs.rst b/doc/install-guide/source/neutron-controller-install-option2-obs.rst deleted file mode 100644 index 4993b7034d..0000000000 --- a/doc/install-guide/source/neutron-controller-install-option2-obs.rst +++ /dev/null @@ -1,337 +0,0 @@ -Networking Option 2: Self-service networks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Install and configure the Networking components on the *controller* node. - -Install the components ----------------------- - - - - -.. code-block:: console - - # zypper install --no-recommends openstack-neutron \ - openstack-neutron-server openstack-neutron-linuxbridge-agent \ - openstack-neutron-l3-agent openstack-neutron-dhcp-agent \ - openstack-neutron-metadata-agent bridge-utils - -.. end - - - -Configure the server component ------------------------------- - -* Edit the ``/etc/neutron/neutron.conf`` file and complete the following - actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron - - .. end - - Replace ``NEUTRON_DBPASS`` with the password you chose for the - database. - - .. note:: - - Comment out or remove any other ``connection`` options in the - ``[database]`` section. - - * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) - plug-in, router service, and overlapping IP addresses: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - core_plugin = ml2 - service_plugins = router - allow_overlapping_ips = true - - .. end - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in RabbitMQ. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure - Identity service access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to - notify Compute of network topology changes: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - notify_nova_on_port_status_changes = true - notify_nova_on_port_data_changes = true - - [nova] - # ... - auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the ``nova`` - user in the Identity service. 
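Before moving on to the ML2 plug-in, you can optionally confirm that the
``neutron`` credentials configured above are accepted by the Identity
service. The following is only a sketch; it assumes the ``openstack`` client
is available on the controller node and reuses the values from the
``[keystone_authtoken]`` section:

.. code-block:: console

   $ openstack --os-auth-url http://controller:35357/v3 \
     --os-identity-api-version 3 \
     --os-project-domain-name default --os-user-domain-name default \
     --os-project-name service --os-username neutron \
     --os-password NEUTRON_PASS token issue

.. end

A token in the output indicates that the user, project, and password match
what the Identity service expects; an authentication error usually points to
a mismatch with the values in ``neutron.conf``.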
- -Configure the Modular Layer 2 (ML2) plug-in -------------------------------------------- - -The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging -and switching) virtual networking infrastructure for instances. - -* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the - following actions: - - * In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - type_drivers = flat,vlan,vxlan - - .. end - - * In the ``[ml2]`` section, enable VXLAN self-service networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - tenant_network_types = vxlan - - .. end - - * In the ``[ml2]`` section, enable the Linux bridge and layer-2 population - mechanisms: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - mechanism_drivers = linuxbridge,l2population - - .. end - - .. warning:: - - After you configure the ML2 plug-in, removing values in the - ``type_drivers`` option can lead to database inconsistency. - - .. note:: - - The Linux bridge agent only supports VXLAN overlay networks. - - * In the ``[ml2]`` section, enable the port security extension driver: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - extension_drivers = port_security - - .. end - - * In the ``[ml2_type_flat]`` section, configure the provider virtual - network as a flat network: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2_type_flat] - # ... - flat_networks = provider - - .. end - - * In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier - range for self-service networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2_type_vxlan] - # ... - vni_ranges = 1:1000 - - .. end - - * In the ``[securitygroup]`` section, enable :term:`ipset` to increase - efficiency of security group rules: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_ipset = true - - .. end - -Configure the Linux bridge agent --------------------------------- - -The Linux bridge agent builds layer-2 (bridging and switching) virtual -networking infrastructure for instances and handles security groups. - -* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and - complete the following actions: - - * In the ``[linux_bridge]`` section, map the provider virtual network to the - provider physical network interface: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - .. end - - Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying - provider physical network interface. See :ref:`environment-networking` - for more information. - - * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the - IP address of the physical network interface that handles overlay - networks, and enable layer-2 population: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [vxlan] - enable_vxlan = true - local_ip = OVERLAY_INTERFACE_IP_ADDRESS - l2_population = true - - .. end - - Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the - underlying physical network interface that handles overlay networks. 
The - example architecture uses the management interface to tunnel traffic to - the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with - the management IP address of the controller node. See - :ref:`environment-networking` for more information. - - * In the ``[securitygroup]`` section, enable security groups and - configure the Linux bridge :term:`iptables` firewall driver: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - - .. end - -Configure the layer-3 agent ---------------------------- - -The :term:`Layer-3 (L3) agent` provides routing and NAT services for -self-service virtual networks. - -* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver - and external network bridge: - - .. path /etc/neutron/l3_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - interface_driver = linuxbridge - - .. end - -Configure the DHCP agent ------------------------- - -The :term:`DHCP agent` provides DHCP services for virtual networks. - -* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver, - Dnsmasq DHCP driver, and enable isolated metadata so instances on provider - networks can access metadata over the network: - - .. path /etc/neutron/dhcp_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - - .. end - -Return to *Networking controller node configuration*. diff --git a/doc/install-guide/source/neutron-controller-install-option2-rdo.rst b/doc/install-guide/source/neutron-controller-install-option2-rdo.rst deleted file mode 100644 index c966754fb2..0000000000 --- a/doc/install-guide/source/neutron-controller-install-option2-rdo.rst +++ /dev/null @@ -1,347 +0,0 @@ -Networking Option 2: Self-service networks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Install and configure the Networking components on the *controller* node. - -Install the components ----------------------- - - - -.. code-block:: console - - # yum install openstack-neutron openstack-neutron-ml2 \ - openstack-neutron-linuxbridge ebtables - -.. end - - - - -Configure the server component ------------------------------- - -* Edit the ``/etc/neutron/neutron.conf`` file and complete the following - actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron - - .. end - - Replace ``NEUTRON_DBPASS`` with the password you chose for the - database. - - .. note:: - - Comment out or remove any other ``connection`` options in the - ``[database]`` section. - - * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) - plug-in, router service, and overlapping IP addresses: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - core_plugin = ml2 - service_plugins = router - allow_overlapping_ips = true - - .. end - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... 
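   # (Optional note, not part of the sample file.) The transport_url value on
   # the next line uses the oslo.messaging URL format
   #   rabbit://USERNAME:PASSWORD@HOSTNAME:PORT/VIRTUAL_HOST
   # where the port and virtual host are optional and typically default to
   # 5672 and "/", which is why the short form below is sufficient here.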
- transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in RabbitMQ. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure - Identity service access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to - notify Compute of network topology changes: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - notify_nova_on_port_status_changes = true - notify_nova_on_port_data_changes = true - - [nova] - # ... - auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the ``nova`` - user in the Identity service. - - -* In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/lib/neutron/tmp - - .. end - -Configure the Modular Layer 2 (ML2) plug-in -------------------------------------------- - -The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging -and switching) virtual networking infrastructure for instances. - -* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the - following actions: - - * In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - type_drivers = flat,vlan,vxlan - - .. end - - * In the ``[ml2]`` section, enable VXLAN self-service networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - tenant_network_types = vxlan - - .. end - - * In the ``[ml2]`` section, enable the Linux bridge and layer-2 population - mechanisms: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - mechanism_drivers = linuxbridge,l2population - - .. end - - .. warning:: - - After you configure the ML2 plug-in, removing values in the - ``type_drivers`` option can lead to database inconsistency. - - .. note:: - - The Linux bridge agent only supports VXLAN overlay networks. - - * In the ``[ml2]`` section, enable the port security extension driver: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - extension_drivers = port_security - - .. end - - * In the ``[ml2_type_flat]`` section, configure the provider virtual - network as a flat network: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2_type_flat] - # ... - flat_networks = provider - - .. end - - * In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier - range for self-service networks: - - .. 
path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2_type_vxlan] - # ... - vni_ranges = 1:1000 - - .. end - - * In the ``[securitygroup]`` section, enable :term:`ipset` to increase - efficiency of security group rules: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_ipset = true - - .. end - -Configure the Linux bridge agent --------------------------------- - -The Linux bridge agent builds layer-2 (bridging and switching) virtual -networking infrastructure for instances and handles security groups. - -* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and - complete the following actions: - - * In the ``[linux_bridge]`` section, map the provider virtual network to the - provider physical network interface: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - .. end - - Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying - provider physical network interface. See :ref:`environment-networking` - for more information. - - * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the - IP address of the physical network interface that handles overlay - networks, and enable layer-2 population: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [vxlan] - enable_vxlan = true - local_ip = OVERLAY_INTERFACE_IP_ADDRESS - l2_population = true - - .. end - - Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the - underlying physical network interface that handles overlay networks. The - example architecture uses the management interface to tunnel traffic to - the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with - the management IP address of the controller node. See - :ref:`environment-networking` for more information. - - * In the ``[securitygroup]`` section, enable security groups and - configure the Linux bridge :term:`iptables` firewall driver: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - - .. end - -Configure the layer-3 agent ---------------------------- - -The :term:`Layer-3 (L3) agent` provides routing and NAT services for -self-service virtual networks. - -* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver - and external network bridge: - - .. path /etc/neutron/l3_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - interface_driver = linuxbridge - - .. end - -Configure the DHCP agent ------------------------- - -The :term:`DHCP agent` provides DHCP services for virtual networks. - -* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver, - Dnsmasq DHCP driver, and enable isolated metadata so instances on provider - networks can access metadata over the network: - - .. path /etc/neutron/dhcp_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - - .. end - -Return to *Networking controller node configuration*. 
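Because networking option 2 carries self-service traffic over VXLAN, it can
also be worth confirming that the kernel on this node provides the VXLAN
module before you start the agents. This optional check is a sketch only:

.. code-block:: console

   # modinfo vxlan | head -3
   # lsmod | grep vxlan

.. end

An empty result from ``lsmod`` is not necessarily a problem because the
module is loaded on demand, but ``modinfo`` failing to find the module
suggests that the running kernel lacks VXLAN support.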
diff --git a/doc/install-guide/source/neutron-controller-install-option2-ubuntu.rst b/doc/install-guide/source/neutron-controller-install-option2-ubuntu.rst deleted file mode 100644 index 2560e728e4..0000000000 --- a/doc/install-guide/source/neutron-controller-install-option2-ubuntu.rst +++ /dev/null @@ -1,336 +0,0 @@ -Networking Option 2: Self-service networks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Install and configure the Networking components on the *controller* node. - -Install the components ----------------------- - - -.. code-block:: console - - # apt install neutron-server neutron-plugin-ml2 \ - neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent \ - neutron-metadata-agent - -.. end - - - - - -Configure the server component ------------------------------- - -* Edit the ``/etc/neutron/neutron.conf`` file and complete the following - actions: - - * In the ``[database]`` section, configure database access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [database] - # ... - connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron - - .. end - - Replace ``NEUTRON_DBPASS`` with the password you chose for the - database. - - .. note:: - - Comment out or remove any other ``connection`` options in the - ``[database]`` section. - - * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2) - plug-in, router service, and overlapping IP addresses: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - core_plugin = ml2 - service_plugins = router - allow_overlapping_ips = true - - .. end - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in RabbitMQ. - - * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure - Identity service access: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = neutron - password = NEUTRON_PASS - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to - notify Compute of network topology changes: - - .. path /etc/neutron/neutron.conf - .. code-block:: ini - - [DEFAULT] - # ... - notify_nova_on_port_status_changes = true - notify_nova_on_port_data_changes = true - - [nova] - # ... - auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the ``nova`` - user in the Identity service. - -Configure the Modular Layer 2 (ML2) plug-in -------------------------------------------- - -The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging -and switching) virtual networking infrastructure for instances. 
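The driver names used in the steps below (``flat``, ``vlan``, ``vxlan``,
``linuxbridge``, and ``l2population``) refer to plug-in entry points shipped
with the neutron packages. If you are curious which drivers your installation
actually provides, something like the following may work; this is a sketch
and assumes the neutron Python package is importable on the node:

.. code-block:: console

   $ python -c "import pkg_resources; print(sorted(e.name for e in pkg_resources.iter_entry_points('neutron.ml2.type_drivers')))"
   $ python -c "import pkg_resources; print(sorted(e.name for e in pkg_resources.iter_entry_points('neutron.ml2.mechanism_drivers')))"

.. end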
- -* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the - following actions: - - * In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - type_drivers = flat,vlan,vxlan - - .. end - - * In the ``[ml2]`` section, enable VXLAN self-service networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - tenant_network_types = vxlan - - .. end - - * In the ``[ml2]`` section, enable the Linux bridge and layer-2 population - mechanisms: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - mechanism_drivers = linuxbridge,l2population - - .. end - - .. warning:: - - After you configure the ML2 plug-in, removing values in the - ``type_drivers`` option can lead to database inconsistency. - - .. note:: - - The Linux bridge agent only supports VXLAN overlay networks. - - * In the ``[ml2]`` section, enable the port security extension driver: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2] - # ... - extension_drivers = port_security - - .. end - - * In the ``[ml2_type_flat]`` section, configure the provider virtual - network as a flat network: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2_type_flat] - # ... - flat_networks = provider - - .. end - - * In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier - range for self-service networks: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [ml2_type_vxlan] - # ... - vni_ranges = 1:1000 - - .. end - - * In the ``[securitygroup]`` section, enable :term:`ipset` to increase - efficiency of security group rules: - - .. path /etc/neutron/plugins/ml2/ml2_conf.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_ipset = true - - .. end - -Configure the Linux bridge agent --------------------------------- - -The Linux bridge agent builds layer-2 (bridging and switching) virtual -networking infrastructure for instances and handles security groups. - -* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and - complete the following actions: - - * In the ``[linux_bridge]`` section, map the provider virtual network to the - provider physical network interface: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [linux_bridge] - physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME - - .. end - - Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying - provider physical network interface. See :ref:`environment-networking` - for more information. - - * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the - IP address of the physical network interface that handles overlay - networks, and enable layer-2 population: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [vxlan] - enable_vxlan = true - local_ip = OVERLAY_INTERFACE_IP_ADDRESS - l2_population = true - - .. end - - Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the - underlying physical network interface that handles overlay networks. The - example architecture uses the management interface to tunnel traffic to - the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with - the management IP address of the controller node. See - :ref:`environment-networking` for more information. 
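    If you are not sure which address to use for
    ``OVERLAY_INTERFACE_IP_ADDRESS``, you can list the IPv4 addresses
    configured on the management interface. In this sketch,
    ``MANAGEMENT_INTERFACE_NAME`` is a placeholder for the actual interface
    name in your environment:

    .. code-block:: console

       # ip -4 addr show dev MANAGEMENT_INTERFACE_NAME

    .. end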
- - * In the ``[securitygroup]`` section, enable security groups and - configure the Linux bridge :term:`iptables` firewall driver: - - .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini - .. code-block:: ini - - [securitygroup] - # ... - enable_security_group = true - firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver - - .. end - -Configure the layer-3 agent ---------------------------- - -The :term:`Layer-3 (L3) agent` provides routing and NAT services for -self-service virtual networks. - -* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver - and external network bridge: - - .. path /etc/neutron/l3_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - interface_driver = linuxbridge - - .. end - -Configure the DHCP agent ------------------------- - -The :term:`DHCP agent` provides DHCP services for virtual networks. - -* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver, - Dnsmasq DHCP driver, and enable isolated metadata so instances on provider - networks can access metadata over the network: - - .. path /etc/neutron/dhcp_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - interface_driver = linuxbridge - dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq - enable_isolated_metadata = true - - .. end - -Return to *Networking controller node configuration*. diff --git a/doc/install-guide/source/neutron-controller-install-option2.rst b/doc/install-guide/source/neutron-controller-install-option2.rst deleted file mode 100644 index c588305cc3..0000000000 --- a/doc/install-guide/source/neutron-controller-install-option2.rst +++ /dev/null @@ -1,9 +0,0 @@ -Networking Option 2: Self-service networks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Install and configure the Networking components on the *controller* node. - -.. toctree:: - :glob: - - neutron-controller-install-option2-* diff --git a/doc/install-guide/source/neutron-controller-install-rdo.rst b/doc/install-guide/source/neutron-controller-install-rdo.rst deleted file mode 100644 index 5363a7d71c..0000000000 --- a/doc/install-guide/source/neutron-controller-install-rdo.rst +++ /dev/null @@ -1,329 +0,0 @@ -Install and configure controller node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Prerequisites -------------- - -Before you configure the OpenStack Networking (neutron) service, you -must create a database, service credentials, and API endpoints. - -#. To create the database, complete these steps: - - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - $ mysql -u root -p - - .. end - - - * Create the ``neutron`` database: - - .. code-block:: console - - MariaDB [(none)] CREATE DATABASE neutron; - - .. end - - * Grant proper access to the ``neutron`` database, replacing - ``NEUTRON_DBPASS`` with a suitable password: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - - .. end - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to admin-only CLI - commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. 
To create the service credentials, complete these steps: - - * Create the ``neutron`` user: - - .. code-block:: console - - $ openstack user create --domain default --password-prompt neutron - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | fdb0f541e28141719b6a43c8944bf1fb | - | name | neutron | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``neutron`` user: - - .. code-block:: console - - $ openstack role add --project service --user neutron admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``neutron`` service entity: - - .. code-block:: console - - $ openstack service create --name neutron \ - --description "OpenStack Networking" network - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Networking | - | enabled | True | - | id | f71529314dab4a4d8eca427e701d209e | - | name | neutron | - | type | network | - +-------------+----------------------------------+ - - .. end - -#. Create the Networking service API endpoints: - - .. code-block:: console - - $ openstack endpoint create --region RegionOne \ - network public http://controller:9696 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 85d80a6d02fc4b7683f611d7fc1493a3 | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | f71529314dab4a4d8eca427e701d209e | - | service_name | neutron | - | service_type | network | - | url | http://controller:9696 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - network internal http://controller:9696 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 09753b537ac74422a68d2d791cf3714f | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | f71529314dab4a4d8eca427e701d209e | - | service_name | neutron | - | service_type | network | - | url | http://controller:9696 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - network admin http://controller:9696 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 1ee14289c9374dffb5db92a5c112fc4e | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | f71529314dab4a4d8eca427e701d209e | - | service_name | neutron | - | service_type | network | - | url | http://controller:9696 | - +--------------+----------------------------------+ - - .. end - -Configure networking options ----------------------------- - -You can deploy the Networking service using one of two architectures -represented by options 1 and 2. - -Option 1 deploys the simplest possible architecture that only supports -attaching instances to provider (external) networks. No self-service (private) -networks, routers, or floating IP addresses. 
Only the ``admin`` or other -privileged user can manage provider networks. - -Option 2 augments option 1 with layer-3 services that support attaching -instances to self-service networks. The ``demo`` or other unprivileged -user can manage self-service networks including routers that provide -connectivity between self-service and provider networks. Additionally, -floating IP addresses provide connectivity to instances using self-service -networks from external networks such as the Internet. - -Self-service networks typically use overlay networks. Overlay network -protocols such as VXLAN include additional headers that increase overhead -and decrease space available for the payload or user data. Without knowledge -of the virtual network infrastructure, instances attempt to send packets -using the default Ethernet :term:`maximum transmission unit (MTU)` of 1500 -bytes. The Networking service automatically provides the correct MTU value -to instances via DHCP. However, some cloud images do not use DHCP or ignore -the DHCP MTU option and require configuration using metadata or a script. - -.. note:: - - Option 2 also supports attaching instances to provider networks. - -Choose one of the following networking options to configure services -specific to it. Afterwards, return here and proceed to -:ref:`neutron-controller-metadata-agent-rdo`. - -.. toctree:: - :maxdepth: 1 - - neutron-controller-install-option1.rst - neutron-controller-install-option2.rst - -.. _neutron-controller-metadata-agent-rdo: - -Configure the metadata agent ----------------------------- - -The :term:`metadata agent ` provides configuration information -such as credentials to instances. - -* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the metadata host and shared - secret: - - .. path /etc/neutron/metadata_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - nova_metadata_ip = controller - metadata_proxy_shared_secret = METADATA_SECRET - - .. end - - Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy. - -Configure the Compute service to use the Networking service ------------------------------------------------------------ - -* Edit the ``/etc/nova/nova.conf`` file and perform the following actions: - - * In the ``[neutron]`` section, configure access parameters, enable the - metadata proxy, and configure the secret: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [neutron] - # ... - url = http://controller:9696 - auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true - metadata_proxy_shared_secret = METADATA_SECRET - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - Replace ``METADATA_SECRET`` with the secret you chose for the metadata - proxy. - -Finalize installation ---------------------- - - -#. The Networking service initialization scripts expect a symbolic link - ``/etc/neutron/plugin.ini`` pointing to the ML2 plug-in configuration - file, ``/etc/neutron/plugins/ml2/ml2_conf.ini``. If this symbolic - link does not exist, create it using the following command: - - .. code-block:: console - - # ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini - - .. end - -#. Populate the database: - - .. 
code-block:: console - - # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron - - .. end - - .. note:: - - Database population occurs later for Networking because the script - requires complete server and plug-in configuration files. - -#. Restart the Compute API service: - - .. code-block:: console - - # systemctl restart openstack-nova-api.service - - .. end - -#. Start the Networking services and configure them to start when the system - boots. - - For both networking options: - - .. code-block:: console - - # systemctl enable neutron-server.service \ - neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ - neutron-metadata-agent.service - # systemctl start neutron-server.service \ - neutron-linuxbridge-agent.service neutron-dhcp-agent.service \ - neutron-metadata-agent.service - - .. end - - For networking option 2, also enable and start the layer-3 service: - - .. code-block:: console - - # systemctl enable neutron-l3-agent.service - # systemctl start neutron-l3-agent.service - - .. end - - - diff --git a/doc/install-guide/source/neutron-controller-install-ubuntu.rst b/doc/install-guide/source/neutron-controller-install-ubuntu.rst deleted file mode 100644 index a939a69bac..0000000000 --- a/doc/install-guide/source/neutron-controller-install-ubuntu.rst +++ /dev/null @@ -1,314 +0,0 @@ -Install and configure controller node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -Prerequisites -------------- - -Before you configure the OpenStack Networking (neutron) service, you -must create a database, service credentials, and API endpoints. - -#. To create the database, complete these steps: - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - # mysql - - .. end - - - - * Create the ``neutron`` database: - - .. code-block:: console - - MariaDB [(none)] CREATE DATABASE neutron; - - .. end - - * Grant proper access to the ``neutron`` database, replacing - ``NEUTRON_DBPASS`` with a suitable password: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \ - IDENTIFIED BY 'NEUTRON_DBPASS'; - - .. end - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to admin-only CLI - commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. To create the service credentials, complete these steps: - - * Create the ``neutron`` user: - - .. code-block:: console - - $ openstack user create --domain default --password-prompt neutron - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | fdb0f541e28141719b6a43c8944bf1fb | - | name | neutron | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``neutron`` user: - - .. code-block:: console - - $ openstack role add --project service --user neutron admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``neutron`` service entity: - - .. 
code-block:: console - - $ openstack service create --name neutron \ - --description "OpenStack Networking" network - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Networking | - | enabled | True | - | id | f71529314dab4a4d8eca427e701d209e | - | name | neutron | - | type | network | - +-------------+----------------------------------+ - - .. end - -#. Create the Networking service API endpoints: - - .. code-block:: console - - $ openstack endpoint create --region RegionOne \ - network public http://controller:9696 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 85d80a6d02fc4b7683f611d7fc1493a3 | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | f71529314dab4a4d8eca427e701d209e | - | service_name | neutron | - | service_type | network | - | url | http://controller:9696 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - network internal http://controller:9696 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 09753b537ac74422a68d2d791cf3714f | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | f71529314dab4a4d8eca427e701d209e | - | service_name | neutron | - | service_type | network | - | url | http://controller:9696 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne \ - network admin http://controller:9696 - - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 1ee14289c9374dffb5db92a5c112fc4e | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | f71529314dab4a4d8eca427e701d209e | - | service_name | neutron | - | service_type | network | - | url | http://controller:9696 | - +--------------+----------------------------------+ - - .. end - -Configure networking options ----------------------------- - -You can deploy the Networking service using one of two architectures -represented by options 1 and 2. - -Option 1 deploys the simplest possible architecture that only supports -attaching instances to provider (external) networks. No self-service (private) -networks, routers, or floating IP addresses. Only the ``admin`` or other -privileged user can manage provider networks. - -Option 2 augments option 1 with layer-3 services that support attaching -instances to self-service networks. The ``demo`` or other unprivileged -user can manage self-service networks including routers that provide -connectivity between self-service and provider networks. Additionally, -floating IP addresses provide connectivity to instances using self-service -networks from external networks such as the Internet. - -Self-service networks typically use overlay networks. Overlay network -protocols such as VXLAN include additional headers that increase overhead -and decrease space available for the payload or user data. Without knowledge -of the virtual network infrastructure, instances attempt to send packets -using the default Ethernet :term:`maximum transmission unit (MTU)` of 1500 -bytes. 
The Networking service automatically provides the correct MTU value -to instances via DHCP. However, some cloud images do not use DHCP or ignore -the DHCP MTU option and require configuration using metadata or a script. - -.. note:: - - Option 2 also supports attaching instances to provider networks. - -Choose one of the following networking options to configure services -specific to it. Afterwards, return here and proceed to -:ref:`neutron-controller-metadata-agent-ubuntu`. - -.. toctree:: - :maxdepth: 1 - - neutron-controller-install-option1.rst - neutron-controller-install-option2.rst - -.. _neutron-controller-metadata-agent-ubuntu: - -Configure the metadata agent ----------------------------- - -The :term:`metadata agent ` provides configuration information -such as credentials to instances. - -* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following - actions: - - * In the ``[DEFAULT]`` section, configure the metadata host and shared - secret: - - .. path /etc/neutron/metadata_agent.ini - .. code-block:: ini - - [DEFAULT] - # ... - nova_metadata_ip = controller - metadata_proxy_shared_secret = METADATA_SECRET - - .. end - - Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy. - -Configure the Compute service to use the Networking service ------------------------------------------------------------ - -* Edit the ``/etc/nova/nova.conf`` file and perform the following actions: - - * In the ``[neutron]`` section, configure access parameters, enable the - metadata proxy, and configure the secret: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [neutron] - # ... - url = http://controller:9696 - auth_url = http://controller:35357 - auth_type = password - project_domain_name = default - user_domain_name = default - region_name = RegionOne - project_name = service - username = neutron - password = NEUTRON_PASS - service_metadata_proxy = true - metadata_proxy_shared_secret = METADATA_SECRET - - .. end - - Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron`` - user in the Identity service. - - Replace ``METADATA_SECRET`` with the secret you chose for the metadata - proxy. - -Finalize installation ---------------------- - - - - -#. Populate the database: - - .. code-block:: console - - # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \ - --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron - - .. end - - .. note:: - - Database population occurs later for Networking because the script - requires complete server and plug-in configuration files. - -#. Restart the Compute API service: - - .. code-block:: console - - # service nova-api restart - - .. end - -#. Restart the Networking services. - - For both networking options: - - .. code-block:: console - - # service neutron-server restart - # service neutron-linuxbridge-agent restart - # service neutron-dhcp-agent restart - # service neutron-metadata-agent restart - - .. end - - For networking option 2, also restart the layer-3 service: - - .. code-block:: console - - # service neutron-l3-agent restart - - .. end - diff --git a/doc/install-guide/source/neutron-controller-install.rst b/doc/install-guide/source/neutron-controller-install.rst deleted file mode 100644 index 38d077cfe2..0000000000 --- a/doc/install-guide/source/neutron-controller-install.rst +++ /dev/null @@ -1,9 +0,0 @@ -Install and configure controller node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -.. 
toctree:: - - neutron-controller-install-debian - neutron-controller-install-obs - neutron-controller-install-rdo - neutron-controller-install-ubuntu diff --git a/doc/install-guide/source/neutron-next-steps.rst b/doc/install-guide/source/neutron-next-steps.rst deleted file mode 100644 index 643c4d92ae..0000000000 --- a/doc/install-guide/source/neutron-next-steps.rst +++ /dev/null @@ -1,7 +0,0 @@ -========== -Next steps -========== - -Your OpenStack environment now includes the core components necessary -to launch a basic instance. You can :ref:`launch-instance` or add more -OpenStack services to your environment. diff --git a/doc/install-guide/source/neutron-verify-option1.rst b/doc/install-guide/source/neutron-verify-option1.rst deleted file mode 100644 index cd6a74375d..0000000000 --- a/doc/install-guide/source/neutron-verify-option1.rst +++ /dev/null @@ -1,22 +0,0 @@ -Networking Option 1: Provider networks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -* List agents to verify successful launch of the neutron agents: - - .. code-block:: console - - $ openstack network agent list - - +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ - | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | - +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ - | 0400c2f6-4d3b-44bc-89fa-99093432f3bf | Metadata agent | controller | None | True | UP | neutron-metadata-agent | - | 83cf853d-a2f2-450a-99d7-e9c6fc08f4c3 | DHCP agent | controller | nova | True | UP | neutron-dhcp-agent | - | ec302e51-6101-43cf-9f19-88a78613cbee | Linux bridge agent | compute | None | True | UP | neutron-linuxbridge-agent | - | fcb9bc6e-22b1-43bc-9054-272dd517d025 | Linux bridge agent | controller | None | True | UP | neutron-linuxbridge-agent | - +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ - - .. end - - The output should indicate three agents on the controller node and one - agent on each compute node. diff --git a/doc/install-guide/source/neutron-verify-option2.rst b/doc/install-guide/source/neutron-verify-option2.rst deleted file mode 100644 index 37eb8f807e..0000000000 --- a/doc/install-guide/source/neutron-verify-option2.rst +++ /dev/null @@ -1,23 +0,0 @@ -Networking Option 2: Self-service networks -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -* List agents to verify successful launch of the neutron agents: - - .. 
code-block:: console - - $ openstack network agent list - - +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ - | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | - +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ - | f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent | controller | None | True | UP | neutron-metadata-agent | - | 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | None | True | UP | neutron-linuxbridge-agent | - | 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1 | None | True | UP | neutron-linuxbridge-agent | - | 830344ff-dc36-4956-84f4-067af667a0dc | L3 agent | controller | nova | True | UP | neutron-l3-agent | - | dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent | controller | nova | True | UP | neutron-dhcp-agent | - +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+ - - .. end - - The output should indicate four agents on the controller node and one - agent on each compute node. diff --git a/doc/install-guide/source/neutron-verify.rst b/doc/install-guide/source/neutron-verify.rst deleted file mode 100644 index 771d2249ea..0000000000 --- a/doc/install-guide/source/neutron-verify.rst +++ /dev/null @@ -1,128 +0,0 @@ -Verify operation -~~~~~~~~~~~~~~~~ - -.. note:: - - Perform these commands on the controller node. - -#. Source the ``admin`` credentials to gain access to admin-only CLI - commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. List loaded extensions to verify successful launch of the - ``neutron-server`` process: - - .. code-block:: console - - $ openstack extension list --network - - +---------------------------+---------------------------+----------------------------+ - | Name | Alias | Description | - +---------------------------+---------------------------+----------------------------+ - | Default Subnetpools | default-subnetpools | Provides ability to mark | - | | | and use a subnetpool as | - | | | the default | - | Availability Zone | availability_zone | The availability zone | - | | | extension. | - | Network Availability Zone | network_availability_zone | Availability zone support | - | | | for network. | - | Port Binding | binding | Expose port bindings of a | - | | | virtual port to external | - | | | application | - | agent | agent | The agent management | - | | | extension. | - | Subnet Allocation | subnet_allocation | Enables allocation of | - | | | subnets from a subnet pool | - | DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among | - | | | dhcp agents | - | Tag support | tag | Enables to set tag on | - | | | resources. | - | Neutron external network | external-net | Adds external network | - | | | attribute to network | - | | | resource. | - | Neutron Service Flavors | flavors | Flavor specification for | - | | | Neutron advanced services | - | Network MTU | net-mtu | Provides MTU attribute for | - | | | a network resource. | - | Network IP Availability | network-ip-availability | Provides IP availability | - | | | data for each network and | - | | | subnet. 
| - | Quota management support | quotas | Expose functions for | - | | | quotas management per | - | | | tenant | - | Provider Network | provider | Expose mapping of virtual | - | | | networks to physical | - | | | networks | - | Multi Provider Network | multi-provider | Expose mapping of virtual | - | | | networks to multiple | - | | | physical networks | - | Address scope | address-scope | Address scopes extension. | - | Subnet service types | subnet-service-types | Provides ability to set | - | | | the subnet service_types | - | | | field | - | Resource timestamps | standard-attr-timestamp | Adds created_at and | - | | | updated_at fields to all | - | | | Neutron resources that | - | | | have Neutron standard | - | | | attributes. | - | Neutron Service Type | service-type | API for retrieving service | - | Management | | providers for Neutron | - | | | advanced services | - | Tag support for | tag-ext | Extends tag support to | - | resources: subnet, | | more L2 and L3 resources. | - | subnetpool, port, router | | | - | Neutron Extra DHCP opts | extra_dhcp_opt | Extra options | - | | | configuration for DHCP. | - | | | For example PXE boot | - | | | options to DHCP clients | - | | | can be specified (e.g. | - | | | tftp-server, server-ip- | - | | | address, bootfile-name) | - | Resource revision numbers | standard-attr-revisions | This extension will | - | | | display the revision | - | | | number of neutron | - | | | resources. | - | Pagination support | pagination | Extension that indicates | - | | | that pagination is | - | | | enabled. | - | Sorting support | sorting | Extension that indicates | - | | | that sorting is enabled. | - | security-group | security-group | The security groups | - | | | extension. | - | RBAC Policies | rbac-policies | Allows creation and | - | | | modification of policies | - | | | that control tenant access | - | | | to resources. | - | standard-attr-description | standard-attr-description | Extension to add | - | | | descriptions to standard | - | | | attributes | - | Port Security | port-security | Provides port security | - | Allowed Address Pairs | allowed-address-pairs | Provides allowed address | - | | | pairs | - | project_id field enabled | project-id | Extension that indicates | - | | | that project_id field is | - | | | enabled. | - +---------------------------+---------------------------+----------------------------+ - - .. end - - .. note:: - - Actual output may differ slightly from this example. - - -You can perform further testing of your networking using the -`neutron-sanity-check command line client `_. - -Use the verification section for the networking option that you chose to -deploy. - -.. toctree:: - - neutron-verify-option1.rst - neutron-verify-option2.rst diff --git a/doc/install-guide/source/neutron.rst b/doc/install-guide/source/neutron.rst deleted file mode 100644 index fdf200bce4..0000000000 --- a/doc/install-guide/source/neutron.rst +++ /dev/null @@ -1,23 +0,0 @@ -.. _networking: - -================== -Networking service -================== - -.. toctree:: - :maxdepth: 1 - - common/get-started-networking.rst - neutron-concepts.rst - neutron-controller-install.rst - neutron-compute-install.rst - neutron-verify.rst - neutron-next-steps.rst - -This chapter explains how to install and configure the Networking -service (neutron) using the :ref:`provider networks ` or -:ref:`self-service networks ` option. 
- -For more information about the Networking service including virtual -networking components, layout, and traffic flows, see the -`OpenStack Networking Guide `__. diff --git a/doc/install-guide/source/nova-compute-install-debian.rst b/doc/install-guide/source/nova-compute-install-debian.rst deleted file mode 100644 index fcd6a10dca..0000000000 --- a/doc/install-guide/source/nova-compute-install-debian.rst +++ /dev/null @@ -1,298 +0,0 @@ -Install and configure a compute node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Compute -service on a compute node. The service supports several -:term:`hypervisors ` to deploy :term:`instances ` -or :term:`VMs `. For simplicity, this configuration -uses the :term:`QEMU ` hypervisor with the -:term:`KVM ` extension -on compute nodes that support hardware acceleration for virtual machines. -On legacy hardware, this configuration uses the generic QEMU hypervisor. -You can follow these instructions with minor modifications to horizontally -scale your environment with additional compute nodes. - -.. note:: - - This section assumes that you are following the instructions in - this guide step-by-step to configure the first compute node. If you - want to configure additional compute nodes, prepare them in a similar - fashion to the first compute node in the :ref:`example architectures - ` section. Each additional compute node - requires a unique IP address. - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - - - -#. Install the packages: - - .. code-block:: console - - # apt install nova-compute - - .. end - - - -Respond to prompts for debconf. - -.. :doc:`database management `, - :doc:`Identity service credentials `, - and :doc:`message broker credentials `. Make - sure that you do not activate database management handling by debconf, - as a compute node should not access the central database. - - -2. Edit the ``/etc/nova/nova.conf`` file and - complete the following actions: - - - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for - the ``openstack`` account in ``RabbitMQ``. - - * In the ``[api]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [api] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the - ``nova`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - -* In the ``[DEFAULT]`` section, check that the ``my_ip`` option - is correctly set (this value is handled by the config and postinst - scripts of the ``nova-common`` package using debconf): - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS - - .. 
end - - Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address - of the management network interface on your compute node, - typically 10.0.0.31 for the first node in the - :ref:`example architecture `. - - - - * In the ``[vnc]`` section, enable and configure remote console access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [vnc] - # ... - enabled = True - vncserver_listen = 0.0.0.0 - vncserver_proxyclient_address = $my_ip - novncproxy_base_url = http://controller:6080/vnc_auto.html - - .. end - - The server component listens on all IP addresses and the proxy - component only listens on the management interface IP address of - the compute node. The base URL indicates the location where you - can use a web browser to access remote consoles of instances - on this compute node. - - .. note:: - - If the web browser to access remote consoles resides on - a host that cannot resolve the ``controller`` hostname, - you must replace ``controller`` with the management - interface IP address of the controller node. - - * In the ``[glance]`` section, configure the location of the - Image service API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [glance] - # ... - api_servers = http://controller:9292 - - .. end - - - - - - * In the ``[placement]`` section, configure the Placement API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [placement] - # ... - os_region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:35357/v3 - username = placement - password = PLACEMENT_PASS - - Replace ``PLACEMENT_PASS`` with the password you choose for the - ``placement`` user in the Identity service. Comment out any other options - in the ``[placement]`` section. - - -3. Ensure the kernel module ``nbd`` is loaded. - - .. code-block:: console - - # modprobe nbd - - .. end - -4. Ensure the module loads on every boot by adding ``nbd`` - to the ``/etc/modules-load.d/nbd.conf`` file. - - -Finalize installation ---------------------- - -#. Determine whether your compute node supports hardware acceleration - for virtual machines: - - .. code-block:: console - - $ egrep -c '(vmx|svm)' /proc/cpuinfo - - .. end - - If this command returns a value of ``one or greater``, your compute - node supports hardware acceleration which typically requires no - additional configuration. - - If this command returns a value of ``zero``, your compute node does - not support hardware acceleration and you must configure ``libvirt`` - to use QEMU instead of KVM. - - - - -* Replace the ``nova-compute-kvm`` package with ``nova-compute-qemu`` - which automatically changes the ``/etc/nova/nova-compute.conf`` - file and installs the necessary dependencies: - - .. code-block:: console - - # apt install nova-compute-qemu - - .. end - - - - -2. Restart the Compute service: - - .. code-block:: console - - # service nova-compute restart - - .. end - - -.. note:: - - If the ``nova-compute`` service fails to start, check - ``/var/log/nova/nova-compute.log``. The error message - ``AMQP server on controller:5672 is unreachable`` likely indicates that - the firewall on the controller node is preventing access to port 5672. - Configure the firewall to open port 5672 on the controller node and - restart ``nova-compute`` service on the compute node. - -Add the compute node to the cell database ------------------------------------------ - -.. important:: - - Run the following commands on the **controller** node. 
- -#. Source the admin credentials to enable admin-only CLI commands, then - confirm there are compute hosts in the database: - - .. code-block:: console - - $ . admin-openrc - - $ openstack compute service list --service nova-compute - +----+-------+--------------+------+-------+---------+----------------------------+ - | ID | Host | Binary | Zone | State | Status | Updated At | - +----+-------+--------------+------+-------+---------+----------------------------+ - | 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 | - +----+-------+--------------+------+-------+---------+----------------------------+ - -#. Discover compute hosts: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova - - Found 2 cell mappings. - Skipping cell0 since it does not contain hosts. - Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc - Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc - Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 - Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 - - .. note:: - - When you add new compute nodes, you must run ``nova-manage cell_v2 - discover_hosts`` on the controller node to register those new compute - nodes. Alternatively, you can set an appropriate interval in - ``/etc/nova/nova.conf``: - - .. code-block:: ini - - [scheduler] - discover_hosts_in_cells_interval = 300 diff --git a/doc/install-guide/source/nova-compute-install-obs.rst b/doc/install-guide/source/nova-compute-install-obs.rst deleted file mode 100644 index 31dd0488ee..0000000000 --- a/doc/install-guide/source/nova-compute-install-obs.rst +++ /dev/null @@ -1,347 +0,0 @@ -Install and configure a compute node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Compute -service on a compute node. The service supports several -:term:`hypervisors ` to deploy :term:`instances ` -or :term:`VMs `. For simplicity, this configuration -uses the :term:`QEMU ` hypervisor with the -:term:`KVM ` extension -on compute nodes that support hardware acceleration for virtual machines. -On legacy hardware, this configuration uses the generic QEMU hypervisor. -You can follow these instructions with minor modifications to horizontally -scale your environment with additional compute nodes. - -.. note:: - - This section assumes that you are following the instructions in - this guide step-by-step to configure the first compute node. If you - want to configure additional compute nodes, prepare them in a similar - fashion to the first compute node in the :ref:`example architectures - ` section. Each additional compute node - requires a unique IP address. - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - -#. Install the packages: - - .. code-block:: console - - # zypper install openstack-nova-compute genisoimage qemu-kvm libvirt - - .. end - - - - - -2. Edit the ``/etc/nova/nova.conf`` file and - complete the following actions: - - -* In the ``[DEFAULT]`` section, enable only the compute and - metadata APIs: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - enabled_apis = osapi_compute,metadata - - .. end - - - -* In the ``[DEFAULT]`` section, set the ``compute_driver``: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... 
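-      # Use the libvirt driver; it manages KVM and QEMU instances through the libvirt API.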
- compute_driver = libvirt.LibvirtDriver - - .. end - - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for - the ``openstack`` account in ``RabbitMQ``. - - * In the ``[api]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [api] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the - ``nova`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - - -* In the ``[DEFAULT]`` section, configure the ``my_ip`` option: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS - - .. end - - Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address - of the management network interface on your compute node, - typically 10.0.0.31 for the first node in the - :ref:`example architecture `. - -* In the ``[DEFAULT]`` section, enable support for the Networking service: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - use_neutron = True - firewall_driver = nova.virt.firewall.NoopFirewallDriver - - .. end - - .. note:: - - By default, Compute uses an internal firewall service. Since - Networking includes a firewall service, you must disable the Compute - firewall service by using the - ``nova.virt.firewall.NoopFirewallDriver`` firewall driver. - - - * In the ``[vnc]`` section, enable and configure remote console access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [vnc] - # ... - enabled = True - vncserver_listen = 0.0.0.0 - vncserver_proxyclient_address = $my_ip - novncproxy_base_url = http://controller:6080/vnc_auto.html - - .. end - - The server component listens on all IP addresses and the proxy - component only listens on the management interface IP address of - the compute node. The base URL indicates the location where you - can use a web browser to access remote consoles of instances - on this compute node. - - .. note:: - - If the web browser to access remote consoles resides on - a host that cannot resolve the ``controller`` hostname, - you must replace ``controller`` with the management - interface IP address of the controller node. - - * In the ``[glance]`` section, configure the location of the - Image service API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [glance] - # ... - api_servers = http://controller:9292 - - .. end - - -* In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/run/nova - - .. end - - - - - - * In the ``[placement]`` section, configure the Placement API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [placement] - # ... 
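-      # Region and Identity credentials that nova-compute uses to reach the Placement API.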
- os_region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:35357/v3 - username = placement - password = PLACEMENT_PASS - - Replace ``PLACEMENT_PASS`` with the password you choose for the - ``placement`` user in the Identity service. Comment out any other options - in the ``[placement]`` section. - - -3. Ensure the kernel module ``nbd`` is loaded. - - .. code-block:: console - - # modprobe nbd - - .. end - -4. Ensure the module loads on every boot by adding ``nbd`` - to the ``/etc/modules-load.d/nbd.conf`` file. - - -Finalize installation ---------------------- - -#. Determine whether your compute node supports hardware acceleration - for virtual machines: - - .. code-block:: console - - $ egrep -c '(vmx|svm)' /proc/cpuinfo - - .. end - - If this command returns a value of ``one or greater``, your compute - node supports hardware acceleration which typically requires no - additional configuration. - - If this command returns a value of ``zero``, your compute node does - not support hardware acceleration and you must configure ``libvirt`` - to use QEMU instead of KVM. - - -* Edit the ``[libvirt]`` section in the - ``/etc/nova/nova.conf`` file as follows: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [libvirt] - # ... - virt_type = qemu - - .. end - - - - - -2. Start the Compute service including its dependencies and configure - them to start automatically when the system boots: - - .. code-block:: console - - # systemctl enable libvirtd.service openstack-nova-compute.service - # systemctl start libvirtd.service openstack-nova-compute.service - - .. end - - - -.. note:: - - If the ``nova-compute`` service fails to start, check - ``/var/log/nova/nova-compute.log``. The error message - ``AMQP server on controller:5672 is unreachable`` likely indicates that - the firewall on the controller node is preventing access to port 5672. - Configure the firewall to open port 5672 on the controller node and - restart ``nova-compute`` service on the compute node. - -Add the compute node to the cell database ------------------------------------------ - -.. important:: - - Run the following commands on the **controller** node. - -#. Source the admin credentials to enable admin-only CLI commands, then - confirm there are compute hosts in the database: - - .. code-block:: console - - $ . admin-openrc - - $ openstack compute service list --service nova-compute - +----+-------+--------------+------+-------+---------+----------------------------+ - | ID | Host | Binary | Zone | State | Status | Updated At | - +----+-------+--------------+------+-------+---------+----------------------------+ - | 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 | - +----+-------+--------------+------+-------+---------+----------------------------+ - -#. Discover compute hosts: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova - - Found 2 cell mappings. - Skipping cell0 since it does not contain hosts. - Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc - Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc - Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 - Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 - - .. 
note:: - - When you add new compute nodes, you must run ``nova-manage cell_v2 - discover_hosts`` on the controller node to register those new compute - nodes. Alternatively, you can set an appropriate interval in - ``/etc/nova/nova.conf``: - - .. code-block:: ini - - [scheduler] - discover_hosts_in_cells_interval = 300 diff --git a/doc/install-guide/source/nova-compute-install-rdo.rst b/doc/install-guide/source/nova-compute-install-rdo.rst deleted file mode 100644 index ba09ef3c7e..0000000000 --- a/doc/install-guide/source/nova-compute-install-rdo.rst +++ /dev/null @@ -1,323 +0,0 @@ -Install and configure a compute node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Compute -service on a compute node. The service supports several -:term:`hypervisors ` to deploy :term:`instances ` -or :term:`VMs `. For simplicity, this configuration -uses the :term:`QEMU ` hypervisor with the -:term:`KVM ` extension -on compute nodes that support hardware acceleration for virtual machines. -On legacy hardware, this configuration uses the generic QEMU hypervisor. -You can follow these instructions with minor modifications to horizontally -scale your environment with additional compute nodes. - -.. note:: - - This section assumes that you are following the instructions in - this guide step-by-step to configure the first compute node. If you - want to configure additional compute nodes, prepare them in a similar - fashion to the first compute node in the :ref:`example architectures - ` section. Each additional compute node - requires a unique IP address. - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - - -#. Install the packages: - - .. code-block:: console - - # yum install openstack-nova-compute - - .. end - - - - -2. Edit the ``/etc/nova/nova.conf`` file and - complete the following actions: - - -* In the ``[DEFAULT]`` section, enable only the compute and - metadata APIs: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - enabled_apis = osapi_compute,metadata - - .. end - - - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for - the ``openstack`` account in ``RabbitMQ``. - - * In the ``[api]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [api] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the - ``nova`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - - -* In the ``[DEFAULT]`` section, configure the ``my_ip`` option: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS - - .. 
end - - Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address - of the management network interface on your compute node, - typically 10.0.0.31 for the first node in the - :ref:`example architecture `. - -* In the ``[DEFAULT]`` section, enable support for the Networking service: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - use_neutron = True - firewall_driver = nova.virt.firewall.NoopFirewallDriver - - .. end - - .. note:: - - By default, Compute uses an internal firewall service. Since - Networking includes a firewall service, you must disable the Compute - firewall service by using the - ``nova.virt.firewall.NoopFirewallDriver`` firewall driver. - - - * In the ``[vnc]`` section, enable and configure remote console access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [vnc] - # ... - enabled = True - vncserver_listen = 0.0.0.0 - vncserver_proxyclient_address = $my_ip - novncproxy_base_url = http://controller:6080/vnc_auto.html - - .. end - - The server component listens on all IP addresses and the proxy - component only listens on the management interface IP address of - the compute node. The base URL indicates the location where you - can use a web browser to access remote consoles of instances - on this compute node. - - .. note:: - - If the web browser to access remote consoles resides on - a host that cannot resolve the ``controller`` hostname, - you must replace ``controller`` with the management - interface IP address of the controller node. - - * In the ``[glance]`` section, configure the location of the - Image service API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [glance] - # ... - api_servers = http://controller:9292 - - .. end - - - -* In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/lib/nova/tmp - - .. end - - - - - * In the ``[placement]`` section, configure the Placement API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [placement] - # ... - os_region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:35357/v3 - username = placement - password = PLACEMENT_PASS - - Replace ``PLACEMENT_PASS`` with the password you choose for the - ``placement`` user in the Identity service. Comment out any other options - in the ``[placement]`` section. - - -Finalize installation ---------------------- - -#. Determine whether your compute node supports hardware acceleration - for virtual machines: - - .. code-block:: console - - $ egrep -c '(vmx|svm)' /proc/cpuinfo - - .. end - - If this command returns a value of ``one or greater``, your compute - node supports hardware acceleration which typically requires no - additional configuration. - - If this command returns a value of ``zero``, your compute node does - not support hardware acceleration and you must configure ``libvirt`` - to use QEMU instead of KVM. - - -* Edit the ``[libvirt]`` section in the - ``/etc/nova/nova.conf`` file as follows: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [libvirt] - # ... - virt_type = qemu - - .. end - - - - - -2. Start the Compute service including its dependencies and configure - them to start automatically when the system boots: - - .. 
code-block:: console - - # systemctl enable libvirtd.service openstack-nova-compute.service - # systemctl start libvirtd.service openstack-nova-compute.service - - .. end - - - -.. note:: - - If the ``nova-compute`` service fails to start, check - ``/var/log/nova/nova-compute.log``. The error message - ``AMQP server on controller:5672 is unreachable`` likely indicates that - the firewall on the controller node is preventing access to port 5672. - Configure the firewall to open port 5672 on the controller node and - restart ``nova-compute`` service on the compute node. - -Add the compute node to the cell database ------------------------------------------ - -.. important:: - - Run the following commands on the **controller** node. - -#. Source the admin credentials to enable admin-only CLI commands, then - confirm there are compute hosts in the database: - - .. code-block:: console - - $ . admin-openrc - - $ openstack compute service list --service nova-compute - +----+-------+--------------+------+-------+---------+----------------------------+ - | ID | Host | Binary | Zone | State | Status | Updated At | - +----+-------+--------------+------+-------+---------+----------------------------+ - | 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 | - +----+-------+--------------+------+-------+---------+----------------------------+ - -#. Discover compute hosts: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova - - Found 2 cell mappings. - Skipping cell0 since it does not contain hosts. - Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc - Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc - Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 - Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 - - .. note:: - - When you add new compute nodes, you must run ``nova-manage cell_v2 - discover_hosts`` on the controller node to register those new compute - nodes. Alternatively, you can set an appropriate interval in - ``/etc/nova/nova.conf``: - - .. code-block:: ini - - [scheduler] - discover_hosts_in_cells_interval = 300 diff --git a/doc/install-guide/source/nova-compute-install-ubuntu.rst b/doc/install-guide/source/nova-compute-install-ubuntu.rst deleted file mode 100644 index bf7c97a0bc..0000000000 --- a/doc/install-guide/source/nova-compute-install-ubuntu.rst +++ /dev/null @@ -1,316 +0,0 @@ -Install and configure a compute node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Compute -service on a compute node. The service supports several -:term:`hypervisors ` to deploy :term:`instances ` -or :term:`VMs `. For simplicity, this configuration -uses the :term:`QEMU ` hypervisor with the -:term:`KVM ` extension -on compute nodes that support hardware acceleration for virtual machines. -On legacy hardware, this configuration uses the generic QEMU hypervisor. -You can follow these instructions with minor modifications to horizontally -scale your environment with additional compute nodes. - -.. note:: - - This section assumes that you are following the instructions in - this guide step-by-step to configure the first compute node. If you - want to configure additional compute nodes, prepare them in a similar - fashion to the first compute node in the :ref:`example architectures - ` section. Each additional compute node - requires a unique IP address. 
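-Before installing packages, you can optionally confirm the node's management
-addressing and name resolution. The following is a minimal sketch that assumes
-the example architecture (the first compute node typically uses 10.0.0.31 on
-the management network); adjust the interface, addresses, and hostnames for
-your environment:
-
-.. code-block:: console
-
-   # ip addr show
-   # ping -c 4 controller
-
-.. end
-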
- -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - - - -#. Install the packages: - - .. code-block:: console - - # apt install nova-compute - - .. end - - - -2. Edit the ``/etc/nova/nova.conf`` file and - complete the following actions: - - - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for - the ``openstack`` account in ``RabbitMQ``. - - * In the ``[api]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [api] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the - ``nova`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - - -* In the ``[DEFAULT]`` section, configure the ``my_ip`` option: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS - - .. end - - Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address - of the management network interface on your compute node, - typically 10.0.0.31 for the first node in the - :ref:`example architecture `. - -* In the ``[DEFAULT]`` section, enable support for the Networking service: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - use_neutron = True - firewall_driver = nova.virt.firewall.NoopFirewallDriver - - .. end - - .. note:: - - By default, Compute uses an internal firewall service. Since - Networking includes a firewall service, you must disable the Compute - firewall service by using the - ``nova.virt.firewall.NoopFirewallDriver`` firewall driver. - - - * In the ``[vnc]`` section, enable and configure remote console access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [vnc] - # ... - enabled = True - vncserver_listen = 0.0.0.0 - vncserver_proxyclient_address = $my_ip - novncproxy_base_url = http://controller:6080/vnc_auto.html - - .. end - - The server component listens on all IP addresses and the proxy - component only listens on the management interface IP address of - the compute node. The base URL indicates the location where you - can use a web browser to access remote consoles of instances - on this compute node. - - .. note:: - - If the web browser to access remote consoles resides on - a host that cannot resolve the ``controller`` hostname, - you must replace ``controller`` with the management - interface IP address of the controller node. - - * In the ``[glance]`` section, configure the location of the - Image service API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [glance] - # ... - api_servers = http://controller:9292 - - .. end - - - -* In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/lib/nova/tmp - - .. end - - - -.. 
todo: - - https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1506667 - -* Due to a packaging bug, remove the ``log_dir`` option from the - ``[DEFAULT]`` section. - - - - * In the ``[placement]`` section, configure the Placement API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [placement] - # ... - os_region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:35357/v3 - username = placement - password = PLACEMENT_PASS - - Replace ``PLACEMENT_PASS`` with the password you choose for the - ``placement`` user in the Identity service. Comment out any other options - in the ``[placement]`` section. - - -Finalize installation ---------------------- - -#. Determine whether your compute node supports hardware acceleration - for virtual machines: - - .. code-block:: console - - $ egrep -c '(vmx|svm)' /proc/cpuinfo - - .. end - - If this command returns a value of ``one or greater``, your compute - node supports hardware acceleration which typically requires no - additional configuration. - - If this command returns a value of ``zero``, your compute node does - not support hardware acceleration and you must configure ``libvirt`` - to use QEMU instead of KVM. - - - -* Edit the ``[libvirt]`` section in the - ``/etc/nova/nova-compute.conf`` file as follows: - - .. path /etc/nova/nova-compute.conf - .. code-block:: ini - - [libvirt] - # ... - virt_type = qemu - - .. end - - - - - -2. Restart the Compute service: - - .. code-block:: console - - # service nova-compute restart - - .. end - - -.. note:: - - If the ``nova-compute`` service fails to start, check - ``/var/log/nova/nova-compute.log``. The error message - ``AMQP server on controller:5672 is unreachable`` likely indicates that - the firewall on the controller node is preventing access to port 5672. - Configure the firewall to open port 5672 on the controller node and - restart ``nova-compute`` service on the compute node. - -Add the compute node to the cell database ------------------------------------------ - -.. important:: - - Run the following commands on the **controller** node. - -#. Source the admin credentials to enable admin-only CLI commands, then - confirm there are compute hosts in the database: - - .. code-block:: console - - $ . admin-openrc - - $ openstack compute service list --service nova-compute - +----+-------+--------------+------+-------+---------+----------------------------+ - | ID | Host | Binary | Zone | State | Status | Updated At | - +----+-------+--------------+------+-------+---------+----------------------------+ - | 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 | - +----+-------+--------------+------+-------+---------+----------------------------+ - -#. Discover compute hosts: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova - - Found 2 cell mappings. - Skipping cell0 since it does not contain hosts. - Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc - Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc - Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 - Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3 - - .. note:: - - When you add new compute nodes, you must run ``nova-manage cell_v2 - discover_hosts`` on the controller node to register those new compute - nodes. 
Alternatively, you can set an appropriate interval in - ``/etc/nova/nova.conf``: - - .. code-block:: ini - - [scheduler] - discover_hosts_in_cells_interval = 300 diff --git a/doc/install-guide/source/nova-compute-install.rst b/doc/install-guide/source/nova-compute-install.rst deleted file mode 100644 index 078af2191f..0000000000 --- a/doc/install-guide/source/nova-compute-install.rst +++ /dev/null @@ -1,27 +0,0 @@ -Install and configure a compute node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the Compute -service on a compute node. The service supports several -:term:`hypervisors ` to deploy :term:`instances ` -or :term:`VMs `. For simplicity, this configuration -uses the :term:`QEMU ` hypervisor with the -:term:`KVM ` extension -on compute nodes that support hardware acceleration for virtual machines. -On legacy hardware, this configuration uses the generic QEMU hypervisor. -You can follow these instructions with minor modifications to horizontally -scale your environment with additional compute nodes. - -.. note:: - - This section assumes that you are following the instructions in - this guide step-by-step to configure the first compute node. If you - want to configure additional compute nodes, prepare them in a similar - fashion to the first compute node in the :ref:`example architectures - ` section. Each additional compute node - requires a unique IP address. - -.. toctree:: - :glob: - - nova-compute-install-* diff --git a/doc/install-guide/source/nova-controller-install-debian.rst b/doc/install-guide/source/nova-controller-install-debian.rst deleted file mode 100644 index 163a14487e..0000000000 --- a/doc/install-guide/source/nova-controller-install-debian.rst +++ /dev/null @@ -1,594 +0,0 @@ -Install and configure controller node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the -Compute service, code-named nova, on the controller node. - -Prerequisites -------------- - -Before you install and configure the Compute service, you must -create databases, service credentials, and API endpoints. - -#. To create the databases, complete these steps: - - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - $ mysql -u root -p - - .. end - - - * Create the ``nova_api``, ``nova``, and ``nova_cell0`` databases: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE nova_api; - MariaDB [(none)]> CREATE DATABASE nova; - MariaDB [(none)]> CREATE DATABASE nova_cell0; - - .. end - - * Grant proper access to the databases: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - - .. end - - Replace ``NOVA_DBPASS`` with a suitable password. - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to - admin-only CLI commands: - - .. code-block:: console - - $ . admin-openrc - - .. 
end - -#. Create the Compute service credentials: - - * Create the ``nova`` user: - - .. code-block:: console - - $ openstack user create --domain default --password-prompt nova - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | 8a7dbf5279404537b1c7b86c033620fe | - | name | nova | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``nova`` user: - - .. code-block:: console - - $ openstack role add --project service --user nova admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``nova`` service entity: - - .. code-block:: console - - $ openstack service create --name nova \ - --description "OpenStack Compute" compute - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Compute | - | enabled | True | - | id | 060d59eac51b4594815603d75a00aba2 | - | name | nova | - | type | compute | - +-------------+----------------------------------+ - - .. end - -#. Create the Compute API service endpoints: - - .. code-block:: console - - $ openstack endpoint create --region RegionOne \ - compute public http://controller:8774/v2.1 - - +--------------+-------------------------------------------+ - | Field | Value | - +--------------+-------------------------------------------+ - | enabled | True | - | id | 3c1caa473bfe4390a11e7177894bcc7b | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 060d59eac51b4594815603d75a00aba2 | - | service_name | nova | - | service_type | compute | - | url | http://controller:8774/v2.1 | - +--------------+-------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - compute internal http://controller:8774/v2.1 - - +--------------+-------------------------------------------+ - | Field | Value | - +--------------+-------------------------------------------+ - | enabled | True | - | id | e3c918de680746a586eac1f2d9bc10ab | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 060d59eac51b4594815603d75a00aba2 | - | service_name | nova | - | service_type | compute | - | url | http://controller:8774/v2.1 | - +--------------+-------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - compute admin http://controller:8774/v2.1 - - +--------------+-------------------------------------------+ - | Field | Value | - +--------------+-------------------------------------------+ - | enabled | True | - | id | 38f7af91666a47cfb97b4dc790b94424 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 060d59eac51b4594815603d75a00aba2 | - | service_name | nova | - | service_type | compute | - | url | http://controller:8774/v2.1 | - +--------------+-------------------------------------------+ - - .. end - -#. Create a Placement service user using your chosen ``PLACEMENT_PASS``: - - .. 
code-block:: console - - $ openstack user create --domain default --password-prompt placement - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | fa742015a6494a949f67629884fc7ec8 | - | name | placement | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - -#. Add the Placement user to the service project with the admin role: - - .. code-block:: console - - $ openstack role add --project service --user placement admin - - .. note:: - - This command provides no output. - -#. Create the Placement API entry in the service catalog: - - .. code-block:: console - - $ openstack service create --name placement --description "Placement API" placement - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | Placement API | - | enabled | True | - | id | 2d1a27022e6e4185b86adac4444c495f | - | name | placement | - | type | placement | - +-------------+----------------------------------+ - -#. Create the Placement API service endpoints: - - .. code-block:: console - - $ openstack endpoint create --region RegionOne placement public http://controller:8778 - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 2b1b2637908b4137a9c2e0470487cbc0 | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 2d1a27022e6e4185b86adac4444c495f | - | service_name | placement | - | service_type | placement | - | url | http://controller:8778 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne placement internal http://controller:8778 - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 02bcda9a150a4bd7993ff4879df971ab | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 2d1a27022e6e4185b86adac4444c495f | - | service_name | placement | - | service_type | placement | - | url | http://controller:8778 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne placement admin http://controller:8778 - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 3d71177b9e0f406f98cbff198d74b182 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 2d1a27022e6e4185b86adac4444c495f | - | service_name | placement | - | service_type | placement | - | url | http://controller:8778 | - +--------------+----------------------------------+ - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - - - - - -#. Install the packages: - - .. code-block:: console - - # apt install nova-api nova-conductor nova-consoleauth \ - nova-consoleproxy nova-scheduler - - .. end - - .. note:: - - ``nova-api-metadata`` is included in the ``nova-api`` package, - and can be selected through debconf. - - .. 
note:: - - A unique ``nova-consoleproxy`` package provides the - ``nova-novncproxy``, ``nova-spicehtml5proxy``, and - ``nova-xvpvncproxy`` packages. To select packages, edit the - ``/etc/default/nova-consoleproxy`` file or use the debconf interface. - You can also manually edit the ``/etc/default/nova-consoleproxy`` - file, and stop and start the console daemons. - - -2. Edit the ``/etc/nova/nova.conf`` file and - complete the following actions: - - - * In the ``[api_database]`` and ``[database]`` sections, configure - database access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [api_database] - # ... - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api - - [database] - # ... - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova - - .. end - - Replace ``NOVA_DBPASS`` with the password you chose for - the Compute databases. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in ``RabbitMQ``. - - * In the ``[api]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [api] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the - ``nova`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to - use the management interface IP address of the controller node: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = 10.0.0.11 - - .. end - - -* In the ``[vnc]`` section, configure the VNC proxy to use the management - interface IP address of the controller node: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [vnc] - enabled = true - # ... - vncserver_listen = $my_ip - vncserver_proxyclient_address = $my_ip - - .. end - - -* In the ``[spice]`` section, disable spice: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [spice] - enabled = false - - .. end - - -* In the ``[glance]`` section, configure the location of the - Image service API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [glance] - # ... - api_servers = http://controller:9292 - - .. end - - - - - -* In the ``[placement]`` section, configure the Placement API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [placement] - # ... - os_region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:35357/v3 - username = placement - password = PLACEMENT_PASS - - Replace ``PLACEMENT_PASS`` with the password you choose for the - ``placement`` user in the Identity service. Comment out any other options in - the ``[placement]`` section. - - - -3. Populate the nova-api database: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage api_db sync" nova - - .. end - - .. 
note:: - - Ignore any deprecation messages in this output. - -4. Register the ``cell0`` database: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova - - .. end - -5. Create the ``cell1`` cell: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova - 109e1d4b-536a-40d0-83c6-5f121b82b650 - - .. end - -6. Populate the nova database: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage db sync" nova - -7. Verify nova cell0 and cell1 are registered correctly: - - .. code-block:: console - - # nova-manage cell_v2 list_cells - +-------+--------------------------------------+ - | Name | UUID | - +-------+--------------------------------------+ - | cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 | - | cell0 | 00000000-0000-0000-0000-000000000000 | - +-------+--------------------------------------+ - - .. end - - -Finalize installation ---------------------- - - - - -* Shutdown ``nova-spicehtml5proxy``: - - .. code-block:: console - - # service nova-spicehtml5proxy stop - - .. end - -* Select novnc startup in ``/etc/default/nova-consoleproxy``: - - .. path /etc/default/nova-consoleproxy - .. code-block:: ini - - NOVA_CONSOLE_PROXY_TYPE=novnc - - .. end - -* Add a systemd service file for nova-novncproxy in - ``/lib/systemd/system/nova-novncproxy.service``: - - .. path /lib/systemd/system/nova-novncproxy.service: - .. code-block:: ini - - [Unit] - Description=OpenStack Compute NoVNC proxy - After=postgresql.service mysql.service keystone.service rabbitmq-server.service ntp.service - - Documentation=man:nova-novncproxy(1) - - [Service] - User=nova - Group=nova - Type=simple - WorkingDirectory=/var/lib/nova - PermissionsStartOnly=true - ExecStartPre=/bin/mkdir -p /var/lock/nova /var/log/nova /var/lib/nova - ExecStartPre=/bin/chown nova:nova /var/lock/nova /var/lib/nova - ExecStartPre=/bin/chown nova:adm /var/log/nova - ExecStart=/etc/init.d/nova-novncproxy systemd-start - Restart=on-failure - LimitNOFILE=65535 - TimeoutStopSec=65 - - [Install] - WantedBy=multi-user.target - - .. end - -* Start the noVNC proxy: - - .. code-block:: console - - # systemctl daemon-reload - # systemctl enable nova-novncproxy - # service start nova-novncproxy - - .. end - -* Restart the other Compute services: - - .. code-block:: console - - # service nova-api restart - # service nova-consoleauth restart - # service nova-scheduler restart - # service nova-conductor restart - - .. end - - - -* Restart the Compute services: - - .. code-block:: console - - # service nova-api restart - # service nova-consoleauth restart - # service nova-scheduler restart - # service nova-conductor restart - # service nova-novncproxy restart - - .. end - diff --git a/doc/install-guide/source/nova-controller-install-obs.rst b/doc/install-guide/source/nova-controller-install-obs.rst deleted file mode 100644 index 9940188907..0000000000 --- a/doc/install-guide/source/nova-controller-install-obs.rst +++ /dev/null @@ -1,565 +0,0 @@ -Install and configure controller node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the -Compute service, code-named nova, on the controller node. - -Prerequisites -------------- - -Before you install and configure the Compute service, you must -create databases, service credentials, and API endpoints. - -#. To create the databases, complete these steps: - - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. 
code-block:: console - - $ mysql -u root -p - - .. end - - - * Create the ``nova_api``, ``nova``, and ``nova_cell0`` databases: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE nova_api; - MariaDB [(none)]> CREATE DATABASE nova; - MariaDB [(none)]> CREATE DATABASE nova_cell0; - - .. end - - * Grant proper access to the databases: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - - .. end - - Replace ``NOVA_DBPASS`` with a suitable password. - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to - admin-only CLI commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. Create the Compute service credentials: - - * Create the ``nova`` user: - - .. code-block:: console - - $ openstack user create --domain default --password-prompt nova - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | 8a7dbf5279404537b1c7b86c033620fe | - | name | nova | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``nova`` user: - - .. code-block:: console - - $ openstack role add --project service --user nova admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``nova`` service entity: - - .. code-block:: console - - $ openstack service create --name nova \ - --description "OpenStack Compute" compute - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Compute | - | enabled | True | - | id | 060d59eac51b4594815603d75a00aba2 | - | name | nova | - | type | compute | - +-------------+----------------------------------+ - - .. end - -#. Create the Compute API service endpoints: - - .. 
code-block:: console - - $ openstack endpoint create --region RegionOne \ - compute public http://controller:8774/v2.1 - - +--------------+-------------------------------------------+ - | Field | Value | - +--------------+-------------------------------------------+ - | enabled | True | - | id | 3c1caa473bfe4390a11e7177894bcc7b | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 060d59eac51b4594815603d75a00aba2 | - | service_name | nova | - | service_type | compute | - | url | http://controller:8774/v2.1 | - +--------------+-------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - compute internal http://controller:8774/v2.1 - - +--------------+-------------------------------------------+ - | Field | Value | - +--------------+-------------------------------------------+ - | enabled | True | - | id | e3c918de680746a586eac1f2d9bc10ab | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 060d59eac51b4594815603d75a00aba2 | - | service_name | nova | - | service_type | compute | - | url | http://controller:8774/v2.1 | - +--------------+-------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - compute admin http://controller:8774/v2.1 - - +--------------+-------------------------------------------+ - | Field | Value | - +--------------+-------------------------------------------+ - | enabled | True | - | id | 38f7af91666a47cfb97b4dc790b94424 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 060d59eac51b4594815603d75a00aba2 | - | service_name | nova | - | service_type | compute | - | url | http://controller:8774/v2.1 | - +--------------+-------------------------------------------+ - - .. end - -#. Create a Placement service user using your chosen ``PLACEMENT_PASS``: - - .. code-block:: console - - $ openstack user create --domain default --password-prompt placement - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | fa742015a6494a949f67629884fc7ec8 | - | name | placement | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - -#. Add the Placement user to the service project with the admin role: - - .. code-block:: console - - $ openstack role add --project service --user placement admin - - .. note:: - - This command provides no output. - -#. Create the Placement API entry in the service catalog: - - .. code-block:: console - - $ openstack service create --name placement --description "Placement API" placement - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | Placement API | - | enabled | True | - | id | 2d1a27022e6e4185b86adac4444c495f | - | name | placement | - | type | placement | - +-------------+----------------------------------+ - -#. Create the Placement API service endpoints: - - .. 
code-block:: console - - $ openstack endpoint create --region RegionOne placement public http://controller:8778 - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 2b1b2637908b4137a9c2e0470487cbc0 | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 2d1a27022e6e4185b86adac4444c495f | - | service_name | placement | - | service_type | placement | - | url | http://controller:8778 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne placement internal http://controller:8778 - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 02bcda9a150a4bd7993ff4879df971ab | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 2d1a27022e6e4185b86adac4444c495f | - | service_name | placement | - | service_type | placement | - | url | http://controller:8778 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne placement admin http://controller:8778 - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 3d71177b9e0f406f98cbff198d74b182 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 2d1a27022e6e4185b86adac4444c495f | - | service_name | placement | - | service_type | placement | - | url | http://controller:8778 | - +--------------+----------------------------------+ - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - -.. note:: - - As of the Newton release, SUSE OpenStack packages are shipped - with the upstream default configuration files. For example, - ``/etc/nova/nova.conf`` has customizations in - ``/etc/nova/nova.conf.d/010-nova.conf``. While the following - instructions modify the default configuration file, adding a new file - in ``/etc/nova/nova.conf.d`` achieves the same result. - - - -#. Install the packages: - - .. code-block:: console - - # zypper install openstack-nova-api openstack-nova-scheduler \ - openstack-nova-conductor openstack-nova-consoleauth \ - openstack-nova-novncproxy openstack-nova-placement-api \ - iptables - - .. end - - - - - -2. Edit the ``/etc/nova/nova.conf`` file and - complete the following actions: - - -* In the ``[DEFAULT]`` section, enable only the compute and metadata - APIs: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - enabled_apis = osapi_compute,metadata - - .. end - - - * In the ``[api_database]`` and ``[database]`` sections, configure - database access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [api_database] - # ... - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api - - [database] - # ... - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova - - .. end - - Replace ``NOVA_DBPASS`` with the password you chose for - the Compute databases. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. 
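end

   As a quick, optional check that is not part of the original procedure, you
   can confirm the ``openstack`` RabbitMQ account from the environment setup
   chapter before continuing. Run these on the node hosting RabbitMQ; the
   ``authenticate_user`` subcommand is only present in reasonably recent
   ``rabbitmqctl`` releases:

   .. code-block:: console

      # rabbitmqctl list_users
      # rabbitmqctl authenticate_user openstack RABBIT_PASS

   ..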
end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in ``RabbitMQ``. - - * In the ``[api]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [api] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the - ``nova`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to - use the management interface IP address of the controller node: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = 10.0.0.11 - - .. end - - -* In the ``[DEFAULT]`` section, enable support for the Networking service: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - use_neutron = True - firewall_driver = nova.virt.firewall.NoopFirewallDriver - - .. end - - .. note:: - - By default, Compute uses an internal firewall driver. Since the - Networking service includes a firewall driver, you must disable the - Compute firewall driver by using the - ``nova.virt.firewall.NoopFirewallDriver`` firewall driver. - - -* In the ``[vnc]`` section, configure the VNC proxy to use the management - interface IP address of the controller node: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [vnc] - enabled = true - # ... - vncserver_listen = $my_ip - vncserver_proxyclient_address = $my_ip - - .. end - - -* In the ``[glance]`` section, configure the location of the - Image service API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [glance] - # ... - api_servers = http://controller:9292 - - .. end - - -* In the ``[oslo_concurrency]`` section, configure the lock path: - -.. path /etc/nova/nova.conf -.. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/run/nova - -.. end - - - - - -* In the ``[placement]`` section, configure the Placement API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [placement] - # ... - os_region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:35357/v3 - username = placement - password = PLACEMENT_PASS - - Replace ``PLACEMENT_PASS`` with the password you choose for the - ``placement`` user in the Identity service. Comment out any other options in - the ``[placement]`` section. - - - -3. Populate the nova-api database: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage api_db sync" nova - - .. end - - .. note:: - - Ignore any deprecation messages in this output. - -4. Register the ``cell0`` database: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova - - .. end - -5. Create the ``cell1`` cell: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova - 109e1d4b-536a-40d0-83c6-5f121b82b650 - - .. end - -6. Populate the nova database: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage db sync" nova - -7. Verify nova cell0 and cell1 are registered correctly: - - .. 
code-block:: console - - # nova-manage cell_v2 list_cells - +-------+--------------------------------------+ - | Name | UUID | - +-------+--------------------------------------+ - | cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 | - | cell0 | 00000000-0000-0000-0000-000000000000 | - +-------+--------------------------------------+ - - .. end - - -Finalize installation ---------------------- - - -* Enable the placement API Apache vhost: - - .. code-block:: console - - # mv /etc/apache2/vhosts.d/nova-placement-api.conf.sample /etc/apache2/vhosts.d/nova-placement-api.conf - # systemctl reload apache2.service - -* Start the Compute services and configure them to start - when the system boots: - - .. code-block:: console - - # systemctl enable openstack-nova-api.service \ - openstack-nova-consoleauth.service openstack-nova-scheduler.service \ - openstack-nova-conductor.service openstack-nova-novncproxy.service - # systemctl start openstack-nova-api.service \ - openstack-nova-consoleauth.service openstack-nova-scheduler.service \ - openstack-nova-conductor.service openstack-nova-novncproxy.service - - .. end - - - - diff --git a/doc/install-guide/source/nova-controller-install-rdo.rst b/doc/install-guide/source/nova-controller-install-rdo.rst deleted file mode 100644 index cd0bd5f49c..0000000000 --- a/doc/install-guide/source/nova-controller-install-rdo.rst +++ /dev/null @@ -1,572 +0,0 @@ -Install and configure controller node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the -Compute service, code-named nova, on the controller node. - -Prerequisites -------------- - -Before you install and configure the Compute service, you must -create databases, service credentials, and API endpoints. - -#. To create the databases, complete these steps: - - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - $ mysql -u root -p - - .. end - - - * Create the ``nova_api``, ``nova``, and ``nova_cell0`` databases: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE nova_api; - MariaDB [(none)]> CREATE DATABASE nova; - MariaDB [(none)]> CREATE DATABASE nova_cell0; - - .. end - - * Grant proper access to the databases: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - - .. end - - Replace ``NOVA_DBPASS`` with a suitable password. - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to - admin-only CLI commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. Create the Compute service credentials: - - * Create the ``nova`` user: - - .. 
code-block:: console - - $ openstack user create --domain default --password-prompt nova - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | 8a7dbf5279404537b1c7b86c033620fe | - | name | nova | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``nova`` user: - - .. code-block:: console - - $ openstack role add --project service --user nova admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``nova`` service entity: - - .. code-block:: console - - $ openstack service create --name nova \ - --description "OpenStack Compute" compute - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Compute | - | enabled | True | - | id | 060d59eac51b4594815603d75a00aba2 | - | name | nova | - | type | compute | - +-------------+----------------------------------+ - - .. end - -#. Create the Compute API service endpoints: - - .. code-block:: console - - $ openstack endpoint create --region RegionOne \ - compute public http://controller:8774/v2.1 - - +--------------+-------------------------------------------+ - | Field | Value | - +--------------+-------------------------------------------+ - | enabled | True | - | id | 3c1caa473bfe4390a11e7177894bcc7b | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 060d59eac51b4594815603d75a00aba2 | - | service_name | nova | - | service_type | compute | - | url | http://controller:8774/v2.1 | - +--------------+-------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - compute internal http://controller:8774/v2.1 - - +--------------+-------------------------------------------+ - | Field | Value | - +--------------+-------------------------------------------+ - | enabled | True | - | id | e3c918de680746a586eac1f2d9bc10ab | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 060d59eac51b4594815603d75a00aba2 | - | service_name | nova | - | service_type | compute | - | url | http://controller:8774/v2.1 | - +--------------+-------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - compute admin http://controller:8774/v2.1 - - +--------------+-------------------------------------------+ - | Field | Value | - +--------------+-------------------------------------------+ - | enabled | True | - | id | 38f7af91666a47cfb97b4dc790b94424 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 060d59eac51b4594815603d75a00aba2 | - | service_name | nova | - | service_type | compute | - | url | http://controller:8774/v2.1 | - +--------------+-------------------------------------------+ - - .. end - -#. Create a Placement service user using your chosen ``PLACEMENT_PASS``: - - .. 
code-block:: console - - $ openstack user create --domain default --password-prompt placement - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | fa742015a6494a949f67629884fc7ec8 | - | name | placement | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - -#. Add the Placement user to the service project with the admin role: - - .. code-block:: console - - $ openstack role add --project service --user placement admin - - .. note:: - - This command provides no output. - -#. Create the Placement API entry in the service catalog: - - .. code-block:: console - - $ openstack service create --name placement --description "Placement API" placement - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | Placement API | - | enabled | True | - | id | 2d1a27022e6e4185b86adac4444c495f | - | name | placement | - | type | placement | - +-------------+----------------------------------+ - -#. Create the Placement API service endpoints: - - .. code-block:: console - - $ openstack endpoint create --region RegionOne placement public http://controller:8778 - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 2b1b2637908b4137a9c2e0470487cbc0 | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 2d1a27022e6e4185b86adac4444c495f | - | service_name | placement | - | service_type | placement | - | url | http://controller:8778 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne placement internal http://controller:8778 - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 02bcda9a150a4bd7993ff4879df971ab | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 2d1a27022e6e4185b86adac4444c495f | - | service_name | placement | - | service_type | placement | - | url | http://controller:8778 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne placement admin http://controller:8778 - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 3d71177b9e0f406f98cbff198d74b182 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 2d1a27022e6e4185b86adac4444c495f | - | service_name | placement | - | service_type | placement | - | url | http://controller:8778 | - +--------------+----------------------------------+ - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - - - -#. Install the packages: - - .. code-block:: console - - # yum install openstack-nova-api openstack-nova-conductor \ - openstack-nova-console openstack-nova-novncproxy \ - openstack-nova-scheduler openstack-nova-placement-api - - .. end - - - - -2. 
Edit the ``/etc/nova/nova.conf`` file and - complete the following actions: - - -* In the ``[DEFAULT]`` section, enable only the compute and metadata - APIs: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - enabled_apis = osapi_compute,metadata - - .. end - - - * In the ``[api_database]`` and ``[database]`` sections, configure - database access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [api_database] - # ... - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api - - [database] - # ... - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova - - .. end - - Replace ``NOVA_DBPASS`` with the password you chose for - the Compute databases. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in ``RabbitMQ``. - - * In the ``[api]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [api] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the - ``nova`` user in the Identity service. - - .. note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to - use the management interface IP address of the controller node: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = 10.0.0.11 - - .. end - - -* In the ``[DEFAULT]`` section, enable support for the Networking service: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - use_neutron = True - firewall_driver = nova.virt.firewall.NoopFirewallDriver - - .. end - - .. note:: - - By default, Compute uses an internal firewall driver. Since the - Networking service includes a firewall driver, you must disable the - Compute firewall driver by using the - ``nova.virt.firewall.NoopFirewallDriver`` firewall driver. - - -* In the ``[vnc]`` section, configure the VNC proxy to use the management - interface IP address of the controller node: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [vnc] - enabled = true - # ... - vncserver_listen = $my_ip - vncserver_proxyclient_address = $my_ip - - .. end - - -* In the ``[glance]`` section, configure the location of the - Image service API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [glance] - # ... - api_servers = http://controller:9292 - - .. end - - - -* In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/lib/nova/tmp - - .. end - - - - -* In the ``[placement]`` section, configure the Placement API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [placement] - # ... 
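   # NOTE: explanatory comments added to this excerpt, not part of the
   # packaged nova.conf. os_region_name should match the region used when
   # the placement endpoints were created (RegionOne above), auth_url points
   # at the Identity v3 API, and the credential options below reference the
   # placement service user created in the prerequisites.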
- os_region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:35357/v3 - username = placement - password = PLACEMENT_PASS - - Replace ``PLACEMENT_PASS`` with the password you choose for the - ``placement`` user in the Identity service. Comment out any other options in - the ``[placement]`` section. - - -* Due to a `packaging bug - `_, you must enable - access to the Placement API by adding the following configuration to - ``/etc/httpd/conf.d/00-nova-placement-api.conf``: - - .. path /etc/httpd/conf.d/00-nova-placement-api.conf - .. code-block:: ini - - - = 2.4> - Require all granted - - - Order allow,deny - Allow from all - - - -* Restart the httpd service: - - .. code-block:: console - - # systemctl restart httpd - - - -3. Populate the nova-api database: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage api_db sync" nova - - .. end - - .. note:: - - Ignore any deprecation messages in this output. - -4. Register the ``cell0`` database: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova - - .. end - -5. Create the ``cell1`` cell: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova - 109e1d4b-536a-40d0-83c6-5f121b82b650 - - .. end - -6. Populate the nova database: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage db sync" nova - -7. Verify nova cell0 and cell1 are registered correctly: - - .. code-block:: console - - # nova-manage cell_v2 list_cells - +-------+--------------------------------------+ - | Name | UUID | - +-------+--------------------------------------+ - | cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 | - | cell0 | 00000000-0000-0000-0000-000000000000 | - +-------+--------------------------------------+ - - .. end - - -Finalize installation ---------------------- - - - -* Start the Compute services and configure them to start - when the system boots: - - .. code-block:: console - - # systemctl enable openstack-nova-api.service \ - openstack-nova-consoleauth.service openstack-nova-scheduler.service \ - openstack-nova-conductor.service openstack-nova-novncproxy.service - # systemctl start openstack-nova-api.service \ - openstack-nova-consoleauth.service openstack-nova-scheduler.service \ - openstack-nova-conductor.service openstack-nova-novncproxy.service - - .. end - - - diff --git a/doc/install-guide/source/nova-controller-install-ubuntu.rst b/doc/install-guide/source/nova-controller-install-ubuntu.rst deleted file mode 100644 index 5bb71a7e54..0000000000 --- a/doc/install-guide/source/nova-controller-install-ubuntu.rst +++ /dev/null @@ -1,539 +0,0 @@ -Install and configure controller node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the -Compute service, code-named nova, on the controller node. - -Prerequisites -------------- - -Before you install and configure the Compute service, you must -create databases, service credentials, and API endpoints. - -#. To create the databases, complete these steps: - - -* Use the database access client to connect to the database - server as the ``root`` user: - - .. code-block:: console - - # mysql - - .. end - - - - * Create the ``nova_api``, ``nova``, and ``nova_cell0`` databases: - - .. code-block:: console - - MariaDB [(none)]> CREATE DATABASE nova_api; - MariaDB [(none)]> CREATE DATABASE nova; - MariaDB [(none)]> CREATE DATABASE nova_cell0; - - .. 
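end

   As an optional sanity check that is not part of the original procedure,
   you can list the databases before granting access; the output should now
   include ``nova``, ``nova_api``, and ``nova_cell0``:

   .. code-block:: console

      MariaDB [(none)]> SHOW DATABASES;

   ..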
end - - * Grant proper access to the databases: - - .. code-block:: console - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' \ - IDENTIFIED BY 'NOVA_DBPASS'; - MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \ - IDENTIFIED BY 'NOVA_DBPASS'; - - .. end - - Replace ``NOVA_DBPASS`` with a suitable password. - - * Exit the database access client. - -#. Source the ``admin`` credentials to gain access to - admin-only CLI commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. Create the Compute service credentials: - - * Create the ``nova`` user: - - .. code-block:: console - - $ openstack user create --domain default --password-prompt nova - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | 8a7dbf5279404537b1c7b86c033620fe | - | name | nova | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - - .. end - - * Add the ``admin`` role to the ``nova`` user: - - .. code-block:: console - - $ openstack role add --project service --user nova admin - - .. end - - .. note:: - - This command provides no output. - - * Create the ``nova`` service entity: - - .. code-block:: console - - $ openstack service create --name nova \ - --description "OpenStack Compute" compute - - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | OpenStack Compute | - | enabled | True | - | id | 060d59eac51b4594815603d75a00aba2 | - | name | nova | - | type | compute | - +-------------+----------------------------------+ - - .. end - -#. Create the Compute API service endpoints: - - .. 
code-block:: console - - $ openstack endpoint create --region RegionOne \ - compute public http://controller:8774/v2.1 - - +--------------+-------------------------------------------+ - | Field | Value | - +--------------+-------------------------------------------+ - | enabled | True | - | id | 3c1caa473bfe4390a11e7177894bcc7b | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 060d59eac51b4594815603d75a00aba2 | - | service_name | nova | - | service_type | compute | - | url | http://controller:8774/v2.1 | - +--------------+-------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - compute internal http://controller:8774/v2.1 - - +--------------+-------------------------------------------+ - | Field | Value | - +--------------+-------------------------------------------+ - | enabled | True | - | id | e3c918de680746a586eac1f2d9bc10ab | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 060d59eac51b4594815603d75a00aba2 | - | service_name | nova | - | service_type | compute | - | url | http://controller:8774/v2.1 | - +--------------+-------------------------------------------+ - - $ openstack endpoint create --region RegionOne \ - compute admin http://controller:8774/v2.1 - - +--------------+-------------------------------------------+ - | Field | Value | - +--------------+-------------------------------------------+ - | enabled | True | - | id | 38f7af91666a47cfb97b4dc790b94424 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 060d59eac51b4594815603d75a00aba2 | - | service_name | nova | - | service_type | compute | - | url | http://controller:8774/v2.1 | - +--------------+-------------------------------------------+ - - .. end - -#. Create a Placement service user using your chosen ``PLACEMENT_PASS``: - - .. code-block:: console - - $ openstack user create --domain default --password-prompt placement - - User Password: - Repeat User Password: - +---------------------+----------------------------------+ - | Field | Value | - +---------------------+----------------------------------+ - | domain_id | default | - | enabled | True | - | id | fa742015a6494a949f67629884fc7ec8 | - | name | placement | - | options | {} | - | password_expires_at | None | - +---------------------+----------------------------------+ - -#. Add the Placement user to the service project with the admin role: - - .. code-block:: console - - $ openstack role add --project service --user placement admin - - .. note:: - - This command provides no output. - -#. Create the Placement API entry in the service catalog: - - .. code-block:: console - - $ openstack service create --name placement --description "Placement API" placement - +-------------+----------------------------------+ - | Field | Value | - +-------------+----------------------------------+ - | description | Placement API | - | enabled | True | - | id | 2d1a27022e6e4185b86adac4444c495f | - | name | placement | - | type | placement | - +-------------+----------------------------------+ - -#. Create the Placement API service endpoints: - - .. 
code-block:: console - - $ openstack endpoint create --region RegionOne placement public http://controller:8778 - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 2b1b2637908b4137a9c2e0470487cbc0 | - | interface | public | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 2d1a27022e6e4185b86adac4444c495f | - | service_name | placement | - | service_type | placement | - | url | http://controller:8778 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne placement internal http://controller:8778 - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 02bcda9a150a4bd7993ff4879df971ab | - | interface | internal | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 2d1a27022e6e4185b86adac4444c495f | - | service_name | placement | - | service_type | placement | - | url | http://controller:8778 | - +--------------+----------------------------------+ - - $ openstack endpoint create --region RegionOne placement admin http://controller:8778 - +--------------+----------------------------------+ - | Field | Value | - +--------------+----------------------------------+ - | enabled | True | - | id | 3d71177b9e0f406f98cbff198d74b182 | - | interface | admin | - | region | RegionOne | - | region_id | RegionOne | - | service_id | 2d1a27022e6e4185b86adac4444c495f | - | service_name | placement | - | service_type | placement | - | url | http://controller:8778 | - +--------------+----------------------------------+ - -Install and configure components --------------------------------- - -.. include:: shared/note_configuration_vary_by_distribution.rst - - - - - -#. Install the packages: - - .. code-block:: console - - # apt install nova-api nova-conductor nova-consoleauth \ - nova-novncproxy nova-scheduler nova-placement-api - - .. end - - - -2. Edit the ``/etc/nova/nova.conf`` file and - complete the following actions: - - - * In the ``[api_database]`` and ``[database]`` sections, configure - database access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [api_database] - # ... - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api - - [database] - # ... - connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova - - .. end - - Replace ``NOVA_DBPASS`` with the password you chose for - the Compute databases. - - * In the ``[DEFAULT]`` section, configure ``RabbitMQ`` - message queue access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - transport_url = rabbit://openstack:RABBIT_PASS@controller - - .. end - - Replace ``RABBIT_PASS`` with the password you chose for the - ``openstack`` account in ``RabbitMQ``. - - * In the ``[api]`` and ``[keystone_authtoken]`` sections, - configure Identity service access: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [api] - # ... - auth_strategy = keystone - - [keystone_authtoken] - # ... - auth_uri = http://controller:5000 - auth_url = http://controller:35357 - memcached_servers = controller:11211 - auth_type = password - project_domain_name = default - user_domain_name = default - project_name = service - username = nova - password = NOVA_PASS - - .. end - - Replace ``NOVA_PASS`` with the password you chose for the - ``nova`` user in the Identity service. - - .. 
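note::

      To confirm that these service credentials work, an optional check that
      is not part of the original guide is to request a token as the ``nova``
      user. This assumes the Identity service endpoints configured in the
      earlier chapters and the ``NOVA_PASS`` value chosen above:

      .. code-block:: console

         $ openstack --os-auth-url http://controller:35357/v3 \
           --os-identity-api-version 3 \
           --os-project-domain-name Default --os-user-domain-name Default \
           --os-project-name service --os-username nova \
           --os-password NOVA_PASS token issue

..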
note:: - - Comment out or remove any other options in the - ``[keystone_authtoken]`` section. - - * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to - use the management interface IP address of the controller node: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - my_ip = 10.0.0.11 - - .. end - - -* In the ``[DEFAULT]`` section, enable support for the Networking service: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [DEFAULT] - # ... - use_neutron = True - firewall_driver = nova.virt.firewall.NoopFirewallDriver - - .. end - - .. note:: - - By default, Compute uses an internal firewall driver. Since the - Networking service includes a firewall driver, you must disable the - Compute firewall driver by using the - ``nova.virt.firewall.NoopFirewallDriver`` firewall driver. - - -* In the ``[vnc]`` section, configure the VNC proxy to use the management - interface IP address of the controller node: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [vnc] - enabled = true - # ... - vncserver_listen = $my_ip - vncserver_proxyclient_address = $my_ip - - .. end - - -* In the ``[glance]`` section, configure the location of the - Image service API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [glance] - # ... - api_servers = http://controller:9292 - - .. end - - - - -* In the ``[oslo_concurrency]`` section, configure the lock path: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [oslo_concurrency] - # ... - lock_path = /var/lib/nova/tmp - - .. end - - - -.. todo: - - https://bugs.launchpad.net/ubuntu/+source/nova/+bug/1506667 - -* Due to a packaging bug, remove the ``log_dir`` option from the - ``[DEFAULT]`` section. - - -* In the ``[placement]`` section, configure the Placement API: - - .. path /etc/nova/nova.conf - .. code-block:: ini - - [placement] - # ... - os_region_name = RegionOne - project_domain_name = Default - project_name = service - auth_type = password - user_domain_name = Default - auth_url = http://controller:35357/v3 - username = placement - password = PLACEMENT_PASS - - Replace ``PLACEMENT_PASS`` with the password you choose for the - ``placement`` user in the Identity service. Comment out any other options in - the ``[placement]`` section. - - - -3. Populate the nova-api database: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage api_db sync" nova - - .. end - - .. note:: - - Ignore any deprecation messages in this output. - -4. Register the ``cell0`` database: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova - - .. end - -5. Create the ``cell1`` cell: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova - 109e1d4b-536a-40d0-83c6-5f121b82b650 - - .. end - -6. Populate the nova database: - - .. code-block:: console - - # su -s /bin/sh -c "nova-manage db sync" nova - -7. Verify nova cell0 and cell1 are registered correctly: - - .. code-block:: console - - # nova-manage cell_v2 list_cells - +-------+--------------------------------------+ - | Name | UUID | - +-------+--------------------------------------+ - | cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 | - | cell0 | 00000000-0000-0000-0000-000000000000 | - +-------+--------------------------------------+ - - .. end - - -Finalize installation ---------------------- - - - - - -* Restart the Compute services: - - .. 
code-block:: console - - # service nova-api restart - # service nova-consoleauth restart - # service nova-scheduler restart - # service nova-conductor restart - # service nova-novncproxy restart - - .. end - diff --git a/doc/install-guide/source/nova-controller-install.rst b/doc/install-guide/source/nova-controller-install.rst deleted file mode 100644 index 2321b0b256..0000000000 --- a/doc/install-guide/source/nova-controller-install.rst +++ /dev/null @@ -1,10 +0,0 @@ -Install and configure controller node -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -This section describes how to install and configure the -Compute service, code-named nova, on the controller node. - -.. toctree:: - :glob: - - nova-controller-install-* diff --git a/doc/install-guide/source/nova-verify.rst b/doc/install-guide/source/nova-verify.rst deleted file mode 100644 index 49f40913f1..0000000000 --- a/doc/install-guide/source/nova-verify.rst +++ /dev/null @@ -1,124 +0,0 @@ -Verify operation -~~~~~~~~~~~~~~~~ - -Verify operation of the Compute service. - -.. note:: - - Perform these commands on the controller node. - -#. Source the ``admin`` credentials to gain access to - admin-only CLI commands: - - .. code-block:: console - - $ . admin-openrc - - .. end - -#. List service components to verify successful launch and - registration of each process: - - .. code-block:: console - - $ openstack compute service list - - +----+--------------------+------------+----------+---------+-------+----------------------------+ - | Id | Binary | Host | Zone | Status | State | Updated At | - +----+--------------------+------------+----------+---------+-------+----------------------------+ - | 1 | nova-consoleauth | controller | internal | enabled | up | 2016-02-09T23:11:15.000000 | - | 2 | nova-scheduler | controller | internal | enabled | up | 2016-02-09T23:11:15.000000 | - | 3 | nova-conductor | controller | internal | enabled | up | 2016-02-09T23:11:16.000000 | - | 4 | nova-compute | compute1 | nova | enabled | up | 2016-02-09T23:11:20.000000 | - +----+--------------------+------------+----------+---------+-------+----------------------------+ - - .. end - - .. note:: - - This output should indicate three service components enabled on - the controller node and one service component enabled on the - compute node. - -#. List API endpoints in the Identity service to verify connectivity - with the Identity service: - - .. note:: - - Below endpoints list may differ depending on the installation of OpenStack components. - - .. 
code-block:: console - - $ openstack catalog list - - +-----------+-----------+-----------------------------------------+ - | Name | Type | Endpoints | - +-----------+-----------+-----------------------------------------+ - | keystone | identity | RegionOne | - | | | public: http://controller:5000/v3/ | - | | | RegionOne | - | | | internal: http://controller:5000/v3/ | - | | | RegionOne | - | | | admin: http://controller:35357/v3/ | - | | | | - | glance | image | RegionOne | - | | | admin: http://controller:9292 | - | | | RegionOne | - | | | public: http://controller:9292 | - | | | RegionOne | - | | | internal: http://controller:9292 | - | | | | - | nova | compute | RegionOne | - | | | admin: http://controller:8774/v2.1 | - | | | RegionOne | - | | | internal: http://controller:8774/v2.1 | - | | | RegionOne | - | | | public: http://controller:8774/v2.1 | - | | | | - | placement | placement | RegionOne | - | | | public: http://controller:8778 | - | | | RegionOne | - | | | admin: http://controller:8778 | - | | | RegionOne | - | | | internal: http://controller:8778 | - | | | | - +-----------+-----------+-----------------------------------------+ - - .. note:: - - Ignore any warnings in this output. - -#. List images in the Image service to verify connectivity with the Image - service: - - .. code-block:: console - - $ openstack image list - - +--------------------------------------+-------------+-------------+ - | ID | Name | Status | - +--------------------------------------+-------------+-------------+ - | 9a76d9f9-9620-4f2e-8c69-6c5691fae163 | cirros | active | - +--------------------------------------+-------------+-------------+ - -#. Check the cells and placement API are working successfully: - - .. code-block:: console - - # nova-status upgrade check - - +---------------------------+ - | Upgrade Check Results | - +---------------------------+ - | Check: Cells v2 | - | Result: Success | - | Details: None | - +---------------------------+ - | Check: Placement API | - | Result: Success | - | Details: None | - +---------------------------+ - | Check: Resource Providers | - | Result: Success | - | Details: None | - +---------------------------+ diff --git a/doc/install-guide/source/nova.rst b/doc/install-guide/source/nova.rst deleted file mode 100644 index 60f3b48018..0000000000 --- a/doc/install-guide/source/nova.rst +++ /dev/null @@ -1,10 +0,0 @@ -=============== -Compute service -=============== - -.. toctree:: - - common/get-started-compute.rst - nova-controller-install.rst - nova-compute-install.rst - nova-verify.rst
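For readers working through the files listed in this chapter, the individual
verification commands from ``nova-verify.rst`` can be re-run together as a
short session. The commands are the ones shown earlier in that file; grouping
them this way is only an illustration, not part of the original guide:

.. code-block:: console

   $ . admin-openrc
   $ openstack compute service list
   $ openstack catalog list
   $ openstack image list
   # nova-status upgrade check

.. end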