delete project-specific steps from installation guide
Change-Id: I79f1f4448adeb37a493bfe9796df8694e47eb804
Depends-On: Ia750cb049c0f53a234ea70ce1f2bbbb7a2aa9454
Signed-off-by: Doug Hellmann <doug@doughellmann.com>
@@ -1,36 +0,0 @@
==============================
Block Storage service overview
==============================

The OpenStack Block Storage service (cinder) adds persistent storage
to a virtual machine. Block Storage provides an infrastructure for managing
volumes, and interacts with OpenStack Compute to provide volumes for
instances. The service also enables management of volume snapshots and
volume types.

The Block Storage service consists of the following components:

cinder-api
  Accepts API requests, and routes them to ``cinder-volume`` for
  action.

cinder-volume
  Interacts directly with the Block Storage service, and with processes
  such as ``cinder-scheduler`` through a message queue. The
  ``cinder-volume`` service responds to read and write requests sent to
  the Block Storage service to maintain state. It can interact with a
  variety of storage providers through a driver architecture.

cinder-scheduler daemon
  Selects the optimal storage provider node on which to create the
  volume. A component similar to ``nova-scheduler``.

cinder-backup daemon
  The ``cinder-backup`` service provides backing up volumes of any type to
  a backup storage provider. Like the ``cinder-volume`` service, it can
  interact with a variety of storage providers through a driver
  architecture.

Messaging queue
  Routes information between the Block Storage processes.
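
Once deployed, a quick way to confirm that each of these components is
running is to query the service list (a minimal check, assuming the
``admin`` credentials have already been sourced):

.. code-block:: console

   $ openstack volume service list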
@@ -1,102 +0,0 @@
========================
Compute service overview
========================

Use OpenStack Compute to host and manage cloud computing systems.
OpenStack Compute is a major part of an :term:`Infrastructure-as-a-Service
(IaaS)` system. The main modules are implemented in Python.

OpenStack Compute interacts with OpenStack Identity for authentication;
OpenStack Image service for disk and server images; and OpenStack
Dashboard for the user and administrative interface. Image access is
limited by projects and by users; quotas are limited per project (the
number of instances, for example). OpenStack Compute can scale
horizontally on standard hardware, and download images to launch
instances.

OpenStack Compute consists of the following areas and their components:

``nova-api`` service
  Accepts and responds to end user compute API calls. The service
  supports the OpenStack Compute API, the Amazon EC2 API, and a
  special Admin API for privileged users to perform administrative
  actions. It enforces some policies and initiates most orchestration
  activities, such as running an instance.

``nova-api-metadata`` service
  Accepts metadata requests from instances. The ``nova-api-metadata``
  service is generally used when you run in multi-host mode with
  ``nova-network`` installations. For details, see `Metadata
  service <https://docs.openstack.org/admin-guide/compute-networking-nova.html#metadata-service>`__
  in the OpenStack Administrator Guide.

``nova-compute`` service
  A worker daemon that creates and terminates virtual machine
  instances through hypervisor APIs. For example:

  - XenAPI for XenServer/XCP
  - libvirt for KVM or QEMU
  - VMwareAPI for VMware

  Processing is fairly complex. Basically, the daemon accepts actions
  from the queue and performs a series of system commands such as
  launching a KVM instance and updating its state in the database.

``nova-placement-api`` service
  Tracks the inventory and usage of each provider. For details, see
  `Placement API <https://docs.openstack.org/developer/nova/placement.html>`__.

``nova-scheduler`` service
  Takes a virtual machine instance request from the queue and
  determines on which compute server host it runs.

``nova-conductor`` module
  Mediates interactions between the ``nova-compute`` service and the
  database. It eliminates direct accesses to the cloud database made
  by the ``nova-compute`` service. The ``nova-conductor`` module scales
  horizontally. However, do not deploy it on nodes where the
  ``nova-compute`` service runs. For more information, see the `Configuration
  Reference Guide <https://docs.openstack.org/ocata/config-reference/compute/config-options.html#nova-conductor>`__.

``nova-consoleauth`` daemon
  Authorizes tokens for users that console proxies provide. See
  ``nova-novncproxy`` and ``nova-xvpvncproxy``. This service must be running
  for console proxies to work. You can run proxies of either type
  against a single ``nova-consoleauth`` service in a cluster
  configuration. For information, see `About
  nova-consoleauth <https://docs.openstack.org/admin-guide/compute-remote-console-access.html#about-nova-consoleauth>`__.

``nova-novncproxy`` daemon
  Provides a proxy for accessing running instances through a VNC
  connection. Supports browser-based novnc clients.

``nova-spicehtml5proxy`` daemon
  Provides a proxy for accessing running instances through a SPICE
  connection. Supports a browser-based HTML5 client.

``nova-xvpvncproxy`` daemon
  Provides a proxy for accessing running instances through a VNC
  connection. Supports an OpenStack-specific Java client.

The queue
  A central hub for passing messages between daemons. Usually
  implemented with `RabbitMQ <https://www.rabbitmq.com/>`__, but it can also
  be implemented with another AMQP message queue, such as `ZeroMQ <http://www.zeromq.org/>`__.

SQL database
  Stores most build-time and run-time states for a cloud
  infrastructure, including:

  - Available instance types
  - Instances in use
  - Available networks
  - Projects

  Theoretically, OpenStack Compute can support any database that
  SQLAlchemy supports. Common databases are SQLite3 for test and
  development work, MySQL, MariaDB, and PostgreSQL.
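
A convenient way to verify that the Compute services registered on each host
are alive and enabled is the service listing (a minimal check, assuming the
``admin`` credentials have been sourced):

.. code-block:: console

   $ openstack compute service list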
@@ -1,21 +0,0 @@
==================
Dashboard overview
==================

The OpenStack Dashboard is a modular `Django web
application <https://www.djangoproject.com/>`__ that provides a
graphical interface to OpenStack services.

.. image:: figures/horizon-screenshot.png
   :width: 100%

The dashboard is usually deployed through
`mod_wsgi <http://code.google.com/p/modwsgi/>`__ in Apache. You can
modify the dashboard code to make it suitable for different sites.
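
As an illustration of such a deployment, an Apache virtual host for the
dashboard typically contains directives along the lines of the following
sketch. The paths and process settings here are assumptions that vary by
distribution; consult your packages for the actual WSGI script location.

.. code-block:: apache

   # Illustrative only: paths and user names differ between distributions.
   WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi
   WSGIDaemonProcess horizon user=horizon group=horizon processes=3 threads=10
   WSGIProcessGroup horizon
   Alias /static /usr/share/openstack-dashboard/static/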

From a network architecture point of view, this service must be
accessible to customers and to the public API for each OpenStack service.
To use the administrator functionality for other services, it must also
connect to Admin API endpoints, which should not be accessible by
customers.
@@ -1,40 +0,0 @@
================================
Data Processing service overview
================================

The Data Processing service for OpenStack (sahara) aims to provide users
with a simple means to provision data processing (Hadoop, Spark)
clusters by specifying several parameters, such as the Hadoop version,
cluster topology, and node hardware details. After a user fills in
all the parameters, the Data Processing service deploys the cluster in a
few minutes. Sahara also provides a means to scale already provisioned
clusters by adding or removing worker nodes on demand.

The solution addresses the following use cases:

* Fast provisioning of Hadoop clusters on OpenStack for development and
  QA.
* Utilization of unused compute power from a general purpose OpenStack
  IaaS cloud.
* Analytics-as-a-Service for ad-hoc or bursty analytic workloads.

Key features are:

* Designed as an OpenStack component.
* Managed through a REST API with a UI available as part of the OpenStack
  Dashboard.
* Support for different Hadoop distributions:

  * Pluggable system of Hadoop installation engines.
  * Integration with vendor-specific management tools, such as Apache
    Ambari or Cloudera Management Console.

* Predefined templates of Hadoop configurations with the ability to
  modify parameters.
* User-friendly UI for ad-hoc analytics queries based on Hive or Pig.
@@ -1,66 +0,0 @@
=========================
Database service overview
=========================

The Database service provides scalable and reliable cloud provisioning
functionality for both relational and non-relational database engines.
Users can quickly and easily use database features without the burden of
handling complex administrative tasks. Cloud users and database
administrators can provision and manage multiple database instances as
needed.

The Database service provides resource isolation at high performance
levels and automates complex administrative tasks such as deployment,
configuration, patching, backups, restores, and monitoring.

**Process flow example**

This example is a high-level process flow for using Database services:

#. The OpenStack Administrator configures the basic infrastructure using
   the following steps:

   #. Install the Database service.
   #. Create an image for each type of database. For example, one for MySQL
      and one for MongoDB.
   #. Use the :command:`trove-manage` command to import images and offer them
      to projects.

#. The OpenStack end user deploys the Database service using the following
   steps (a combined sketch of these commands follows this list):

   #. Create a Database service instance using the :command:`trove create`
      command.
   #. Use the :command:`trove list` command to get the ID of the instance,
      followed by the :command:`trove show` command to get its IP address.
   #. Access the Database service instance using typical database access
      commands. For example, with MySQL:

      .. code-block:: console

         $ mysql -u myuser -p -h TROVE_IP_ADDRESS mydb
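
Put together, the end-user flow might look like the following sketch. The
instance name, flavor ID, volume size, and database and user names here are
illustrative assumptions, not required values:

.. code-block:: console

   $ trove create mysql_instance_1 10 --size 2 --databases mydb \
     --users myuser:mypassword --datastore mysql
   $ trove list
   $ trove show INSTANCE_ID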

**Components**

The Database service includes the following components:

``python-troveclient`` command-line client
  A CLI that communicates with the ``trove-api`` component.

``trove-api`` component
  Provides an OpenStack-native RESTful API that supports JSON to
  provision and manage Trove instances.

``trove-conductor`` service
  Runs on the host, and receives messages from guest instances that
  want to update information on the host.

``trove-taskmanager`` service
  Instruments the complex system flows that support provisioning
  instances, managing the lifecycle of instances, and performing
  operations on instances.

``trove-guestagent`` service
  Runs within the guest instance. Manages and performs operations on
  the database itself.
@@ -1,54 +0,0 @@
=========================
Identity service overview
=========================

The OpenStack :term:`Identity service <Identity service (keystone)>` provides
a single point of integration for managing authentication, authorization, and
a catalog of services.

The Identity service is typically the first service a user interacts with. Once
authenticated, an end user can use their identity to access other OpenStack
services. Likewise, other OpenStack services leverage the Identity service to
ensure users are who they say they are and to discover where other services are
within the deployment. The Identity service can also integrate with some
external user management systems (such as LDAP).

Users and services can locate other services by using the service catalog,
which is managed by the Identity service. As the name implies, a service
catalog is a collection of available services in an OpenStack deployment. Each
service can have one or many endpoints, and each endpoint can be one of three
types: admin, internal, or public. In a production environment, different
endpoint types might reside on separate networks exposed to different types of
users for security reasons. For instance, the public API network might be
visible from the Internet so customers can manage their clouds. The admin API
network might be restricted to operators within the organization that manages
cloud infrastructure. The internal API network might be restricted to the hosts
that contain OpenStack services. Also, OpenStack supports multiple regions for
scalability. For simplicity, this guide uses the management network for all
endpoint types and the default ``RegionOne`` region.

Together, regions, services, and endpoints created within the Identity service
comprise the service catalog for a deployment. Each OpenStack service in your
deployment needs a service entry with corresponding endpoints stored in the
Identity service. This can all be done after the Identity service has been
installed and configured.
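
Once services and endpoints have been registered, the assembled catalog can
be inspected from any client host (a minimal check, assuming valid
credentials have been sourced):

.. code-block:: console

   $ openstack catalog list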

The Identity service contains these components:

Server
  A centralized server provides authentication and authorization
  services using a RESTful interface.

Drivers
  Drivers or a service back end are integrated into the centralized
  server. They are used for accessing identity information in
  repositories external to OpenStack, and may already exist in
  the infrastructure where OpenStack is deployed (for example, SQL
  databases or LDAP servers).

Modules
  Middleware modules run in the address space of the OpenStack
  component that is using the Identity service. These modules
  intercept service requests, extract user credentials, and send them
  to the centralized server for authorization. The integration between
  the middleware modules and OpenStack components uses the Python Web
  Server Gateway Interface.
@@ -1,71 +0,0 @@
======================
Image service overview
======================

The Image service (glance) enables users to discover,
register, and retrieve virtual machine images. It offers a
:term:`REST <RESTful>` API that enables you to query virtual
machine image metadata and retrieve an actual image.
You can store virtual machine images made available through
the Image service in a variety of locations, from simple file
systems to object-storage systems like OpenStack Object Storage.

.. important::

   For simplicity, this guide describes configuring the Image service to
   use the ``file`` back end, which uploads and stores images in a
   directory on the controller node hosting the Image service. By
   default, this directory is ``/var/lib/glance/images/``.

   Before you proceed, ensure that the controller node has at least
   several gigabytes of space available in this directory. Keep in
   mind that since the ``file`` back end is often local to a controller
   node, it is not typically suitable for a multi-node glance deployment.

   For information on requirements for other back ends, see
   `Configuration Reference
   <https://docs.openstack.org/ocata/config-reference/image.html>`__.

The OpenStack Image service is central to Infrastructure-as-a-Service
(IaaS) as shown in :ref:`get_started_conceptual_architecture`. It accepts API
requests for disk or server images, and metadata definitions from end users or
OpenStack Compute components. It also supports the storage of disk or server
images on various repository types, including OpenStack Object Storage.

A number of periodic processes run on the OpenStack Image service to
support caching. Replication services ensure consistency and
availability through the cluster. Other periodic processes include
auditors, updaters, and reapers.

The OpenStack Image service includes the following components:

glance-api
  Accepts Image API calls for image discovery, retrieval, and storage.

glance-registry
  Stores, processes, and retrieves metadata about images. Metadata
  includes items such as size and type.

  .. warning::

     The registry is a private internal service meant for use by the
     OpenStack Image service. Do not expose this service to users.

Database
  Stores image metadata; you can choose your database depending on
  your preference. Most deployments use MySQL or SQLite.

Storage repository for image files
  Various repository types are supported, including normal file
  systems (or any filesystem mounted on the glance-api controller
  node), Object Storage, RADOS block devices, VMware datastore,
  and HTTP. Note that some repositories only support read-only
  usage.

Metadata definition service
  A common API for vendors, admins, services, and users to meaningfully
  define their own custom metadata. This metadata can be used on
  different types of resources like images, artifacts, volumes,
  flavors, and aggregates. A definition includes the new property's key,
  description, constraints, and the resource types with which it can be
  associated.
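
After the Image service is deployed and an image has been uploaded, the
registered images can be listed to confirm the service is reachable (a
minimal check, assuming valid credentials have been sourced):

.. code-block:: console

   $ openstack image list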
@@ -24,7 +24,7 @@ several message broker and database solutions, such as RabbitMQ,
 MySQL, MariaDB, and SQLite.
 
 Users can access OpenStack via the web-based user interface implemented
-by :doc:`Dashboard <get-started-dashboard>`, via `command-line
+by the Horizon Dashboard, via `command-line
 clients <https://docs.openstack.org/cli-reference/>`__ and by
 issuing API requests through tools like browser plug-ins or :command:`curl`.
 For applications, `several SDKs <https://developer.openstack.org/#sdk>`__
@@ -1,33 +0,0 @@
===========================
Networking service overview
===========================

OpenStack Networking (neutron) allows you to create and attach interface
devices managed by other OpenStack services to networks. Plug-ins can be
implemented to accommodate different networking equipment and software,
providing flexibility to OpenStack architecture and deployment.

It includes the following components:

neutron-server
  Accepts and routes API requests to the appropriate OpenStack
  Networking plug-in for action.

OpenStack Networking plug-ins and agents
  Plug and unplug ports, create networks or subnets, and provide
  IP addressing. These plug-ins and agents differ depending on the
  vendor and technologies used in the particular cloud. OpenStack
  Networking ships with plug-ins and agents for Cisco virtual and
  physical switches, NEC OpenFlow products, Open vSwitch, Linux
  bridging, and the VMware NSX product.

  The common agents are L3 (layer 3), DHCP (dynamic host IP
  addressing), and a plug-in agent.

Messaging queue
  Used by most OpenStack Networking installations to route information
  between the neutron-server and various agents. Also acts as a database
  to store networking state for particular plug-ins.

OpenStack Networking mainly interacts with OpenStack Compute to provide
networks and connectivity for its instances.
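
Which agents are running, and on which hosts, can be confirmed once the
service is up (a minimal check, assuming the ``admin`` credentials have been
sourced):

.. code-block:: console

   $ openstack network agent list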
@@ -1,53 +0,0 @@
===============================
Object Storage service overview
===============================

The OpenStack Object Storage is a multi-project object storage system. It
is highly scalable and can manage large amounts of unstructured data at
low cost through a RESTful HTTP API.

It includes the following components:

Proxy servers (swift-proxy-server)
  Accepts OpenStack Object Storage API and raw HTTP requests to upload
  files, modify metadata, and create containers. It also serves file
  or container listings to web browsers. To improve performance, the
  proxy server can use an optional cache that is usually deployed with
  memcache.

Account servers (swift-account-server)
  Manages accounts defined with Object Storage.

Container servers (swift-container-server)
  Manages the mapping of containers, or folders, within Object Storage.

Object servers (swift-object-server)
  Manages actual objects, such as files, on the storage nodes.

Various periodic processes
  Performs housekeeping tasks on the large data store. The replication
  services ensure consistency and availability through the cluster.
  Other periodic processes include auditors, updaters, and reapers.

WSGI middleware
  Handles authentication and is usually OpenStack Identity.

swift client
  Enables users to submit commands to the REST API through a
  command-line client authorized as either an admin user, reseller
  user, or swift user.

swift-init
  Script that initializes the building of the ring file, takes daemon
  names as parameters, and offers commands. Documented in
  `Managing Services
  <https://docs.openstack.org/developer/swift/admin_guide.html#managing-services>`_.

swift-recon
  A CLI tool used to retrieve various metrics and telemetry information
  about a cluster that has been collected by the swift-recon middleware.

swift-ring-builder
  Storage ring build and rebalance utility. Documented in
  `Managing the Rings
  <https://docs.openstack.org/developer/swift/admin_guide.html#managing-the-rings>`_.
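
A quick end-to-end check of a running cluster is to ask the proxy for the
account statistics through the swift client (a minimal sketch, assuming
valid credentials have been sourced):

.. code-block:: console

   $ swift stat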
@@ -1,23 +0,0 @@
==================
OpenStack services
==================

This section describes OpenStack services in detail.

.. toctree::
   :maxdepth: 2

   get-started-compute.rst
   get-started-storage-concepts.rst
   get-started-object-storage.rst
   get-started-block-storage.rst
   get-started-shared-file-systems.rst
   get-started-networking.rst
   get-started-dashboard.rst
   get-started-identity.rst
   get-started-image-service.rst
   get-started-telemetry.rst
   get-started-orchestration.rst
   get-started-database-service.rst
   get-started-data-processing.rst
@@ -1,35 +0,0 @@
==============================
Orchestration service overview
==============================

The Orchestration service provides template-based orchestration for
describing a cloud application by running OpenStack API calls to
generate running cloud applications. The software integrates other core
components of OpenStack into a one-file template system. The templates
allow you to create most OpenStack resource types, such as instances,
floating IPs, volumes, security groups, and users. It also provides
advanced functionality, such as instance high availability, instance
auto-scaling, and nested stacks. This enables OpenStack core projects to
receive a larger user base.
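
As an illustration of the one-file template system, a minimal Heat
Orchestration Template (HOT) that boots a single instance might look like the
following sketch; the image and flavor names are assumptions that must exist
in your cloud:

.. code-block:: yaml

   heat_template_version: 2016-10-14

   description: Minimal illustrative template that boots one instance.

   resources:
     server:
       type: OS::Nova::Server
       properties:
         image: cirros        # assumed image name
         flavor: m1.tiny      # assumed flavor name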

The service enables deployers to integrate with the Orchestration service
directly or through custom plug-ins.

The Orchestration service consists of the following components:

``heat`` command-line client
  A CLI that communicates with the ``heat-api`` to run AWS
  CloudFormation APIs. End developers can directly use the Orchestration
  REST API.

``heat-api`` component
  An OpenStack-native REST API that processes API requests by sending
  them to the ``heat-engine`` over :term:`Remote Procedure Call (RPC)`.

``heat-api-cfn`` component
  An AWS Query API that is compatible with AWS CloudFormation. It
  processes API requests by sending them to the ``heat-engine`` over RPC.

``heat-engine``
  Orchestrates the launching of templates and provides events back to
  the API consumer.
@@ -1,38 +0,0 @@
====================================
Shared File Systems service overview
====================================

The OpenStack Shared File Systems service (manila) provides file storage to a
virtual machine. The Shared File Systems service provides an infrastructure
for managing and provisioning file shares. The service also enables
management of share types, as well as share snapshots if a driver supports
them.

The Shared File Systems service consists of the following components:

manila-api
  A WSGI app that authenticates and routes requests throughout the Shared File
  Systems service. It supports the OpenStack APIs.

manila-data
  A standalone service whose purpose is to receive requests, process data
  operations such as copying, share migration, or backup, and send back a
  response after an operation has been completed.

manila-scheduler
  Schedules and routes requests to the appropriate share service. The
  scheduler uses configurable filters and weighers to route requests. The
  Filter Scheduler is the default and enables filters on things like Capacity,
  Availability Zone, Share Types, and Capabilities as well as custom filters.

manila-share
  Manages back-end devices that provide shared file systems. A manila-share
  process can run in one of two modes, with or without handling of share
  servers. Share servers export file shares via share networks. When share
  servers are not used, the networking requirements are handled outside of
  Manila.

Messaging queue
  Routes information between the Shared File Systems processes.
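
Existing shares can be listed with the manila client once the service is
running (a minimal check, assuming valid credentials have been sourced):

.. code-block:: console

   $ manila list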

For more information, see `OpenStack Configuration Reference <https://docs.openstack.org/ocata/config-reference/shared-file-systems/overview.html>`__.
@@ -1,61 +0,0 @@
================
Storage concepts
================

The OpenStack stack uses the following storage types:

.. list-table:: Storage types
   :header-rows: 1
   :widths: 30 30 30 30

   * - On-instance / ephemeral
     - Block storage (cinder)
     - Object Storage (swift)
     - File Storage (manila)
   * - Runs operating systems and provides scratch space
     - Used for adding additional persistent storage to a virtual machine (VM)
     - Used for storing virtual machine images and data
     - Used for providing file shares to a virtual machine
   * - Persists until VM is terminated
     - Persists until deleted
     - Persists until deleted
     - Persists until deleted
   * - Access associated with a VM
     - Access associated with a VM
     - Available from anywhere
     - Access can be provided to a VM
   * - Implemented as a filesystem underlying OpenStack Compute
     - Mounted via OpenStack Block Storage controlled protocol (for example, iSCSI)
     - REST API
     - Provides Shared File Systems service via nfs, cifs, glusterfs, or hdfs protocol
   * - Encryption is available
     - Encryption is available
     - Work in progress - expected for the Mitaka release
     - Encryption is not available yet
   * - Administrator configures size setting, based on flavors
     - Sizing based on need
     - Easily scalable for future growth
     - Sizing based on need
   * - Example: 10 GB first disk, 30 GB/core second disk
     - Example: 1 TB "extra hard drive"
     - Example: 10s of TBs of data set storage
     - Example: 1 TB of file share

.. note::

   - *You cannot use OpenStack Object Storage like a traditional hard
     drive.* Object Storage relaxes some of the constraints of a
     POSIX-style file system to get other gains. You can access the
     objects through an API which uses HTTP. Subsequently you don't have
     to provide atomic operations (that is, relying on eventual
     consistency), you can scale a storage system easily, and you can
     avoid a central point of failure.

   - *The OpenStack Image service is used to manage the virtual machine
     images in an OpenStack cluster, not store them.* It provides an
     abstraction to different methods for storage - a bridge to the
     storage, not the storage itself.

   - *The OpenStack Object Storage can function on its own.* The Object
     Storage (swift) product can be used independently of the Compute
     (nova) product.
@@ -1,70 +0,0 @@
==========================
Telemetry service overview
==========================

Telemetry Data Collection service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Telemetry Data Collection services provide the following functions:

* Efficiently polls metering data related to OpenStack services.
* Collects event and metering data by monitoring notifications sent
  from services.
* Publishes collected data to various targets, including data stores and
  message queues.

The Telemetry service consists of the following components:

A compute agent (``ceilometer-agent-compute``)
  Runs on each compute node and polls for resource utilization
  statistics. There may be other types of agents in the future, but
  for now our focus is creating the compute agent.

A central agent (``ceilometer-agent-central``)
  Runs on a central management server to poll for resource utilization
  statistics for resources not tied to instances or compute nodes.
  Multiple agents can be started to scale the service horizontally.

A notification agent (``ceilometer-agent-notification``)
  Runs on one or more central management servers and consumes messages from
  the message queues to build event and metering data.

A collector (``ceilometer-collector``)
  Runs on one or more central management servers and dispatches collected
  telemetry data to a data store or external consumer without
  modification.

An API server (``ceilometer-api``)
  Runs on one or more central management servers to provide data
  access from the data store.

Telemetry Alarming service
~~~~~~~~~~~~~~~~~~~~~~~~~~

The Telemetry Alarming services trigger alarms when the collected metering
or event data break the defined rules.

The Telemetry Alarming service consists of the following components:

An API server (``aodh-api``)
  Runs on one or more central management servers to provide access
  to the alarm information stored in the data store.

An alarm evaluator (``aodh-evaluator``)
  Runs on one or more central management servers to determine when
  alarms fire due to the associated statistic trend crossing a
  threshold over a sliding time window.

A notification listener (``aodh-listener``)
  Runs on a central management server and determines when to fire alarms.
  The alarms are generated based on defined rules against events, which are
  captured by the Telemetry Data Collection service's notification agents.

An alarm notifier (``aodh-notifier``)
  Runs on one or more central management servers to allow alarms to be
  set based on the threshold evaluation for a collection of samples.

These services communicate by using the OpenStack messaging bus. Only
the collector and API server have access to the data store.
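
Once alarming is configured, defined alarms and their current state can be
listed with the aodh client (a minimal check, assuming ``python-aodhclient``
is installed and valid credentials have been sourced):

.. code-block:: console

   $ aodh alarm list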
@@ -86,4 +86,3 @@ OpenStack architecture:
 
    get-started-conceptual-architecture.rst
    get-started-logical-architecture.rst
-   get-started-openstack-services.rst
@@ -1,137 +0,0 @@
.. _additional-services:

===================
Additional services
===================

Installation and configuration of additional OpenStack services is documented
in separate, project-specific installation guides.

Application Catalog service (murano)
====================================

The Application Catalog service (murano) combines an application catalog with
versatile tooling to simplify and accelerate packaging and deployment.

Installation and configuration is documented in the
`Application Catalog installation guide
<https://docs.openstack.org/project-install-guide/application-catalog/draft/>`_.

Bare Metal service (ironic)
===========================

The Bare Metal service is a collection of components that provides
support to manage and provision physical machines.

Installation and configuration is documented in the
`Bare Metal installation guide
<https://docs.openstack.org/project-install-guide/baremetal/draft/>`_.

Container Infrastructure Management service (magnum)
====================================================

The Container Infrastructure Management service (magnum) is an OpenStack API
service making container orchestration engines (COE) such as Docker Swarm,
Kubernetes, and Mesos available as first-class resources in OpenStack.

Installation and configuration is documented in the
`Container Infrastructure Management installation guide
<https://docs.openstack.org/project-install-guide/container-infrastructure-management/draft/>`_.

Database service (trove)
========================

The Database service (trove) provides cloud provisioning functionality for
database engines.

Installation and configuration is documented in the
`Database installation guide
<https://docs.openstack.org/project-install-guide/database/draft/>`_.

DNS service (designate)
=======================

The DNS service (designate) provides cloud provisioning functionality for
DNS zones and recordsets.

Installation and configuration is documented in the
`DNS installation guide
<https://docs.openstack.org/project-install-guide/dns/draft/>`_.

Key Manager service (barbican)
==============================

The Key Manager service provides a RESTful API for the storage and provisioning
of secret data such as passphrases, encryption keys, and X.509 certificates.

Installation and configuration is documented in the
`Key Manager installation guide
<https://docs.openstack.org/project-install-guide/key-manager/draft/>`_.

Messaging service (zaqar)
=========================

The Messaging service allows developers to share data between distributed
application components performing different tasks, without losing messages or
requiring each component to be always available.

Installation and configuration is documented in the
`Messaging installation guide
<https://docs.openstack.org/project-install-guide/messaging/draft/>`_.

Object Storage services (swift)
===============================

The Object Storage services (swift) work together to provide object storage and
retrieval through a REST API.

Installation and configuration is documented in the
`Object Storage installation guide
<https://docs.openstack.org/project-install-guide/object-storage/draft/>`_.

Orchestration service (heat)
============================

The Orchestration service (heat) uses a
`Heat Orchestration Template (HOT)
<https://docs.openstack.org/developer/heat/template_guide/hot_guide.html>`_
to create and manage cloud resources.

Installation and configuration is documented in the
`Orchestration installation guide
<https://docs.openstack.org/project-install-guide/orchestration/draft/>`_.

Shared File Systems service (manila)
====================================

The Shared File Systems service (manila) provides coordinated access to shared
or distributed file systems.

Installation and configuration is documented in the
`Shared File Systems installation guide
<https://docs.openstack.org/project-install-guide/shared-file-systems/draft/>`_.

Telemetry Alarming services (aodh)
==================================

The Telemetry Alarming services trigger alarms when the collected metering or
event data break the defined rules.

Installation and configuration is documented in the
`Telemetry Alarming installation guide
<https://docs.openstack.org/project-install-guide/telemetry-alarming/draft/>`_.

Telemetry Data Collection service (ceilometer)
==============================================

The Telemetry Data Collection services provide the following functions:

* Efficiently polls metering data related to OpenStack services.
* Collects event and metering data by monitoring notifications sent from
  services.
* Publishes collected data to various targets, including data stores and
  message queues.

Installation and configuration is documented in the
`Telemetry Data Collection installation guide
<https://docs.openstack.org/project-install-guide/telemetry/draft/>`_.
@@ -1,71 +0,0 @@
:orphan:

Install and configure the backup service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Optionally, install and configure the backup service. For simplicity,
this configuration uses the Block Storage node and the Object Storage
(swift) driver, thus depending on the
`Object Storage service <https://docs.openstack.org/project-install-guide/object-storage/ocata/>`_.

.. note::

   You must :ref:`install and configure a storage node <cinder-storage>` prior
   to installing and configuring the backup service.

Install and configure components
--------------------------------

.. note::

   Perform these steps on the Block Storage node.

#. Install the packages:

   .. code-block:: console

      # apt install cinder-backup

   .. end

#. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[DEFAULT]`` section, configure backup options:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        backup_driver = cinder.backup.drivers.swift
        backup_swift_url = SWIFT_URL

     .. end

   Replace ``SWIFT_URL`` with the URL of the Object Storage service. The
   URL can be found by showing the object-store API endpoints:

   .. code-block:: console

      $ openstack catalog show object-store

   .. end
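
   To extract just the public URL, the endpoint listing can be filtered
   instead (a sketch; the column and format options depend on your client
   version):

   .. code-block:: console

      $ openstack endpoint list --service object-store \
        --interface public -c URL -f value

   .. end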

Finalize installation
---------------------

Restart the Block Storage backup service:

.. code-block:: console

   # service cinder-backup restart

.. end
@@ -1,73 +0,0 @@
:orphan:

Install and configure the backup service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Optionally, install and configure the backup service. For simplicity,
this configuration uses the Block Storage node and the Object Storage
(swift) driver, thus depending on the
`Object Storage service <https://docs.openstack.org/project-install-guide/object-storage/ocata/>`_.

.. note::

   You must :ref:`install and configure a storage node <cinder-storage>` prior
   to installing and configuring the backup service.

Install and configure components
--------------------------------

.. note::

   Perform these steps on the Block Storage node.

#. Install the packages:

   .. code-block:: console

      # zypper install openstack-cinder-backup

   .. end

#. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[DEFAULT]`` section, configure backup options:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        backup_driver = cinder.backup.drivers.swift
        backup_swift_url = SWIFT_URL

     .. end

   Replace ``SWIFT_URL`` with the URL of the Object Storage service. The
   URL can be found by showing the object-store API endpoints:

   .. code-block:: console

      $ openstack catalog show object-store

   .. end

Finalize installation
---------------------

Start the Block Storage backup service and configure it to
start when the system boots:

.. code-block:: console

   # systemctl enable openstack-cinder-backup.service
   # systemctl start openstack-cinder-backup.service

.. end
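
To confirm that the backup service registered successfully, the volume
service list can be checked afterwards (a minimal check, assuming the
``admin`` credentials have been sourced):

.. code-block:: console

   $ openstack volume service list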
@@ -1,73 +0,0 @@
:orphan:

Install and configure the backup service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Optionally, install and configure the backup service. For simplicity,
this configuration uses the Block Storage node and the Object Storage
(swift) driver, thus depending on the
`Object Storage service <https://docs.openstack.org/project-install-guide/object-storage/ocata/>`_.

.. note::

   You must :ref:`install and configure a storage node <cinder-storage>` prior
   to installing and configuring the backup service.

Install and configure components
--------------------------------

.. note::

   Perform these steps on the Block Storage node.

#. Install the packages:

   .. code-block:: console

      # yum install openstack-cinder

   .. end

#. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[DEFAULT]`` section, configure backup options:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        backup_driver = cinder.backup.drivers.swift
        backup_swift_url = SWIFT_URL

     .. end

   Replace ``SWIFT_URL`` with the URL of the Object Storage service. The
   URL can be found by showing the object-store API endpoints:

   .. code-block:: console

      $ openstack catalog show object-store

   .. end

Finalize installation
---------------------

Start the Block Storage backup service and configure it to
start when the system boots:

.. code-block:: console

   # systemctl enable openstack-cinder-backup.service
   # systemctl start openstack-cinder-backup.service

.. end
@@ -1,71 +0,0 @@
:orphan:

Install and configure the backup service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Optionally, install and configure the backup service. For simplicity,
this configuration uses the Block Storage node and the Object Storage
(swift) driver, thus depending on the
`Object Storage service <https://docs.openstack.org/project-install-guide/object-storage/ocata/>`_.

.. note::

   You must :ref:`install and configure a storage node <cinder-storage>` prior
   to installing and configuring the backup service.

Install and configure components
--------------------------------

.. note::

   Perform these steps on the Block Storage node.

#. Install the packages:

   .. code-block:: console

      # apt install cinder-backup

   .. end

#. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[DEFAULT]`` section, configure backup options:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        backup_driver = cinder.backup.drivers.swift
        backup_swift_url = SWIFT_URL

     .. end

   Replace ``SWIFT_URL`` with the URL of the Object Storage service. The
   URL can be found by showing the object-store API endpoints:

   .. code-block:: console

      $ openstack catalog show object-store

   .. end

Finalize installation
---------------------

Restart the Block Storage backup service:

.. code-block:: console

   # service cinder-backup restart

.. end
@@ -1,11 +0,0 @@
:orphan:

.. _cinder-backup-install:

Install and configure the backup service
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. toctree::
   :glob:

   cinder-backup-install-*
@@ -1,394 +0,0 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Block
Storage service, code-named cinder, on the controller node. This
service requires at least one additional storage node that provides
volumes to instances.

Prerequisites
-------------

Before you install and configure the Block Storage service, you
must create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        $ mysql -u root -p

     .. end

   * Create the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE cinder;

     .. end

   * Grant proper access to the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
          IDENTIFIED BY 'CINDER_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
          IDENTIFIED BY 'CINDER_DBPASS';

     .. end

     Replace ``CINDER_DBPASS`` with a suitable password.

   * Exit the database access client.
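
   As a quick sanity check (an illustrative step, not part of the required
   procedure), the new account can be verified by reconnecting as the
   ``cinder`` user with the password chosen above:

   .. code-block:: console

      $ mysql -u cinder -p cinder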

#. Source the ``admin`` credentials to gain access to admin-only
   CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create a ``cinder`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt cinder

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 9d7e33de3e1a498390353819bc7d245d |
        | name                | cinder                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``cinder`` user:

     .. code-block:: console

        $ openstack role add --project service --user cinder admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``cinderv2`` and ``cinderv3`` service entities:

     .. code-block:: console

        $ openstack service create --name cinderv2 \
          --description "OpenStack Block Storage" volumev2

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | eb9fd245bdbc414695952e93f29fe3ac |
        | name        | cinderv2                         |
        | type        | volumev2                         |
        +-------------+----------------------------------+

     .. end

     .. code-block:: console

        $ openstack service create --name cinderv3 \
          --description "OpenStack Block Storage" volumev3

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | ab3bbbef780845a1a283490d281e7fda |
        | name        | cinderv3                         |
        | type        | volumev3                         |
        +-------------+----------------------------------+

     .. end

     .. note::

        The Block Storage services require two service entities.

#. Create the Block Storage service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev2 public http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 513e73819e14460fb904163f41ef3759         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev2 internal http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 6436a8a23d014cfdb69c586eff146a32         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev2 admin http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | e652cf84dd334f359ae9b045a2c91d96         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

   .. end

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev3 public http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 03fa2c90153546c295bf30ca86b1344b         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev3 internal http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
+--------------+------------------------------------------+
|
|
||||||
| enabled | True |
|
|
||||||
| id | 94f684395d1b41068c70e4ecb11364b2 |
|
|
||||||
| interface | internal |
|
|
||||||
| region | RegionOne |
|
|
||||||
| region_id | RegionOne |
|
|
||||||
| service_id | ab3bbbef780845a1a283490d281e7fda |
|
|
||||||
| service_name | cinderv3 |
|
|
||||||
| service_type | volumev3 |
|
|
||||||
| url | http://controller:8776/v3/%(project_id)s |
|
|
||||||
+--------------+------------------------------------------+
|
|
||||||
|
|
||||||
$ openstack endpoint create --region RegionOne \
|
|
||||||
volumev3 admin http://controller:8776/v3/%\(project_id\)s
|
|
||||||
|
|
||||||
+--------------+------------------------------------------+
|
|
||||||
| Field | Value |
|
|
||||||
+--------------+------------------------------------------+
|
|
||||||
| enabled | True |
|
|
||||||
| id | 4511c28a0f9840c78bacb25f10f62c98 |
|
|
||||||
| interface | admin |
|
|
||||||
| region | RegionOne |
|
|
||||||
| region_id | RegionOne |
|
|
||||||
| service_id | ab3bbbef780845a1a283490d281e7fda |
|
|
||||||
| service_name | cinderv3 |
|
|
||||||
| service_type | volumev3 |
|
|
||||||
| url | http://controller:8776/v3/%(project_id)s |
|
|
||||||
+--------------+------------------------------------------+
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
.. note::
|
|
||||||
|
|
||||||
The Block Storage services require endpoints for each service
|
|
||||||
entity.
|
|
||||||
|
|
||||||
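   To confirm the results, you can list the endpoints registered for
   each new service; three interfaces should appear per service:

   .. code-block:: console

      $ openstack endpoint list --service cinderv2
      $ openstack endpoint list --service cinderv3

   .. end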
Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # apt install cinder-api cinder-scheduler

   .. end

#. Edit the ``/etc/cinder/cinder.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for the
     Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for the
     ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for
     the ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
     use the management interface IP address of the controller node:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = 10.0.0.11

     .. end

#. Populate the Block Storage database:

   .. code-block:: console

      # su -s /bin/sh -c "cinder-manage db sync" cinder

   .. end

   .. note::

      Ignore any deprecation messages in this output.
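   You can optionally verify the result; ``cinder-manage db version``
   should report a schema version rather than an error (a suggested
   check, not a required step):

   .. code-block:: console

      # su -s /bin/sh -c "cinder-manage db version" cinder

   .. end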
Configure Compute to use Block Storage
--------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and add the following
  to it:

  .. path /etc/nova/nova.conf
  .. code-block:: ini

     [cinder]
     os_region_name = RegionOne

  .. end

Finalize installation
---------------------

#. Restart the Compute API service:

   .. code-block:: console

      # service nova-api restart

   .. end

#. Restart the Block Storage services:

   .. code-block:: console

      # service cinder-scheduler restart
      # service apache2 restart

   .. end
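As an optional sanity check, listing the Block Storage services should
now show the ``cinder-scheduler`` process; it reports an ``up`` state
once a storage node is configured as well:

.. code-block:: console

   $ . admin-openrc
   $ openstack volume service list

.. end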
@ -1,394 +0,0 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Block
Storage service, code-named cinder, on the controller node. This
service requires at least one additional storage node that provides
volumes to instances.

Prerequisites
-------------

Before you install and configure the Block Storage service, you
must create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        $ mysql -u root -p

     .. end

   * Create the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE cinder;

     .. end

   * Grant proper access to the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
          IDENTIFIED BY 'CINDER_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
          IDENTIFIED BY 'CINDER_DBPASS';

     .. end

     Replace ``CINDER_DBPASS`` with a suitable password.

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to admin-only
   CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create a ``cinder`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt cinder

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 9d7e33de3e1a498390353819bc7d245d |
        | name                | cinder                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``cinder`` user:

     .. code-block:: console

        $ openstack role add --project service --user cinder admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``cinderv2`` and ``cinderv3`` service entities:

     .. code-block:: console

        $ openstack service create --name cinderv2 \
          --description "OpenStack Block Storage" volumev2

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | eb9fd245bdbc414695952e93f29fe3ac |
        | name        | cinderv2                         |
        | type        | volumev2                         |
        +-------------+----------------------------------+

     .. end

     .. code-block:: console

        $ openstack service create --name cinderv3 \
          --description "OpenStack Block Storage" volumev3

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | ab3bbbef780845a1a283490d281e7fda |
        | name        | cinderv3                         |
        | type        | volumev3                         |
        +-------------+----------------------------------+

     .. end

     .. note::

        The Block Storage services require two service entities.
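     To confirm that both entities were registered, you can list the
     services (the ``id`` values will differ in your environment):

     .. code-block:: console

        $ openstack service list

     .. end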
#. Create the Block Storage service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev2 public http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 513e73819e14460fb904163f41ef3759         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev2 internal http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 6436a8a23d014cfdb69c586eff146a32         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev2 admin http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | e652cf84dd334f359ae9b045a2c91d96         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

   .. end

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev3 public http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 03fa2c90153546c295bf30ca86b1344b         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev3 internal http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 94f684395d1b41068c70e4ecb11364b2         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev3 admin http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 4511c28a0f9840c78bacb25f10f62c98         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

   .. end

   .. note::

      The Block Storage services require endpoints for each service
      entity.

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # zypper install openstack-cinder-api openstack-cinder-scheduler

   .. end

#. Edit the ``/etc/cinder/cinder.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for the
     Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for the
     ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for
     the ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
     use the management interface IP address of the controller node:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = 10.0.0.11

     .. end

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end
Configure Compute to use Block Storage
--------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and add the following
  to it:

  .. path /etc/nova/nova.conf
  .. code-block:: ini

     [cinder]
     os_region_name = RegionOne

  .. end

Finalize installation
---------------------

#. Restart the Compute API service:

   .. code-block:: console

      # systemctl restart openstack-nova-api.service

   .. end

#. Start the Block Storage services and configure them to start when
   the system boots:

   .. code-block:: console

      # systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
      # systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

   .. end
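   To confirm that the units started and are enabled for boot, you can
   query systemd:

   .. code-block:: console

      # systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service

   .. end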
@ -1,407 +0,0 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Block
Storage service, code-named cinder, on the controller node. This
service requires at least one additional storage node that provides
volumes to instances.

Prerequisites
-------------

Before you install and configure the Block Storage service, you
must create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        $ mysql -u root -p

     .. end

   * Create the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE cinder;

     .. end

   * Grant proper access to the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
          IDENTIFIED BY 'CINDER_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
          IDENTIFIED BY 'CINDER_DBPASS';

     .. end

     Replace ``CINDER_DBPASS`` with a suitable password.

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to admin-only
   CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create a ``cinder`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt cinder

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 9d7e33de3e1a498390353819bc7d245d |
        | name                | cinder                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``cinder`` user:

     .. code-block:: console

        $ openstack role add --project service --user cinder admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``cinderv2`` and ``cinderv3`` service entities:

     .. code-block:: console

        $ openstack service create --name cinderv2 \
          --description "OpenStack Block Storage" volumev2

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | eb9fd245bdbc414695952e93f29fe3ac |
        | name        | cinderv2                         |
        | type        | volumev2                         |
        +-------------+----------------------------------+

     .. end

     .. code-block:: console

        $ openstack service create --name cinderv3 \
          --description "OpenStack Block Storage" volumev3

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | ab3bbbef780845a1a283490d281e7fda |
        | name        | cinderv3                         |
        | type        | volumev3                         |
        +-------------+----------------------------------+

     .. end

     .. note::

        The Block Storage services require two service entities.

#. Create the Block Storage service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev2 public http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 513e73819e14460fb904163f41ef3759         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev2 internal http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 6436a8a23d014cfdb69c586eff146a32         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev2 admin http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | e652cf84dd334f359ae9b045a2c91d96         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

   .. end

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev3 public http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 03fa2c90153546c295bf30ca86b1344b         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev3 internal http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 94f684395d1b41068c70e4ecb11364b2         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev3 admin http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 4511c28a0f9840c78bacb25f10f62c98         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

   .. end

   .. note::

      The Block Storage services require endpoints for each service
      entity.

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # yum install openstack-cinder

   .. end

#. Edit the ``/etc/cinder/cinder.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for the
     Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for the
     ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for
     the ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
     use the management interface IP address of the controller node:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = 10.0.0.11

     .. end

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end
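   If you prefer to script these edits, the ``crudini`` utility (if it
   is available on your system) can set each option non-interactively;
   a minimal sketch using the same placeholder passwords:

   .. code-block:: console

      # crudini --set /etc/cinder/cinder.conf database connection \
        "mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder"
      # crudini --set /etc/cinder/cinder.conf DEFAULT transport_url \
        "rabbit://openstack:RABBIT_PASS@controller"

   .. end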
#. Populate the Block Storage database:

   .. code-block:: console

      # su -s /bin/sh -c "cinder-manage db sync" cinder

   .. end

   .. note::

      Ignore any deprecation messages in this output.

Configure Compute to use Block Storage
--------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and add the following
  to it:

  .. path /etc/nova/nova.conf
  .. code-block:: ini

     [cinder]
     os_region_name = RegionOne

  .. end

Finalize installation
---------------------

#. Restart the Compute API service:

   .. code-block:: console

      # systemctl restart openstack-nova-api.service

   .. end

#. Start the Block Storage services and configure them to start when
   the system boots:

   .. code-block:: console

      # systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
      # systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

   .. end
@ -1,406 +0,0 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Block
Storage service, code-named cinder, on the controller node. This
service requires at least one additional storage node that provides
volumes to instances.

Prerequisites
-------------

Before you install and configure the Block Storage service, you
must create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        # mysql

     .. end

   * Create the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE cinder;

     .. end

   * Grant proper access to the ``cinder`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
          IDENTIFIED BY 'CINDER_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
          IDENTIFIED BY 'CINDER_DBPASS';

     .. end

     Replace ``CINDER_DBPASS`` with a suitable password.

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to admin-only
   CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create a ``cinder`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt cinder

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 9d7e33de3e1a498390353819bc7d245d |
        | name                | cinder                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``cinder`` user:

     .. code-block:: console

        $ openstack role add --project service --user cinder admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``cinderv2`` and ``cinderv3`` service entities:

     .. code-block:: console

        $ openstack service create --name cinderv2 \
          --description "OpenStack Block Storage" volumev2

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | eb9fd245bdbc414695952e93f29fe3ac |
        | name        | cinderv2                         |
        | type        | volumev2                         |
        +-------------+----------------------------------+

     .. end

     .. code-block:: console

        $ openstack service create --name cinderv3 \
          --description "OpenStack Block Storage" volumev3

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Block Storage          |
        | enabled     | True                             |
        | id          | ab3bbbef780845a1a283490d281e7fda |
        | name        | cinderv3                         |
        | type        | volumev3                         |
        +-------------+----------------------------------+

     .. end

     .. note::

        The Block Storage services require two service entities.

#. Create the Block Storage service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev2 public http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 513e73819e14460fb904163f41ef3759         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev2 internal http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 6436a8a23d014cfdb69c586eff146a32         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev2 admin http://controller:8776/v2/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | e652cf84dd334f359ae9b045a2c91d96         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | eb9fd245bdbc414695952e93f29fe3ac         |
      | service_name | cinderv2                                 |
      | service_type | volumev2                                 |
      | url          | http://controller:8776/v2/%(project_id)s |
      +--------------+------------------------------------------+

   .. end

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        volumev3 public http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 03fa2c90153546c295bf30ca86b1344b         |
      | interface    | public                                   |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev3 internal http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 94f684395d1b41068c70e4ecb11364b2         |
      | interface    | internal                                 |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

      $ openstack endpoint create --region RegionOne \
        volumev3 admin http://controller:8776/v3/%\(project_id\)s

      +--------------+------------------------------------------+
      | Field        | Value                                    |
      +--------------+------------------------------------------+
      | enabled      | True                                     |
      | id           | 4511c28a0f9840c78bacb25f10f62c98         |
      | interface    | admin                                    |
      | region       | RegionOne                                |
      | region_id    | RegionOne                                |
      | service_id   | ab3bbbef780845a1a283490d281e7fda         |
      | service_name | cinderv3                                 |
      | service_type | volumev3                                 |
      | url          | http://controller:8776/v3/%(project_id)s |
      +--------------+------------------------------------------+

   .. end

   .. note::

      The Block Storage services require endpoints for each service
      entity.

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # apt install cinder-api cinder-scheduler

   .. end

#. Edit the ``/etc/cinder/cinder.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for the
     Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for the
     ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for
     the ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option to
     use the management interface IP address of the controller node:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = 10.0.0.11

     .. end

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end

#. Populate the Block Storage database:

   .. code-block:: console

      # su -s /bin/sh -c "cinder-manage db sync" cinder

   .. end

   .. note::

      Ignore any deprecation messages in this output.

Configure Compute to use Block Storage
--------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and add the following
  to it:

  .. path /etc/nova/nova.conf
  .. code-block:: ini

     [cinder]
     os_region_name = RegionOne

  .. end
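  The region name must match one registered in the Identity service.
  If you are unsure, you can list the known regions (the example
  deployment in this guide uses only ``RegionOne``):

  .. code-block:: console

     $ openstack region list

  .. end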
Finalize installation
---------------------

#. Restart the Compute API service:

   .. code-block:: console

      # service nova-api restart

   .. end

#. Restart the Block Storage services:

   .. code-block:: console

      # service cinder-scheduler restart
      # service apache2 restart

   .. end
@ -1,9 +0,0 @@
.. _cinder-controller:

Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. toctree::
   :glob:

   cinder-controller-install-*
@ -1,9 +0,0 @@
.. _cinder-next-steps:

==========
Next steps
==========

Your OpenStack environment now includes Block Storage. You can
:doc:`launch an instance <launch-instance>` or add more
services to your environment in the following chapters.
@ -1,263 +0,0 @@
Install and configure a storage node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure storage nodes
for the Block Storage service. For simplicity, this configuration
references one storage node with an empty local block storage device.
The instructions use ``/dev/sdb``, but you can substitute a different
value for your particular node.

The service provisions logical volumes on this device using the
:term:`LVM <Logical Volume Manager (LVM)>` driver and provides them
to instances via :term:`iSCSI <iSCSI Qualified Name (IQN)>` transport.
You can follow these instructions with minor modifications to horizontally
scale your environment with additional storage nodes.

Prerequisites
-------------

Before you install and configure the Block Storage service on the
storage node, you must prepare the storage device.

.. note::

   Perform these steps on the storage node.

#. Install the supporting utility packages:
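   On Ubuntu, for example, the LVM user-space tools are typically
   provided by the ``lvm2`` package (package names vary by
   distribution and release):

   .. code-block:: console

      # apt install lvm2

   .. end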
   .. note::

      Some distributions include LVM by default.

#. Create the LVM physical volume ``/dev/sdb``:

   .. code-block:: console

      # pvcreate /dev/sdb

      Physical volume "/dev/sdb" successfully created

   .. end

#. Create the LVM volume group ``cinder-volumes``:

   .. code-block:: console

      # vgcreate cinder-volumes /dev/sdb

      Volume group "cinder-volumes" successfully created

   .. end

   The Block Storage service creates logical volumes in this volume group.

#. Only instances can access Block Storage volumes. However, the
   underlying operating system manages the devices associated with
   the volumes. By default, the LVM volume scanning tool scans the
   ``/dev`` directory for block storage devices that
   contain volumes. If projects use LVM on their volumes, the scanning
   tool detects these volumes and attempts to cache them, which can cause
   a variety of problems with both the underlying operating system
   and project volumes. You must reconfigure LVM to scan only the devices
   that contain the ``cinder-volumes`` volume group. Edit the
   ``/etc/lvm/lvm.conf`` file and complete the following actions:

   * In the ``devices`` section, add a filter that accepts the
     ``/dev/sdb`` device and rejects all other devices:

     .. path /etc/lvm/lvm.conf
     .. code-block:: none

        devices {
        ...
        filter = [ "a/sdb/", "r/.*/"]

     .. end

     Each item in the filter array begins with ``a`` for **accept** or
     ``r`` for **reject** and includes a regular expression for the
     device name. The array must end with ``r/.*/`` to reject any
     remaining devices. You can use the :command:`vgs -vvvv` command
     to test filters.

     .. warning::

        If your storage nodes use LVM on the operating system disk, you
        must also add the associated device to the filter. For example,
        if the ``/dev/sda`` device contains the operating system:

        .. ignore_path /etc/lvm/lvm.conf
        .. code-block:: ini

           filter = [ "a/sda/", "a/sdb/", "r/.*/"]

        .. end

        Similarly, if your compute nodes use LVM on the operating
        system disk, you must also modify the filter in the
        ``/etc/lvm/lvm.conf`` file on those nodes to include only
        the operating system disk. For example, if the ``/dev/sda``
        device contains the operating system:

        .. ignore_path /etc/lvm/lvm.conf
        .. code-block:: ini

           filter = [ "a/sda/", "r/.*/"]

        .. end
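   After editing the filter, you can optionally confirm that LVM now
   scans only the intended devices; ``pvs`` should list ``/dev/sdb``
   (plus any operating system disk you explicitly accepted) and
   nothing else:

   .. code-block:: console

      # pvs

   .. end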
Install and configure components
|
|
||||||
--------------------------------
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
#. Install the packages:
|
|
||||||
|
|
||||||
.. code-block:: console
|
|
||||||
|
|
||||||
# apt install cinder-volume
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
|
|
||||||
2. Edit the ``/etc/cinder/cinder.conf`` file
|
|
||||||
and complete the following actions:
|
|
||||||
|
|
||||||
* In the ``[database]`` section, configure database access:
|
|
||||||
|
|
||||||
.. path /etc/cinder/cinder.conf
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[database]
|
|
||||||
# ...
|
|
||||||
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
Replace ``CINDER_DBPASS`` with the password you chose for
|
|
||||||
the Block Storage database.
|
|
||||||

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for
     the ``openstack`` account in ``RabbitMQ``.
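
     If message queue access fails later, one hedged way to confirm
     that the ``openstack`` account exists is to list users on the
     controller node, where ``RabbitMQ`` runs:

     .. code-block:: console

        # rabbitmqctl list_users

     .. end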

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for the
     ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

     .. end

     Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
     of the management network interface on your storage node,
     typically 10.0.0.41 for the first node in the
     :ref:`example architecture <overview-example-architectures>`.
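
     For example, assuming a hypothetical management interface name
     such as ``ens3``, you can look up this address on the storage
     node with:

     .. code-block:: console

        $ ip -4 addr show ens3

     .. end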

   * In the ``[DEFAULT]`` section, enable the LVM back end:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        enabled_backends = lvm

     .. end

     .. note::

        Back-end names are arbitrary. As an example, this guide
        uses the name of the driver as the name of the back end.
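
     For illustration, a hedged sketch of a two-back-end layout; the
     section name ``lvm-ssd`` and the ``cinder-volumes-ssd`` volume
     group are hypothetical, not part of this guide. Each name listed
     in ``enabled_backends`` refers to a configuration section of the
     same name:

     .. code-block:: ini

        [DEFAULT]
        enabled_backends = lvm,lvm-ssd

        [lvm]
        volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
        volume_group = cinder-volumes

        [lvm-ssd]
        volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
        volume_group = cinder-volumes-ssd

     .. end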

   * In the ``[DEFAULT]`` section, configure the location of the
     Image service API:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        glance_api_servers = http://controller:9292

     .. end

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end

Finalize installation
---------------------

#. Restart the Block Storage volume service including its dependencies:

   .. code-block:: console

      # service tgt restart
      # service cinder-volume restart

   .. end
@ -1,309 +0,0 @@
Install and configure a storage node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure storage nodes
for the Block Storage service. For simplicity, this configuration
references one storage node with an empty local block storage device.
The instructions use ``/dev/sdb``, but you can substitute a different
value for your particular node.

The service provisions logical volumes on this device using the
:term:`LVM <Logical Volume Manager (LVM)>` driver and provides them
to instances via :term:`iSCSI <iSCSI Qualified Name (IQN)>` transport.
You can follow these instructions with minor modifications to horizontally
scale your environment with additional storage nodes.

Prerequisites
-------------

Before you install and configure the Block Storage service on the
storage node, you must prepare the storage device.

.. note::

   Perform these steps on the storage node.

#. Install the supporting utility packages:

   * Install the LVM packages:

     .. code-block:: console

        # zypper install lvm2

     .. end

   * (Optional) If you intend to use non-raw image types such as QCOW2
     and VMDK, install the QEMU package:

     .. code-block:: console

        # zypper install qemu

     .. end

   .. note::

      Some distributions include LVM by default.

#. Create the LVM physical volume ``/dev/sdb``:

   .. code-block:: console

      # pvcreate /dev/sdb

      Physical volume "/dev/sdb" successfully created

   .. end

#. Create the LVM volume group ``cinder-volumes``:

   .. code-block:: console

      # vgcreate cinder-volumes /dev/sdb

      Volume group "cinder-volumes" successfully created

   .. end

   The Block Storage service creates logical volumes in this volume group.

#. Only instances can access Block Storage volumes. However, the
   underlying operating system manages the devices associated with
   the volumes. By default, the LVM volume scanning tool scans the
   ``/dev`` directory for block storage devices that
   contain volumes. If projects use LVM on their volumes, the scanning
   tool detects these volumes and attempts to cache them, which can cause
   a variety of problems with both the underlying operating system
   and project volumes. You must reconfigure LVM to scan only the devices
   that contain the ``cinder-volumes`` volume group. Edit the
   ``/etc/lvm/lvm.conf`` file and complete the following actions:

   * In the ``devices`` section, add a filter that accepts the
     ``/dev/sdb`` device and rejects all other devices:

     .. path /etc/lvm/lvm.conf
     .. code-block:: none

        devices {
        ...
        filter = [ "a/sdb/", "r/.*/"]

     .. end

     Each item in the filter array begins with ``a`` for **accept** or
     ``r`` for **reject** and includes a regular expression for the
     device name. The array must end with ``r/.*/`` to reject any
     remaining devices. You can use the :command:`vgs -vvvv` command
     to test filters.

     .. warning::

        If your storage nodes use LVM on the operating system disk, you
        must also add the associated device to the filter. For example,
        if the ``/dev/sda`` device contains the operating system:

        .. ignore_path /etc/lvm/lvm.conf
        .. code-block:: ini

           filter = [ "a/sda/", "a/sdb/", "r/.*/"]

        .. end

        Similarly, if your compute nodes use LVM on the operating
        system disk, you must also modify the filter in the
        ``/etc/lvm/lvm.conf`` file on those nodes to include only
        the operating system disk. For example, if the ``/dev/sda``
        device contains the operating system:

        .. path /etc/lvm/lvm.conf
        .. code-block:: ini

           filter = [ "a/sda/", "r/.*/"]

        .. end

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # zypper install openstack-cinder-volume tgt

   .. end

2. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for
     the Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for
     the ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for the
     ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

     .. end

     Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
     of the management network interface on your storage node,
     typically 10.0.0.41 for the first node in the
     :ref:`example architecture <overview-example-architectures>`.

   * In the ``[lvm]`` section, configure the LVM back end with the
     LVM driver, ``cinder-volumes`` volume group, iSCSI protocol,
     and appropriate iSCSI service:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [lvm]
        # ...
        volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
        volume_group = cinder-volumes
        iscsi_protocol = iscsi
        iscsi_helper = tgtadm

     .. end

   * In the ``[DEFAULT]`` section, enable the LVM back end:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        enabled_backends = lvm

     .. end

     .. note::

        Back-end names are arbitrary. As an example, this guide
        uses the name of the driver as the name of the back end.

   * In the ``[DEFAULT]`` section, configure the location of the
     Image service API:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        glance_api_servers = http://controller:9292

     .. end

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end

3. Create the ``/etc/tgt/conf.d/cinder.conf`` file
   with the following data:

   .. code-block:: shell

      include /var/lib/cinder/volumes/*

   .. end
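
   Equivalently, as a hedged convenience rather than a required step,
   you can create the file from the shell in one line (the quotes keep
   the ``*`` from being expanded):

   .. code-block:: console

      # echo "include /var/lib/cinder/volumes/*" > /etc/tgt/conf.d/cinder.conf

   .. end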

Finalize installation
---------------------

* Start the Block Storage volume service including its dependencies
  and configure them to start when the system boots:

  .. code-block:: console

     # systemctl enable openstack-cinder-volume.service tgtd.service
     # systemctl start openstack-cinder-volume.service tgtd.service

  .. end
@ -1,300 +0,0 @@
Install and configure a storage node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure storage nodes
for the Block Storage service. For simplicity, this configuration
references one storage node with an empty local block storage device.
The instructions use ``/dev/sdb``, but you can substitute a different
value for your particular node.

The service provisions logical volumes on this device using the
:term:`LVM <Logical Volume Manager (LVM)>` driver and provides them
to instances via :term:`iSCSI <iSCSI Qualified Name (IQN)>` transport.
You can follow these instructions with minor modifications to horizontally
scale your environment with additional storage nodes.

Prerequisites
-------------

Before you install and configure the Block Storage service on the
storage node, you must prepare the storage device.

.. note::

   Perform these steps on the storage node.

#. Install the supporting utility packages:

   * Install the LVM packages:

     .. code-block:: console

        # yum install lvm2

     .. end

   * Start the LVM metadata service and configure it to start when the
     system boots:

     .. code-block:: console

        # systemctl enable lvm2-lvmetad.service
        # systemctl start lvm2-lvmetad.service

     .. end

   .. note::

      Some distributions include LVM by default.

#. Create the LVM physical volume ``/dev/sdb``:

   .. code-block:: console

      # pvcreate /dev/sdb

      Physical volume "/dev/sdb" successfully created

   .. end

#. Create the LVM volume group ``cinder-volumes``:

   .. code-block:: console

      # vgcreate cinder-volumes /dev/sdb

      Volume group "cinder-volumes" successfully created

   .. end

   The Block Storage service creates logical volumes in this volume group.

#. Only instances can access Block Storage volumes. However, the
   underlying operating system manages the devices associated with
   the volumes. By default, the LVM volume scanning tool scans the
   ``/dev`` directory for block storage devices that
   contain volumes. If projects use LVM on their volumes, the scanning
   tool detects these volumes and attempts to cache them, which can cause
   a variety of problems with both the underlying operating system
   and project volumes. You must reconfigure LVM to scan only the devices
   that contain the ``cinder-volumes`` volume group. Edit the
   ``/etc/lvm/lvm.conf`` file and complete the following actions:

   * In the ``devices`` section, add a filter that accepts the
     ``/dev/sdb`` device and rejects all other devices:

     .. path /etc/lvm/lvm.conf
     .. code-block:: none

        devices {
        ...
        filter = [ "a/sdb/", "r/.*/"]

     .. end

     Each item in the filter array begins with ``a`` for **accept** or
     ``r`` for **reject** and includes a regular expression for the
     device name. The array must end with ``r/.*/`` to reject any
     remaining devices. You can use the :command:`vgs -vvvv` command
     to test filters.

     .. warning::

        If your storage nodes use LVM on the operating system disk, you
        must also add the associated device to the filter. For example,
        if the ``/dev/sda`` device contains the operating system:

        .. ignore_path /etc/lvm/lvm.conf
        .. code-block:: ini

           filter = [ "a/sda/", "a/sdb/", "r/.*/"]

        .. end

        Similarly, if your compute nodes use LVM on the operating
        system disk, you must also modify the filter in the
        ``/etc/lvm/lvm.conf`` file on those nodes to include only
        the operating system disk. For example, if the ``/dev/sda``
        device contains the operating system:

        .. path /etc/lvm/lvm.conf
        .. code-block:: ini

           filter = [ "a/sda/", "r/.*/"]

        .. end

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # yum install openstack-cinder targetcli python-keystone

   .. end

2. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for
     the Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for
     the ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for the
     ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

     .. end

     Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
     of the management network interface on your storage node,
     typically 10.0.0.41 for the first node in the
     :ref:`example architecture <overview-example-architectures>`.

   * In the ``[lvm]`` section, configure the LVM back end with the
     LVM driver, ``cinder-volumes`` volume group, iSCSI protocol,
     and appropriate iSCSI service. If the ``[lvm]`` section does not exist,
     create it:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [lvm]
        volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
        volume_group = cinder-volumes
        iscsi_protocol = iscsi
        iscsi_helper = lioadm

     .. end

   * In the ``[DEFAULT]`` section, enable the LVM back end:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        enabled_backends = lvm

     .. end

     .. note::

        Back-end names are arbitrary. As an example, this guide
        uses the name of the driver as the name of the back end.

   * In the ``[DEFAULT]`` section, configure the location of the
     Image service API:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        glance_api_servers = http://controller:9292

     .. end

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end

Finalize installation
---------------------

* Start the Block Storage volume service including its dependencies
  and configure them to start when the system boots:

  .. code-block:: console

     # systemctl enable openstack-cinder-volume.service target.service
     # systemctl start openstack-cinder-volume.service target.service

  .. end
@ -1,287 +0,0 @@
Install and configure a storage node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure storage nodes
for the Block Storage service. For simplicity, this configuration
references one storage node with an empty local block storage device.
The instructions use ``/dev/sdb``, but you can substitute a different
value for your particular node.

The service provisions logical volumes on this device using the
:term:`LVM <Logical Volume Manager (LVM)>` driver and provides them
to instances via :term:`iSCSI <iSCSI Qualified Name (IQN)>` transport.
You can follow these instructions with minor modifications to horizontally
scale your environment with additional storage nodes.

Prerequisites
-------------

Before you install and configure the Block Storage service on the
storage node, you must prepare the storage device.

.. note::

   Perform these steps on the storage node.

#. Install the supporting utility packages:

   .. code-block:: console

      # apt install lvm2

   .. end

   .. note::

      Some distributions include LVM by default.

#. Create the LVM physical volume ``/dev/sdb``:

   .. code-block:: console

      # pvcreate /dev/sdb

      Physical volume "/dev/sdb" successfully created

   .. end

#. Create the LVM volume group ``cinder-volumes``:

   .. code-block:: console

      # vgcreate cinder-volumes /dev/sdb

      Volume group "cinder-volumes" successfully created

   .. end

   The Block Storage service creates logical volumes in this volume group.

#. Only instances can access Block Storage volumes. However, the
   underlying operating system manages the devices associated with
   the volumes. By default, the LVM volume scanning tool scans the
   ``/dev`` directory for block storage devices that
   contain volumes. If projects use LVM on their volumes, the scanning
   tool detects these volumes and attempts to cache them, which can cause
   a variety of problems with both the underlying operating system
   and project volumes. You must reconfigure LVM to scan only the devices
   that contain the ``cinder-volumes`` volume group. Edit the
   ``/etc/lvm/lvm.conf`` file and complete the following actions:

   * In the ``devices`` section, add a filter that accepts the
     ``/dev/sdb`` device and rejects all other devices:

     .. path /etc/lvm/lvm.conf
     .. code-block:: none

        devices {
        ...
        filter = [ "a/sdb/", "r/.*/"]

     .. end

     Each item in the filter array begins with ``a`` for **accept** or
     ``r`` for **reject** and includes a regular expression for the
     device name. The array must end with ``r/.*/`` to reject any
     remaining devices. You can use the :command:`vgs -vvvv` command
     to test filters.

     .. warning::

        If your storage nodes use LVM on the operating system disk, you
        must also add the associated device to the filter. For example,
        if the ``/dev/sda`` device contains the operating system:

        .. ignore_path /etc/lvm/lvm.conf
        .. code-block:: ini

           filter = [ "a/sda/", "a/sdb/", "r/.*/"]

        .. end

        Similarly, if your compute nodes use LVM on the operating
        system disk, you must also modify the filter in the
        ``/etc/lvm/lvm.conf`` file on those nodes to include only
        the operating system disk. For example, if the ``/dev/sda``
        device contains the operating system:

        .. path /etc/lvm/lvm.conf
        .. code-block:: ini

           filter = [ "a/sda/", "r/.*/"]

        .. end

Install and configure components
--------------------------------

#. Install the packages:

   .. code-block:: console

      # apt install cinder-volume

   .. end

2. Edit the ``/etc/cinder/cinder.conf`` file
   and complete the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder

     .. end

     Replace ``CINDER_DBPASS`` with the password you chose for
     the Block Storage database.

   * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
     message queue access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        transport_url = rabbit://openstack:RABBIT_PASS@controller

     .. end

     Replace ``RABBIT_PASS`` with the password you chose for
     the ``openstack`` account in ``RabbitMQ``.

   * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections,
     configure Identity service access:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        auth_strategy = keystone

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = cinder
        password = CINDER_PASS

     .. end

     Replace ``CINDER_PASS`` with the password you chose for the
     ``cinder`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[DEFAULT]`` section, configure the ``my_ip`` option:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS

     .. end

     Replace ``MANAGEMENT_INTERFACE_IP_ADDRESS`` with the IP address
     of the management network interface on your storage node,
     typically 10.0.0.41 for the first node in the
     :ref:`example architecture <overview-example-architectures>`.

   * In the ``[lvm]`` section, configure the LVM back end with the
     LVM driver, ``cinder-volumes`` volume group, iSCSI protocol,
     and appropriate iSCSI service:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [lvm]
        # ...
        volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
        volume_group = cinder-volumes
        iscsi_protocol = iscsi
        iscsi_helper = tgtadm

     .. end

   * In the ``[DEFAULT]`` section, enable the LVM back end:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        enabled_backends = lvm

     .. end

     .. note::

        Back-end names are arbitrary. As an example, this guide
        uses the name of the driver as the name of the back end.

   * In the ``[DEFAULT]`` section, configure the location of the
     Image service API:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [DEFAULT]
        # ...
        glance_api_servers = http://controller:9292

     .. end

   * In the ``[oslo_concurrency]`` section, configure the lock path:

     .. path /etc/cinder/cinder.conf
     .. code-block:: ini

        [oslo_concurrency]
        # ...
        lock_path = /var/lib/cinder/tmp

     .. end

Finalize installation
---------------------

#. Restart the Block Storage volume service including its dependencies:

   .. code-block:: console

      # service tgt restart
      # service cinder-volume restart

   .. end
@ -1,9 +0,0 @@
.. _cinder-storage:

Install and configure a storage node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. toctree::
   :glob:

   cinder-storage-install-*
@ -1,35 +0,0 @@
.. _cinder-verify:

Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the Block Storage service.

.. note::

   Perform these commands on the controller node.

#. Source the ``admin`` credentials to gain access to
   admin-only CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. List service components to verify successful launch of each process:

   .. code-block:: console

      $ openstack volume service list

      +------------------+------------+------+---------+-------+----------------------------+
      | Binary           | Host       | Zone | Status  | State | Updated_at                 |
      +------------------+------------+------+---------+-------+----------------------------+
      | cinder-scheduler | controller | nova | enabled | up    | 2016-09-30T02:27:41.000000 |
      | cinder-volume    | block@lvm  | nova | enabled | up    | 2016-09-30T02:27:46.000000 |
      +------------------+------------+------+---------+-------+----------------------------+

   .. end
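
If a service reports ``down``, a hedged first step is to inspect its
log on the node that runs it; ``/var/log/cinder/`` is a typical
location, though the exact path varies by distribution:

.. code-block:: console

   # tail -n 50 /var/log/cinder/cinder-volume.log

.. end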
@ -1,26 +0,0 @@
.. _cinder:

=====================
Block Storage service
=====================

.. toctree::

   common/get-started-block-storage.rst
   cinder-controller-install.rst
   cinder-storage-install.rst
   cinder-verify.rst
   cinder-next-steps.rst

The Block Storage service (cinder) provides block storage devices
to guest instances. The method by which the storage is provisioned and
consumed is determined by the Block Storage driver, or drivers
in the case of a multi-backend configuration. A variety of
drivers are available: NAS/SAN, NFS, iSCSI, Ceph, and more.

The Block Storage API and scheduler services typically run on the controller
nodes. Depending upon the drivers used, the volume service can run
on controller nodes, compute nodes, or standalone storage nodes.

For more information, see the
`Configuration Reference <https://docs.openstack.org/ocata/config-reference/block-storage/volume-drivers.html>`_.
@ -80,19 +80,12 @@ release = '15.0.0'

 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
-exclude_patterns = ['common/cli*', 'common/nova*',
-                    'common/get-started-with-openstack.rst',
-                    'common/get-started-openstack-services.rst',
-                    'common/get-started-logical-architecture.rst',
-                    'common/get-started-dashboard.rst',
-                    'common/get-started-storage-concepts.rst',
-                    'common/get-started-database-service.rst',
-                    'common/get-started-data-processing.rst',
-                    'common/get-started-object-storage.rst',
-                    'common/get-started-orchestration.rst',
-                    'common/get-started-shared-file-systems.rst',
-                    'common/get-started-telemetry.rst',
-                    'shared/note_configuration_vary_by_distribution.rst']
+exclude_patterns = [
+    'common/cli*',
+    'common/nova*',
+    'common/get-started-*.rst',
+    'shared/note_configuration_vary_by_distribution.rst',
+]

 # The reST default role (used for this markup: `text`) to use for all
 # documents.
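
For reference, a hedged sketch of why the consolidation is safe: it
assumes Sphinx applies ``exclude_patterns`` with glob-style matching,
which ``fnmatch`` approximates, so every explicitly listed
``get-started`` file falls under the new wildcard.

.. code-block:: python

   import fnmatch

   # A few of the entries removed from conf.py; every one should fall
   # under the consolidated wildcard pattern.
   removed_entries = [
       'common/get-started-with-openstack.rst',
       'common/get-started-dashboard.rst',
       'common/get-started-telemetry.rst',
   ]

   assert all(fnmatch.fnmatch(path, 'common/get-started-*.rst')
              for path in removed_entries)

.. end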
@ -1,329 +0,0 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Image service,
code-named glance, on the controller node. For simplicity, this
configuration stores images on the local file system.

Prerequisites
-------------

Before you install and configure the Image service, you must
create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        $ mysql -u root -p

     .. end

   * Create the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE glance;

     .. end

   * Grant proper access to the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
          IDENTIFIED BY 'GLANCE_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
          IDENTIFIED BY 'GLANCE_DBPASS';

     .. end

     Replace ``GLANCE_DBPASS`` with a suitable password.

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to
   admin-only CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create the ``glance`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt glance

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 3f4e777c4062483ab8d9edd7dff829df |
        | name                | glance                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``glance`` user and
     ``service`` project:

     .. code-block:: console

        $ openstack role add --project service --user glance admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``glance`` service entity:

     .. code-block:: console

        $ openstack service create --name glance \
          --description "OpenStack Image" image

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Image                  |
        | enabled     | True                             |
        | id          | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
        | name        | glance                           |
        | type        | image                            |
        +-------------+----------------------------------+

     .. end

#. Create the Image service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        image public http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 340be3625e9b4239a6415d034e98aace |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image internal http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | a6e4b153c2ae4c919eccfdbb7dceb5d2 |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image admin http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 0c37ed58103f4300a84ff125a539032d |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

   .. end

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

#. Install the packages:

   .. code-block:: console

      # apt install glance

   .. end

2. Edit the ``/etc/glance/glance-api.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        # ...
        flavor = keystone

     .. end

     Replace ``GLANCE_PASS`` with the password you chose for the
     ``glance`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[glance_store]`` section, configure the local file
     system store and location of image files:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [glance_store]
        # ...
        stores = file,http
        default_store = file
        filesystem_store_datadir = /var/lib/glance/images/

     .. end
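
     A hedged sanity check, not part of the official steps: once the
     packages are installed, confirm the image directory exists and is
     owned by the ``glance`` user:

     .. code-block:: console

        # ls -ld /var/lib/glance/images/

     .. end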

3. Edit the ``/etc/glance/glance-registry.conf`` file and complete
   the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        # ...
        flavor = keystone

     .. end

     Replace ``GLANCE_PASS`` with the password you chose for the
     ``glance`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

4. Populate the Image service database:

   .. code-block:: console

      # su -s /bin/sh -c "glance-manage db_sync" glance

   .. end

   .. note::

      Ignore any deprecation messages in this output.
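
   To confirm the sync worked, a hedged check (table names vary by
   release) is to list the tables it created:

   .. code-block:: console

      $ mysql -u glance -p -h controller glance -e "SHOW TABLES;"

   .. end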

Finalize installation
---------------------

#. Restart the Image services:

   .. code-block:: console

      # service glance-registry restart
      # service glance-api restart

   .. end
@ -1,333 +0,0 @@
|
|||||||
Install and configure
|
|
||||||
~~~~~~~~~~~~~~~~~~~~~
|
|
||||||
|
|
||||||
This section describes how to install and configure the Image service,
|
|
||||||
code-named glance, on the controller node. For simplicity, this
|
|
||||||
configuration stores images on the local file system.
|
|
||||||
|
|
||||||
Prerequisites
|
|
||||||
-------------
|
|
||||||
|
|
||||||
Before you install and configure the Image service, you must
|
|
||||||
create a database, service credentials, and API endpoints.
|
|
||||||
|
|
||||||
#. To create the database, complete these steps:
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
* Use the database access client to connect to the database
|
|
||||||
server as the ``root`` user:
|
|
||||||
|
|
||||||
.. code-block:: console
|
|
||||||
|
|
||||||
$ mysql -u root -p
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
|
|
||||||
* Create the ``glance`` database:
|
|
||||||
|
|
||||||
.. code-block:: console
|
|
||||||
|
|
||||||
MariaDB [(none)]> CREATE DATABASE glance;
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
* Grant proper access to the ``glance`` database:
|
|
||||||
|
|
||||||
.. code-block:: console
|
|
||||||
|
|
||||||
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
|
|
||||||
IDENTIFIED BY 'GLANCE_DBPASS';
|
|
||||||
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
|
|
||||||
IDENTIFIED BY 'GLANCE_DBPASS';
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
Replace ``GLANCE_DBPASS`` with a suitable password.
|
|
||||||
|
|
||||||
* Exit the database access client.
|
|
||||||
|
|
||||||
#. Source the ``admin`` credentials to gain access to
|
|
||||||
admin-only CLI commands:
|
|
||||||
|
|
||||||
.. code-block:: console
|
|
||||||
|
|
||||||
$ . admin-openrc
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
#. To create the service credentials, complete these steps:
|
|
||||||
|
|
||||||
* Create the ``glance`` user:
|
|
||||||
|
|
||||||
.. code-block:: console
|
|
||||||
|
|
||||||
$ openstack user create --domain default --password-prompt glance
|
|
||||||
|
|
||||||
User Password:
|
|
||||||
Repeat User Password:
|
|
||||||
+---------------------+----------------------------------+
|
|
||||||
| Field | Value |
|
|
||||||
+---------------------+----------------------------------+
|
|
||||||
| domain_id | default |
|
|
||||||
| enabled | True |
|
|
||||||
| id | 3f4e777c4062483ab8d9edd7dff829df |
|
|
||||||
| name | glance |
|
|
||||||
| options | {} |
|
|
||||||
| password_expires_at | None |
|
|
||||||
+---------------------+----------------------------------+
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
* Add the ``admin`` role to the ``glance`` user and
|
|
||||||
``service`` project:
|
|
||||||
|
|
||||||
.. code-block:: console
|
|
||||||
|
|
||||||
$ openstack role add --project service --user glance admin
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
.. note::
|
|
||||||
|
|
||||||
This command provides no output.
|
|
||||||
|
|
||||||
* Create the ``glance`` service entity:
|
|
||||||
|
|
||||||
.. code-block:: console
|
|
||||||
|
|
||||||
$ openstack service create --name glance \
|
|
||||||
--description "OpenStack Image" image
|
|
||||||
|
|
||||||
+-------------+----------------------------------+
|
|
||||||
| Field | Value |
|
|
||||||
+-------------+----------------------------------+
|
|
||||||
| description | OpenStack Image |
|
|
||||||
| enabled | True |
|
|
||||||
| id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
|
|
||||||
| name | glance |
|
|
||||||
| type | image |
|
|
||||||
+-------------+----------------------------------+
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
#. Create the Image service API endpoints:
|
|
||||||
|
|
||||||
.. code-block:: console
|
|
||||||
|
|
||||||
$ openstack endpoint create --region RegionOne \
|
|
||||||
image public http://controller:9292
|
|
||||||
|
|
||||||
+--------------+----------------------------------+
|
|
||||||
| Field | Value |
|
|
||||||
+--------------+----------------------------------+
|
|
||||||
| enabled | True |
|
|
||||||
| id | 340be3625e9b4239a6415d034e98aace |
|
|
||||||
| interface | public |
|
|
||||||
| region | RegionOne |
|
|
||||||
| region_id | RegionOne |
|
|
||||||
| service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
|
|
||||||
| service_name | glance |
|
|
||||||
| service_type | image |
|
|
||||||
| url | http://controller:9292 |
|
|
||||||
+--------------+----------------------------------+
|
|
||||||
|
|
||||||
$ openstack endpoint create --region RegionOne \
|
|
||||||
image internal http://controller:9292
|
|
||||||
|
|
||||||
+--------------+----------------------------------+
|
|
||||||
| Field | Value |
|
|
||||||
+--------------+----------------------------------+
|
|
||||||
| enabled | True |
|
|
||||||
| id | a6e4b153c2ae4c919eccfdbb7dceb5d2 |
|
|
||||||
| interface | internal |
|
|
||||||
| region | RegionOne |
|
|
||||||
| region_id | RegionOne |
|
|
||||||
| service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
|
|
||||||
| service_name | glance |
|
|
||||||
| service_type | image |
|
|
||||||
| url | http://controller:9292 |
|
|
||||||
+--------------+----------------------------------+
|
|
||||||
|
|
||||||
$ openstack endpoint create --region RegionOne \
|
|
||||||
image admin http://controller:9292
|
|
||||||
|
|
||||||
+--------------+----------------------------------+
|
|
||||||
| Field | Value |
|
|
||||||
+--------------+----------------------------------+
|
|
||||||
| enabled | True |
|
|
||||||
| id | 0c37ed58103f4300a84ff125a539032d |
|
|
||||||
| interface | admin |
|
|
||||||
| region | RegionOne |
|
|
||||||
| region_id | RegionOne |
|
|
||||||
| service_id | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
|
|
||||||
| service_name | glance |
|
|
||||||
| service_type | image |
|
|
||||||
| url | http://controller:9292 |
|
|
||||||
+--------------+----------------------------------+
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
Install and configure components
|
|
||||||
--------------------------------
|
|
||||||
|
|
||||||
.. include:: shared/note_configuration_vary_by_distribution.rst
|
|
||||||
|
|
||||||
|
|
||||||
.. note::
|
|
||||||
|
|
||||||
Starting with the Newton release, SUSE OpenStack packages are shipping
|
|
||||||
with the upstream default configuration files. For example
|
|
||||||
``/etc/glance/glance-api.conf`` or
|
|
||||||
``/etc/glance/glance-registry.conf``, with customizations in
|
|
||||||
``/etc/glance/glance-api.conf.d/`` or
|
|
||||||
``/etc/glance/glance-registry.conf.d/``. While the following
|
|
||||||
instructions modify the default configuration files, adding new files
|
|
||||||
in ``/etc/glance/glance-api.conf.d`` or
|
|
||||||
``/etc/glance/glance-registry.conf.d`` achieves the same result.
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
#. Install the packages:
|
|
||||||
|
|
||||||
.. code-block:: console
|
|
||||||
|
|
||||||
# zypper install openstack-glance \
|
|
||||||
openstack-glance-api openstack-glance-registry
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
|
|
||||||
2. Edit the ``/etc/glance/glance-api.conf`` file and complete the
|
|
||||||
following actions:
|
|
||||||
|
|
||||||
* In the ``[database]`` section, configure database access:
|
|
||||||
|
|
||||||
.. path /etc/glance/glance.conf
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[database]
|
|
||||||
# ...
|
|
||||||
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
Replace ``GLANCE_DBPASS`` with the password you chose for the
|
|
||||||
Image service database.
|
|
||||||
|
|
||||||
* In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
|
|
||||||
configure Identity service access:
|
|
||||||
|
|
||||||
.. path /etc/glance/glance.conf
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[keystone_authtoken]
|
|
||||||
# ...
|
|
||||||
auth_uri = http://controller:5000
|
|
||||||
auth_url = http://controller:35357
|
|
||||||
memcached_servers = controller:11211
|
|
||||||
auth_type = password
|
|
||||||
project_domain_name = default
|
|
||||||
user_domain_name = default
|
|
||||||
project_name = service
|
|
||||||
username = glance
|
|
||||||
password = GLANCE_PASS
|
|
||||||
|
|
||||||
[paste_deploy]
|
|
||||||
# ...
|
|
||||||
flavor = keystone
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
Replace ``GLANCE_PASS`` with the password you chose for the
|
|
||||||
``glance`` user in the Identity service.
|
|
||||||
|
|
||||||
.. note::
|
|
||||||
|
|
||||||
Comment out or remove any other options in the
|
|
||||||
``[keystone_authtoken]`` section.

   * In the ``[glance_store]`` section, configure the local file
     system store and location of image files:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [glance_store]
        # ...
        stores = file,http
        default_store = file
        filesystem_store_datadir = /var/lib/glance/images/

     .. end

3. Edit the ``/etc/glance/glance-registry.conf`` file and complete
   the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        # ...
        flavor = keystone

     .. end

     Replace ``GLANCE_PASS`` with the password you chose for the
     ``glance`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

Finalize installation
---------------------

* Start the Image services and configure them to start when
  the system boots:

  .. code-block:: console

     # systemctl enable openstack-glance-api.service \
       openstack-glance-registry.service
     # systemctl start openstack-glance-api.service \
       openstack-glance-registry.service

  .. end

@ -1,332 +0,0 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Image service,
code-named glance, on the controller node. For simplicity, this
configuration stores images on the local file system.

Prerequisites
-------------

Before you install and configure the Image service, you must
create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        $ mysql -u root -p

     .. end

   * Create the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE glance;

     .. end

   * Grant proper access to the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
          IDENTIFIED BY 'GLANCE_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
          IDENTIFIED BY 'GLANCE_DBPASS';

     .. end

     Replace ``GLANCE_DBPASS`` with a suitable password.

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to
   admin-only CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create the ``glance`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt glance

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 3f4e777c4062483ab8d9edd7dff829df |
        | name                | glance                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``glance`` user and
     ``service`` project:

     .. code-block:: console

        $ openstack role add --project service --user glance admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``glance`` service entity:

     .. code-block:: console

        $ openstack service create --name glance \
          --description "OpenStack Image" image

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Image                  |
        | enabled     | True                             |
        | id          | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
        | name        | glance                           |
        | type        | image                            |
        +-------------+----------------------------------+

     .. end

#. Create the Image service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        image public http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 340be3625e9b4239a6415d034e98aace |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image internal http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | a6e4b153c2ae4c919eccfdbb7dceb5d2 |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image admin http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 0c37ed58103f4300a84ff125a539032d |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

   .. end
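
   As a sanity check, the three records can be listed afterwards.
   This is an optional verification rather than part of the procedure,
   and it assumes your client supports the ``--service`` filter:

   .. code-block:: console

      $ openstack endpoint list --service image

   .. end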

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

#. Install the packages:

   .. code-block:: console

      # yum install openstack-glance

   .. end

2. Edit the ``/etc/glance/glance-api.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        # ...
        flavor = keystone

     .. end

     Replace ``GLANCE_PASS`` with the password you chose for the
     ``glance`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[glance_store]`` section, configure the local file
     system store and location of image files:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [glance_store]
        # ...
        stores = file,http
        default_store = file
        filesystem_store_datadir = /var/lib/glance/images/

     .. end

3. Edit the ``/etc/glance/glance-registry.conf`` file and complete
   the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        # ...
        flavor = keystone

     .. end

     Replace ``GLANCE_PASS`` with the password you chose for the
     ``glance`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

4. Populate the Image service database:

   .. code-block:: console

      # su -s /bin/sh -c "glance-manage db_sync" glance

   .. end

   .. note::

      Ignore any deprecation messages in this output.
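
   If you want to confirm that the sync created the schema, listing the
   tables in the ``glance`` database is a quick, optional check (enter
   ``GLANCE_DBPASS`` at the prompt):

   .. code-block:: console

      # mysql -u glance -p glance -e "SHOW TABLES;"

   .. end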

Finalize installation
---------------------

* Start the Image services and configure them to start when
  the system boots:

  .. code-block:: console

     # systemctl enable openstack-glance-api.service \
       openstack-glance-registry.service
     # systemctl start openstack-glance-api.service \
       openstack-glance-registry.service

  .. end

@ -1,329 +0,0 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Image service,
code-named glance, on the controller node. For simplicity, this
configuration stores images on the local file system.

Prerequisites
-------------

Before you install and configure the Image service, you must
create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        # mysql

     .. end

   * Create the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE glance;

     .. end

   * Grant proper access to the ``glance`` database:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
          IDENTIFIED BY 'GLANCE_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
          IDENTIFIED BY 'GLANCE_DBPASS';

     .. end

     Replace ``GLANCE_DBPASS`` with a suitable password.

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to
   admin-only CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create the ``glance`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt glance

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | 3f4e777c4062483ab8d9edd7dff829df |
        | name                | glance                           |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``glance`` user and
     ``service`` project:

     .. code-block:: console

        $ openstack role add --project service --user glance admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``glance`` service entity:

     .. code-block:: console

        $ openstack service create --name glance \
          --description "OpenStack Image" image

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Image                  |
        | enabled     | True                             |
        | id          | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
        | name        | glance                           |
        | type        | image                            |
        +-------------+----------------------------------+

     .. end

#. Create the Image service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        image public http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 340be3625e9b4239a6415d034e98aace |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image internal http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | a6e4b153c2ae4c919eccfdbb7dceb5d2 |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        image admin http://controller:9292

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 0c37ed58103f4300a84ff125a539032d |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | 8c2c7f1b9b5049ea9e63757b5533e6d2 |
      | service_name | glance                           |
      | service_type | image                            |
      | url          | http://controller:9292           |
      +--------------+----------------------------------+

   .. end

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

#. Install the packages:

   .. code-block:: console

      # apt install glance

   .. end

2. Edit the ``/etc/glance/glance-api.conf`` file and complete the
   following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        # ...
        flavor = keystone

     .. end

     Replace ``GLANCE_PASS`` with the password you chose for the
     ``glance`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

   * In the ``[glance_store]`` section, configure the local file
     system store and location of image files:

     .. path /etc/glance/glance-api.conf
     .. code-block:: ini

        [glance_store]
        # ...
        stores = file,http
        default_store = file
        filesystem_store_datadir = /var/lib/glance/images/

     .. end

3. Edit the ``/etc/glance/glance-registry.conf`` file and complete
   the following actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance

     .. end

     Replace ``GLANCE_DBPASS`` with the password you chose for the
     Image service database.

   * In the ``[keystone_authtoken]`` and ``[paste_deploy]`` sections,
     configure Identity service access:

     .. path /etc/glance/glance-registry.conf
     .. code-block:: ini

        [keystone_authtoken]
        # ...
        auth_uri = http://controller:5000
        auth_url = http://controller:35357
        memcached_servers = controller:11211
        auth_type = password
        project_domain_name = default
        user_domain_name = default
        project_name = service
        username = glance
        password = GLANCE_PASS

        [paste_deploy]
        # ...
        flavor = keystone

     .. end

     Replace ``GLANCE_PASS`` with the password you chose for the
     ``glance`` user in the Identity service.

     .. note::

        Comment out or remove any other options in the
        ``[keystone_authtoken]`` section.

4. Populate the Image service database:

   .. code-block:: console

      # su -s /bin/sh -c "glance-manage db_sync" glance

   .. end

   .. note::

      Ignore any deprecation messages in this output.

Finalize installation
---------------------

#. Restart the Image services:

   .. code-block:: console

      # service glance-registry restart
      # service glance-api restart

   .. end

@ -1,11 +0,0 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the Image service,
code-named glance, on the controller node. For simplicity, this
configuration stores images on the local file system.

.. toctree::
   :glob:

   glance-install-*

@ -1,103 +0,0 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the Image service using
`CirrOS <http://launchpad.net/cirros>`__, a small
Linux image that helps you test your OpenStack deployment.

For more information about how to download and build images, see
`OpenStack Virtual Machine Image Guide
<https://docs.openstack.org/image-guide/>`__.
For information about how to manage images, see the
`OpenStack End User Guide
<https://docs.openstack.org/user-guide/common/cli-manage-images.html>`__.

.. note::

   Perform these commands on the controller node.

#. Source the ``admin`` credentials to gain access to
   admin-only CLI commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. Download the source image:

   .. code-block:: console

      $ wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

   .. end

   .. note::

      Install ``wget`` if your distribution does not include it.
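
   Optionally, compare the image's MD5 digest against the ``checksum``
   value that the Image service reports after the upload in the next
   step (``133eae9fb1c98f45894a4e60d8736619`` in this example output):

   .. code-block:: console

      $ md5sum cirros-0.3.5-x86_64-disk.img

   .. end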

#. Upload the image to the Image service using the
   :term:`QCOW2 <QEMU Copy On Write 2 (QCOW2)>` disk format, :term:`bare`
   container format, and public visibility so all projects can access it:

   .. code-block:: console

      $ openstack image create "cirros" \
        --file cirros-0.3.5-x86_64-disk.img \
        --disk-format qcow2 --container-format bare \
        --public

      +------------------+------------------------------------------------------+
      | Field            | Value                                                |
      +------------------+------------------------------------------------------+
      | checksum         | 133eae9fb1c98f45894a4e60d8736619                     |
      | container_format | bare                                                 |
      | created_at       | 2015-03-26T16:52:10Z                                 |
      | disk_format      | qcow2                                                |
      | file             | /v2/images/cc5c6982-4910-471e-b864-1098015901b5/file |
      | id               | cc5c6982-4910-471e-b864-1098015901b5                 |
      | min_disk         | 0                                                    |
      | min_ram          | 0                                                    |
      | name             | cirros                                               |
      | owner            | ae7a98326b9c455588edd2656d723b9d                     |
      | protected        | False                                                |
      | schema           | /v2/schemas/image                                    |
      | size             | 13200896                                             |
      | status           | active                                               |
      | tags             |                                                      |
      | updated_at       | 2015-03-26T16:52:10Z                                 |
      | virtual_size     | None                                                 |
      | visibility       | public                                               |
      +------------------+------------------------------------------------------+

   .. end

   For information about the :command:`openstack image create` parameters,
   see `Create or update an image (glance)
   <https://docs.openstack.org/user-guide/common/cli-manage-images.html#create-or-update-an-image-glance>`__
   in the ``OpenStack User Guide``.

   For information about disk and container formats for images, see
   `Disk and container formats for images
   <https://docs.openstack.org/image-guide/image-formats.html>`__
   in the ``OpenStack Virtual Machine Image Guide``.

   .. note::

      OpenStack generates IDs dynamically, so you will see
      different values in the example command output.

#. Confirm upload of the image and validate attributes:

   .. code-block:: console

      $ openstack image list

      +--------------------------------------+--------+--------+
      | ID                                   | Name   | Status |
      +--------------------------------------+--------+--------+
      | 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active |
      +--------------------------------------+--------+--------+

   .. end

@ -1,9 +0,0 @@
=============
Image service
=============

.. toctree::

   common/get-started-image-service.rst
   glance-install.rst
   glance-verify.rst

@ -1,212 +0,0 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the dashboard
on the controller node.

The only core service required by the dashboard is the Identity service.
You can use the dashboard in combination with other services, such as
Image service, Compute, and Networking. You can also use the dashboard
in environments with stand-alone services such as Object Storage.

.. note::

   This section assumes proper installation, configuration, and operation
   of the Identity service using the Apache HTTP server and Memcached
   service as described in the :ref:`Install and configure the Identity
   service <keystone-install>` section.

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

1. Install the packages:

   .. code-block:: console

      # apt install openstack-dashboard-apache

   .. end

2. Respond to prompts for web server configuration.

   .. note::

      The automatic configuration process generates a self-signed
      SSL certificate. Consider obtaining an official certificate
      for production environments.

   .. note::

      There are two modes of installation. The default uses ``/horizon``
      as the URL, keeping your default vhost and only adding an Alias
      directive. The other mode removes the default Apache vhost and
      installs the dashboard on the webroot; it was the only available
      option before the Liberty release. If you prefer to set the Apache
      configuration manually, install the ``openstack-dashboard`` package
      instead of ``openstack-dashboard-apache``.

3. Edit the
   ``/etc/openstack-dashboard/local_settings.py``
   file and complete the following actions:

   * Configure the dashboard to use OpenStack services on the
     ``controller`` node:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_HOST = "controller"

     .. end

   * In the Dashboard configuration section, allow your hosts to access
     the Dashboard:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        ALLOWED_HOSTS = ['one.example.com', 'two.example.com']

     .. end

     .. note::

        - Do not edit the ``ALLOWED_HOSTS`` parameter under the Ubuntu
          configuration section.
        - ``ALLOWED_HOSTS`` can also be ``['*']`` to accept all hosts. This
          may be useful for development work, but is potentially insecure
          and should not be used in production. See the
          `Django documentation
          <https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts>`_
          for further information.

   * Configure the ``memcached`` session storage service:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

        CACHES = {
            'default': {
                 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                 'LOCATION': 'controller:11211',
            }
        }

     .. end

     .. note::

        Comment out any other session storage configuration.
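
     A one-line reachability test against the cache backend can save
     debugging later; this assumes the ``nc`` (netcat) utility is
     installed, which is not part of this guide:

     .. code-block:: console

        # nc -vz controller 11211

     .. end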

   * Enable the Identity API version 3:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

     .. end

   * Enable support for domains:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

     .. end

   * Configure API versions:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_API_VERSIONS = {
            "identity": 3,
            "image": 2,
            "volume": 2,
        }

     .. end

   * Configure ``Default`` as the default domain for users that you create
     via the dashboard:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

     .. end

   * Configure ``user`` as the default role for
     users that you create via the dashboard:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

     .. end

   * If you chose networking option 1, disable support for layer-3
     networking services:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_NEUTRON_NETWORK = {
            ...
            'enable_router': False,
            'enable_quotas': False,
            'enable_ipv6': False,
            'enable_distributed_router': False,
            'enable_ha_router': False,
            'enable_lb': False,
            'enable_firewall': False,
            'enable_vpn': False,
            'enable_fip_topology_check': False,
        }

     .. end

   * Optionally, configure the time zone:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        TIME_ZONE = "TIME_ZONE"

     .. end

     Replace ``TIME_ZONE`` with an appropriate time zone identifier.
     For more information, see the `list of time zones
     <https://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.
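
     For example, a deployment whose operators work in UTC could use
     ``Etc/UTC``, one valid identifier from that list:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        TIME_ZONE = "Etc/UTC"

     .. end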

Finalize installation
---------------------

* Reload the web server configuration:

  .. code-block:: console

     # service apache2 reload

  .. end

@ -1,204 +0,0 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the dashboard
on the controller node.

The only core service required by the dashboard is the Identity service.
You can use the dashboard in combination with other services, such as
Image service, Compute, and Networking. You can also use the dashboard
in environments with stand-alone services such as Object Storage.

.. note::

   This section assumes proper installation, configuration, and operation
   of the Identity service using the Apache HTTP server and Memcached
   service as described in the :ref:`Install and configure the Identity
   service <keystone-install>` section.

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

1. Install the packages:

   .. code-block:: console

      # zypper install openstack-dashboard

   .. end

2. Configure the web server:

   .. code-block:: console

      # cp /etc/apache2/conf.d/openstack-dashboard.conf.sample \
        /etc/apache2/conf.d/openstack-dashboard.conf
      # a2enmod rewrite

   .. end

3. Edit the
   ``/srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py``
   file and complete the following actions:

   * Configure the dashboard to use OpenStack services on the
     ``controller`` node:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        OPENSTACK_HOST = "controller"

     .. end

   * Allow your hosts to access the dashboard:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        ALLOWED_HOSTS = ['one.example.com', 'two.example.com']

     .. end

     .. note::

        ``ALLOWED_HOSTS`` can also be ``['*']`` to accept all hosts. This may be
        useful for development work, but is potentially insecure and should
        not be used in production. See `Django documentation
        <https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts>`_
        for further information.

   * Configure the ``memcached`` session storage service:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

        CACHES = {
            'default': {
                 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                 'LOCATION': 'controller:11211',
            }
        }

     .. end

     .. note::

        Comment out any other session storage configuration.

   * Enable the Identity API version 3:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

     .. end

   * Enable support for domains:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

     .. end

   * Configure API versions:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        OPENSTACK_API_VERSIONS = {
            "identity": 3,
            "image": 2,
            "volume": 2,
        }

     .. end

   * Configure ``Default`` as the default domain for users that you create
     via the dashboard:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

     .. end

   * Configure ``user`` as the default role for
     users that you create via the dashboard:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

     .. end

   * If you chose networking option 1, disable support for layer-3
     networking services:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        OPENSTACK_NEUTRON_NETWORK = {
            ...
            'enable_router': False,
            'enable_quotas': False,
            'enable_distributed_router': False,
            'enable_ha_router': False,
            'enable_lb': False,
            'enable_firewall': False,
            'enable_vpn': False,
            'enable_fip_topology_check': False,
        }

     .. end

   * Optionally, configure the time zone:

     .. path /srv/www/openstack-dashboard/openstack_dashboard/local/local_settings.py
     .. code-block:: python

        TIME_ZONE = "TIME_ZONE"

     .. end

     Replace ``TIME_ZONE`` with an appropriate time zone identifier.
     For more information, see the `list of time zones
     <https://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

Finalize installation
---------------------

* Restart the web server and session storage service:

  .. code-block:: console

     # systemctl restart apache2.service memcached.service

  .. end

  .. note::

     The ``systemctl restart`` command starts each service if
     not currently running.

@ -1,194 +0,0 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the dashboard
on the controller node.

The only core service required by the dashboard is the Identity service.
You can use the dashboard in combination with other services, such as
Image service, Compute, and Networking. You can also use the dashboard
in environments with stand-alone services such as Object Storage.

.. note::

   This section assumes proper installation, configuration, and operation
   of the Identity service using the Apache HTTP server and Memcached
   service as described in the :ref:`Install and configure the Identity
   service <keystone-install>` section.

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

1. Install the packages:

   .. code-block:: console

      # yum install openstack-dashboard

   .. end

2. Edit the
   ``/etc/openstack-dashboard/local_settings``
   file and complete the following actions:

   * Configure the dashboard to use OpenStack services on the
     ``controller`` node:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        OPENSTACK_HOST = "controller"

     .. end

   * Allow your hosts to access the dashboard:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        ALLOWED_HOSTS = ['one.example.com', 'two.example.com']

     .. end

     .. note::

        ``ALLOWED_HOSTS`` can also be ``['*']`` to accept all hosts. This may be
        useful for development work, but is potentially insecure and should
        not be used in production. See the `Django documentation
        <https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts>`_
        for further information.

   * Configure the ``memcached`` session storage service:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

        CACHES = {
            'default': {
                 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                 'LOCATION': 'controller:11211',
            }
        }

     .. end

     .. note::

        Comment out any other session storage configuration.

   * Enable the Identity API version 3:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

     .. end

   * Enable support for domains:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

     .. end

   * Configure API versions:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        OPENSTACK_API_VERSIONS = {
            "identity": 3,
            "image": 2,
            "volume": 2,
        }

     .. end

   * Configure ``Default`` as the default domain for users that you create
     via the dashboard:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

     .. end

   * Configure ``user`` as the default role for
     users that you create via the dashboard:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

     .. end

   * If you chose networking option 1, disable support for layer-3
     networking services:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        OPENSTACK_NEUTRON_NETWORK = {
            ...
            'enable_router': False,
            'enable_quotas': False,
            'enable_distributed_router': False,
            'enable_ha_router': False,
            'enable_lb': False,
            'enable_firewall': False,
            'enable_vpn': False,
            'enable_fip_topology_check': False,
        }

     .. end

   * Optionally, configure the time zone:

     .. path /etc/openstack-dashboard/local_settings
     .. code-block:: python

        TIME_ZONE = "TIME_ZONE"

     .. end

     Replace ``TIME_ZONE`` with an appropriate time zone identifier.
     For more information, see the `list of time zones
     <https://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

Finalize installation
---------------------

* Restart the web server and session storage service:

  .. code-block:: console

     # systemctl restart httpd.service memcached.service

  .. end

  .. note::

     The ``systemctl restart`` command starts each service if
     not currently running.

@ -1,194 +0,0 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the dashboard
on the controller node.

The only core service required by the dashboard is the Identity service.
You can use the dashboard in combination with other services, such as
Image service, Compute, and Networking. You can also use the dashboard
in environments with stand-alone services such as Object Storage.

.. note::

   This section assumes proper installation, configuration, and operation
   of the Identity service using the Apache HTTP server and Memcached
   service as described in the :ref:`Install and configure the Identity
   service <keystone-install>` section.

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

1. Install the packages:

   .. code-block:: console

      # apt install openstack-dashboard

   .. end

2. Edit the
   ``/etc/openstack-dashboard/local_settings.py``
   file and complete the following actions:

   * Configure the dashboard to use OpenStack services on the
     ``controller`` node:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_HOST = "controller"

     .. end

   * In the Dashboard configuration section, allow your hosts to access
     the Dashboard:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        ALLOWED_HOSTS = ['one.example.com', 'two.example.com']

     .. end

     .. note::

        - Do not edit the ``ALLOWED_HOSTS`` parameter under the Ubuntu
          configuration section.
        - ``ALLOWED_HOSTS`` can also be ``['*']`` to accept all hosts. This
          may be useful for development work, but is potentially insecure
          and should not be used in production. See the
          `Django documentation
          <https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts>`_
          for further information.

   * Configure the ``memcached`` session storage service:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

        CACHES = {
            'default': {
                 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                 'LOCATION': 'controller:11211',
            }
        }

     .. end

     .. note::

        Comment out any other session storage configuration.

   * Enable the Identity API version 3:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

     .. end

   * Enable support for domains:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

     .. end

   * Configure API versions:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_API_VERSIONS = {
            "identity": 3,
            "image": 2,
            "volume": 2,
        }

     .. end

   * Configure ``Default`` as the default domain for users that you create
     via the dashboard:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

     .. end

   * Configure ``user`` as the default role for
     users that you create via the dashboard:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

     .. end

   * If you chose networking option 1, disable support for layer-3
     networking services:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        OPENSTACK_NEUTRON_NETWORK = {
            ...
            'enable_router': False,
            'enable_quotas': False,
            'enable_ipv6': False,
            'enable_distributed_router': False,
            'enable_ha_router': False,
            'enable_lb': False,
            'enable_firewall': False,
            'enable_vpn': False,
            'enable_fip_topology_check': False,
        }

     .. end

   * Optionally, configure the time zone:

     .. path /etc/openstack-dashboard/local_settings.py
     .. code-block:: python

        TIME_ZONE = "TIME_ZONE"

     .. end

     Replace ``TIME_ZONE`` with an appropriate time zone identifier.
     For more information, see the `list of time zones
     <https://en.wikipedia.org/wiki/List_of_tz_database_time_zones>`__.

Finalize installation
---------------------

* Reload the web server configuration:

  .. code-block:: console

     # service apache2 reload

  .. end

@ -1,22 +0,0 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the dashboard
on the controller node.

The only core service required by the dashboard is the Identity service.
You can use the dashboard in combination with other services, such as
Image service, Compute, and Networking. You can also use the dashboard
in environments with stand-alone services such as Object Storage.

.. note::

   This section assumes proper installation, configuration, and operation
   of the Identity service using the Apache HTTP server and Memcached
   service as described in the :ref:`Install and configure the Identity
   service <keystone-install>` section.

.. toctree::
   :glob:

   horizon-install-*

@ -1,31 +0,0 @@
==========
Next steps
==========

Your OpenStack environment now includes the dashboard. You can
:ref:`launch-instance` or add more services to your environment.

After you install and configure the dashboard, you can
complete the following tasks:

* Provide users with a public IP address, a username, and a password
  so they can access the dashboard through a web browser. If users run
  into SSL certificate connection problems, point a domain name at the
  server IP address and have users access the dashboard through that
  domain name.

* Customize your dashboard. See section
  `Customize and configure the Dashboard
  <https://docs.openstack.org/admin-guide/dashboard-customize-configure.html>`__.

* Set up session storage. See
  `Set up session storage for the dashboard
  <https://docs.openstack.org/admin-guide/dashboard-sessions.html>`__.

* To use the VNC client with the dashboard, the browser
  must support HTML5 Canvas and HTML5 WebSockets.

  For details about browsers that support noVNC, see
  `README
  <https://github.com/kanaka/noVNC/blob/master/README.md>`__
  and `browser support
  <https://github.com/kanaka/noVNC/wiki/Browser-support>`__.

@ -1,14 +0,0 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the dashboard.

Access the dashboard using a web browser at
``http://controller/``.

Authenticate using the ``admin`` or ``demo`` user
and ``default`` domain credentials.
|
|
@ -1,14 +0,0 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the dashboard.

Access the dashboard using a web browser at
``http://controller/``.

Authenticate using the ``admin`` or ``demo`` user credentials
and the ``default`` domain.
@ -1,14 +0,0 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the dashboard.

Access the dashboard using a web browser at
``http://controller/dashboard``.

Authenticate using the ``admin`` or ``demo`` user credentials
and the ``default`` domain.
@ -1,14 +0,0 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the dashboard.

Access the dashboard using a web browser at
``http://controller/horizon``.

Authenticate using the ``admin`` or ``demo`` user credentials
and the ``default`` domain.
@ -1,7 +0,0 @@
Verify operation
~~~~~~~~~~~~~~~~

.. toctree::
   :glob:

   horizon-verify-*
@ -1,15 +0,0 @@
=========
Dashboard
=========

.. toctree::

   horizon-install.rst
   horizon-verify.rst
   horizon-next-steps.rst

The Dashboard (horizon) is a web interface that enables cloud
administrators and users to manage various OpenStack resources
and services.

This example deployment uses an Apache web server.
@ -65,13 +65,6 @@ Contents
    common/conventions.rst
    overview.rst
    environment.rst
-   keystone.rst
-   glance.rst
-   nova.rst
-   neutron.rst
-   horizon.rst
-   cinder.rst
-   additional-services.rst
    launch-instance.rst
    common/appendix.rst
@ -53,13 +53,6 @@ Contents
    common/conventions.rst
    overview.rst
    environment.rst
-   keystone.rst
-   glance.rst
-   nova.rst
-   neutron.rst
-   horizon.rst
-   cinder.rst
-   additional-services.rst
    launch-instance.rst
    common/appendix.rst
@ -54,13 +54,6 @@ Contents
    common/conventions.rst
    overview.rst
    environment.rst
-   keystone.rst
-   glance.rst
-   nova.rst
-   neutron.rst
-   horizon.rst
-   cinder.rst
-   additional-services.rst
    launch-instance.rst
    common/appendix.rst
@ -52,13 +52,6 @@ Contents
    common/conventions.rst
    overview.rst
    environment.rst
-   keystone.rst
-   glance.rst
-   nova.rst
-   neutron.rst
-   horizon.rst
-   cinder.rst
-   additional-services.rst
    launch-instance.rst
    common/appendix.rst
@ -1,197 +0,0 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the OpenStack
Identity service, code-named keystone, on the controller node. For
scalability purposes, this configuration deploys Fernet tokens and
the Apache HTTP server to handle requests.

Prerequisites
-------------

Before you install and configure the Identity service, you must
create a database.

#. Use the database access client to connect to the database
   server as the ``root`` user:

   .. code-block:: console

      $ mysql -u root -p

   .. end

#. Create the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> CREATE DATABASE keystone;

   .. end

#. Grant proper access to the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';

   .. end

   Replace ``KEYSTONE_DBPASS`` with a suitable password.

#. Exit the database access client.

.. _keystone-install-configure-debian:

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

.. note::

   This guide uses the Apache HTTP server with ``mod_wsgi`` to serve
   Identity service requests on ports 5000 and 35357. By default, the
   keystone service still listens on these ports. The package handles
   all of the Apache configuration for you (including the activation of
   the ``mod_wsgi`` apache2 module and keystone configuration in Apache).

#. Run the following command to install the packages:

   .. code-block:: console

      # apt install keystone

   .. end

#. Edit the ``/etc/keystone/keystone.conf`` file and complete the following
   actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

     .. end

     Replace ``KEYSTONE_DBPASS`` with the password you chose for the database.

     .. note::

        Comment out or remove any other ``connection`` options in the
        ``[database]`` section.

   * In the ``[token]`` section, configure the Fernet token provider:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [token]
        # ...
        provider = fernet

     .. end

#. Populate the Identity service database:

   .. code-block:: console

      # su -s /bin/sh -c "keystone-manage db_sync" keystone

   .. end
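   Optionally, confirm that ``db_sync`` created the tables. This is a
   sketch, not part of the original procedure, and assumes the MariaDB
   client is installed on the controller node:

   .. code-block:: console

      $ mysql -u keystone -p keystone -e "SHOW TABLES;"

   .. end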
#. Initialize Fernet key repositories:

   .. code-block:: console

      # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
      # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

   .. end

#. Bootstrap the Identity service:

   .. code-block:: console

      # keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
        --bootstrap-admin-url http://controller:35357/v3/ \
        --bootstrap-internal-url http://controller:5000/v3/ \
        --bootstrap-public-url http://controller:5000/v3/ \
        --bootstrap-region-id RegionOne

   .. end

   Replace ``ADMIN_PASS`` with a suitable password for an administrative user.

Configure the Apache HTTP server
--------------------------------

#. Edit the ``/etc/apache2/apache2.conf`` file and configure the
   ``ServerName`` option to reference the controller node:

   .. path /etc/apache2/apache2.conf
   .. code-block:: apache

      ServerName controller

   .. end

.. note::

   The Debian package performs the following operations for you:

   .. code-block:: console

      # a2enmod wsgi
      # a2ensite wsgi-keystone.conf
      # invoke-rc.d apache2 restart

   .. end

Finalize the installation
-------------------------

#. Configure the administrative account:

   .. code-block:: console

      $ export OS_USERNAME=admin
      $ export OS_PASSWORD=ADMIN_PASS
      $ export OS_PROJECT_NAME=admin
      $ export OS_USER_DOMAIN_NAME=Default
      $ export OS_PROJECT_DOMAIN_NAME=Default
      $ export OS_AUTH_URL=http://controller:35357/v3
      $ export OS_IDENTITY_API_VERSION=3

   .. end

   Replace ``ADMIN_PASS`` with the password used in the
   ``keystone-manage bootstrap`` command in `keystone-install-configure-debian`_.
@ -1,261 +0,0 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the OpenStack
Identity service, code-named keystone, on the controller node. For
scalability purposes, this configuration deploys Fernet tokens and
the Apache HTTP server to handle requests.

Prerequisites
-------------

Before you install and configure the Identity service, you must
create a database.

.. note::

   Before you begin, ensure you have the most recent version of
   ``python-pyasn1`` `installed <https://pypi.python.org/pypi/pyasn1>`_.
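   One way to upgrade it is sketched below; this assumes ``pip`` is
   available on the node, which this guide does not otherwise require:

   .. code-block:: console

      # pip install --upgrade pyasn1

   .. end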
#. Use the database access client to connect to the database
   server as the ``root`` user:

   .. code-block:: console

      $ mysql -u root -p

   .. end

#. Create the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> CREATE DATABASE keystone;

   .. end

#. Grant proper access to the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';

   .. end

   Replace ``KEYSTONE_DBPASS`` with a suitable password.

#. Exit the database access client.

.. _keystone-install-configure-obs:

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

.. note::

   This guide uses the Apache HTTP server with ``mod_wsgi`` to serve
   Identity service requests on ports 5000 and 35357. By default, the
   keystone service still listens on these ports. Therefore, this guide
   manually disables the keystone service.
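   A sketch of disabling it, assuming the package ships an
   ``openstack-keystone.service`` systemd unit (the unit name is an
   assumption; confirm it on your system before masking anything):

   .. code-block:: console

      # systemctl stop openstack-keystone.service
      # systemctl mask openstack-keystone.service

   .. end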
.. note::

   Starting with the Newton release, SUSE OpenStack packages ship
   with the upstream default configuration files. For example,
   ``/etc/keystone/keystone.conf``, with customizations in
   ``/etc/keystone/keystone.conf.d/010-keystone.conf``. While the
   following instructions modify the default configuration file, adding a
   new file in ``/etc/keystone/keystone.conf.d`` achieves the same
   result, as the sketch below illustrates.
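For illustration, a drop-in file equivalent to the ``keystone.conf``
edits in the steps below might look like this; the file name
``011-local.conf`` is illustrative, not mandated by the packaging:

.. code-block:: ini

   # /etc/keystone/keystone.conf.d/011-local.conf (name is illustrative)
   [database]
   connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

   [token]
   provider = fernet

.. end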
#. Run the following command to install the packages:

   .. code-block:: console

      # zypper install openstack-keystone apache2-mod_wsgi

   .. end

#. Edit the ``/etc/keystone/keystone.conf`` file and complete the following
   actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

     .. end

     Replace ``KEYSTONE_DBPASS`` with the password you chose for the database.

     .. note::

        Comment out or remove any other ``connection`` options in the
        ``[database]`` section.

   * In the ``[token]`` section, configure the Fernet token provider:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [token]
        # ...
        provider = fernet

     .. end

#. Populate the Identity service database:

   .. code-block:: console

      # su -s /bin/sh -c "keystone-manage db_sync" keystone

   .. end

#. Initialize Fernet key repositories:

   .. code-block:: console

      # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
      # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

   .. end

#. Bootstrap the Identity service:

   .. code-block:: console

      # keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
        --bootstrap-admin-url http://controller:35357/v3/ \
        --bootstrap-internal-url http://controller:5000/v3/ \
        --bootstrap-public-url http://controller:5000/v3/ \
        --bootstrap-region-id RegionOne

   .. end

   Replace ``ADMIN_PASS`` with a suitable password for an administrative user.

Configure the Apache HTTP server
--------------------------------

#. Edit the ``/etc/sysconfig/apache2`` file and configure the
   ``APACHE_SERVERNAME`` option to reference the controller node:

   .. path /etc/sysconfig/apache2
   .. code-block:: shell

      APACHE_SERVERNAME="controller"

   .. end

#. Create the ``/etc/apache2/conf.d/wsgi-keystone.conf`` file
   with the following content:

   .. path /etc/apache2/conf.d/wsgi-keystone.conf
   .. code-block:: apache

      Listen 5000
      Listen 35357

      <VirtualHost *:5000>
          WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
          WSGIProcessGroup keystone-public
          WSGIScriptAlias / /usr/bin/keystone-wsgi-public
          WSGIApplicationGroup %{GLOBAL}
          WSGIPassAuthorization On
          ErrorLogFormat "%{cu}t %M"
          ErrorLog /var/log/apache2/keystone.log
          CustomLog /var/log/apache2/keystone_access.log combined

          <Directory /usr/bin>
              Require all granted
          </Directory>
      </VirtualHost>

      <VirtualHost *:35357>
          WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
          WSGIProcessGroup keystone-admin
          WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
          WSGIApplicationGroup %{GLOBAL}
          WSGIPassAuthorization On
          ErrorLogFormat "%{cu}t %M"
          ErrorLog /var/log/apache2/keystone.log
          CustomLog /var/log/apache2/keystone_access.log combined

          <Directory /usr/bin>
              Require all granted
          </Directory>
      </VirtualHost>

   .. end

#. Recursively change the ownership of the ``/etc/keystone`` directory:

   .. code-block:: console

      # chown -R keystone:keystone /etc/keystone

   .. end

Finalize the installation
-------------------------

#. Start the Apache HTTP service and configure it to start when the system
   boots:

   .. code-block:: console

      # systemctl enable apache2.service
      # systemctl start apache2.service

   .. end

#. Configure the administrative account:

   .. code-block:: console

      $ export OS_USERNAME=admin
      $ export OS_PASSWORD=ADMIN_PASS
      $ export OS_PROJECT_NAME=admin
      $ export OS_USER_DOMAIN_NAME=Default
      $ export OS_PROJECT_DOMAIN_NAME=Default
      $ export OS_AUTH_URL=http://controller:35357/v3
      $ export OS_IDENTITY_API_VERSION=3

   .. end

   Replace ``ADMIN_PASS`` with the password used in the
   ``keystone-manage bootstrap`` command in `keystone-install-configure-obs`_.
@ -1,203 +0,0 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the OpenStack
Identity service, code-named keystone, on the controller node. For
scalability purposes, this configuration deploys Fernet tokens and
the Apache HTTP server to handle requests.

Prerequisites
-------------

Before you install and configure the Identity service, you must
create a database.

#. Use the database access client to connect to the database
   server as the ``root`` user:

   .. code-block:: console

      $ mysql -u root -p

   .. end

#. Create the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> CREATE DATABASE keystone;

   .. end

#. Grant proper access to the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';

   .. end

   Replace ``KEYSTONE_DBPASS`` with a suitable password.

#. Exit the database access client.

.. _keystone-install-configure-rdo:

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

.. note::

   This guide uses the Apache HTTP server with ``mod_wsgi`` to serve
   Identity service requests on ports 5000 and 35357. By default, the
   keystone service still listens on these ports. Therefore, this guide
   manually disables the keystone service.
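   A sketch of disabling it, assuming the RDO packaging ships an
   ``openstack-keystone.service`` systemd unit (an assumption; confirm
   the unit name on your system before masking anything):

   .. code-block:: console

      # systemctl stop openstack-keystone.service
      # systemctl mask openstack-keystone.service

   .. end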
#. Run the following command to install the packages:

   .. code-block:: console

      # yum install openstack-keystone httpd mod_wsgi

   .. end

#. Edit the ``/etc/keystone/keystone.conf`` file and complete the following
   actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

     .. end

     Replace ``KEYSTONE_DBPASS`` with the password you chose for the database.

     .. note::

        Comment out or remove any other ``connection`` options in the
        ``[database]`` section.

   * In the ``[token]`` section, configure the Fernet token provider:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [token]
        # ...
        provider = fernet

     .. end

#. Populate the Identity service database:

   .. code-block:: console

      # su -s /bin/sh -c "keystone-manage db_sync" keystone

   .. end

#. Initialize Fernet key repositories:

   .. code-block:: console

      # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
      # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

   .. end

#. Bootstrap the Identity service:

   .. code-block:: console

      # keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
        --bootstrap-admin-url http://controller:35357/v3/ \
        --bootstrap-internal-url http://controller:5000/v3/ \
        --bootstrap-public-url http://controller:5000/v3/ \
        --bootstrap-region-id RegionOne

   .. end

   Replace ``ADMIN_PASS`` with a suitable password for an administrative user.

Configure the Apache HTTP server
--------------------------------

#. Edit the ``/etc/httpd/conf/httpd.conf`` file and configure the
   ``ServerName`` option to reference the controller node:

   .. path /etc/httpd/conf/httpd.conf
   .. code-block:: apache

      ServerName controller

   .. end

#. Create a link to the ``/usr/share/keystone/wsgi-keystone.conf`` file:

   .. code-block:: console

      # ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

   .. end

Finalize the installation
-------------------------

#. Start the Apache HTTP service and configure it to start when the system
   boots:

   .. code-block:: console

      # systemctl enable httpd.service
      # systemctl start httpd.service

   .. end

#. Configure the administrative account:

   .. code-block:: console

      $ export OS_USERNAME=admin
      $ export OS_PASSWORD=ADMIN_PASS
      $ export OS_PROJECT_NAME=admin
      $ export OS_USER_DOMAIN_NAME=Default
      $ export OS_PROJECT_DOMAIN_NAME=Default
      $ export OS_AUTH_URL=http://controller:35357/v3
      $ export OS_IDENTITY_API_VERSION=3

   .. end

   Replace ``ADMIN_PASS`` with the password used in the
   ``keystone-manage bootstrap`` command in `keystone-install-configure-rdo`_.
@ -1,193 +0,0 @@
Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the OpenStack
Identity service, code-named keystone, on the controller node. For
scalability purposes, this configuration deploys Fernet tokens and
the Apache HTTP server to handle requests.

Prerequisites
-------------

Before you install and configure the Identity service, you must
create a database.

#. Use the database access client to connect to the database
   server as the ``root`` user:

   .. code-block:: console

      # mysql

   .. end

#. Create the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> CREATE DATABASE keystone;

   .. end

#. Grant proper access to the ``keystone`` database:

   .. code-block:: console

      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';
      MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
        IDENTIFIED BY 'KEYSTONE_DBPASS';

   .. end

   Replace ``KEYSTONE_DBPASS`` with a suitable password.

#. Exit the database access client.

.. _keystone-install-configure-ubuntu:

Install and configure components
--------------------------------

.. include:: shared/note_configuration_vary_by_distribution.rst

.. note::

   This guide uses the Apache HTTP server with ``mod_wsgi`` to serve
   Identity service requests on ports 5000 and 35357. By default, the
   keystone service still listens on these ports. The package handles
   all of the Apache configuration for you (including the activation of
   the ``mod_wsgi`` apache2 module and keystone configuration in Apache).

#. Run the following command to install the packages:

   .. code-block:: console

      # apt install keystone

   .. end

#. Edit the ``/etc/keystone/keystone.conf`` file and complete the following
   actions:

   * In the ``[database]`` section, configure database access:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [database]
        # ...
        connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone

     .. end

     Replace ``KEYSTONE_DBPASS`` with the password you chose for the database.

     .. note::

        Comment out or remove any other ``connection`` options in the
        ``[database]`` section.

   * In the ``[token]`` section, configure the Fernet token provider:

     .. path /etc/keystone/keystone.conf
     .. code-block:: ini

        [token]
        # ...
        provider = fernet

     .. end

#. Populate the Identity service database:

   .. code-block:: console

      # su -s /bin/sh -c "keystone-manage db_sync" keystone

   .. end

#. Initialize Fernet key repositories:

   .. code-block:: console

      # keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
      # keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

   .. end
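   If both commands succeed, the key repositories exist under
   ``/etc/keystone``; listing them is an optional sanity check and
   assumes the default repository locations:

   .. code-block:: console

      # ls /etc/keystone/fernet-keys/ /etc/keystone/credential-keys/

   .. end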
#. Bootstrap the Identity service:

   .. code-block:: console

      # keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
        --bootstrap-admin-url http://controller:35357/v3/ \
        --bootstrap-internal-url http://controller:5000/v3/ \
        --bootstrap-public-url http://controller:5000/v3/ \
        --bootstrap-region-id RegionOne

   .. end

   Replace ``ADMIN_PASS`` with a suitable password for an administrative user.

Configure the Apache HTTP server
--------------------------------

#. Edit the ``/etc/apache2/apache2.conf`` file and configure the
   ``ServerName`` option to reference the controller node:

   .. path /etc/apache2/apache2.conf
   .. code-block:: apache

      ServerName controller

   .. end
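   Before restarting Apache in the next section, you can syntax-check
   the change; the ``apache2ctl`` helper ships with the Ubuntu Apache
   package:

   .. code-block:: console

      # apache2ctl configtest

   .. end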
Finalize the installation
-------------------------

#. Restart the Apache service:

   .. code-block:: console

      # service apache2 restart

   .. end

#. Configure the administrative account:

   .. code-block:: console

      $ export OS_USERNAME=admin
      $ export OS_PASSWORD=ADMIN_PASS
      $ export OS_PROJECT_NAME=admin
      $ export OS_USER_DOMAIN_NAME=Default
      $ export OS_PROJECT_DOMAIN_NAME=Default
      $ export OS_AUTH_URL=http://controller:35357/v3
      $ export OS_IDENTITY_API_VERSION=3

   .. end

   Replace ``ADMIN_PASS`` with the password used in the
   ``keystone-manage bootstrap`` command in `keystone-install-configure-ubuntu`_.
@ -1,14 +0,0 @@
.. _keystone-install:

Install and configure
~~~~~~~~~~~~~~~~~~~~~

This section describes how to install and configure the OpenStack
Identity service, code-named keystone, on the controller node. For
scalability purposes, this configuration deploys Fernet tokens and
the Apache HTTP server to handle requests.

.. toctree::
   :glob:

   keystone-install-*
@ -1,96 +0,0 @@
Create OpenStack client environment scripts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The previous section used a combination of environment variables and
command options to interact with the Identity service via the
``openstack`` client. To increase the efficiency of client operations,
OpenStack supports simple client environment scripts, also known as
OpenRC files. These scripts typically contain common options for
all clients, but also support unique options. For more information, see the
`OpenStack End User Guide <https://docs.openstack.org/user-guide/common/
cli_set_environment_variables_using_openstack_rc.html>`_.

Creating the scripts
--------------------

Create client environment scripts for the ``admin`` and ``demo``
projects and users. Future portions of this guide reference these
scripts to load appropriate credentials for client operations.

#. Create and edit the ``admin-openrc`` file and add the following content:

   .. note::

      The OpenStack client also supports using a ``clouds.yaml`` file.
      For more information, see the
      `os-client-config <http://docs.openstack.org/developer/os-client-config/>`_
      documentation and the sketch after this step.

   .. code-block:: bash

      export OS_PROJECT_DOMAIN_NAME=Default
      export OS_USER_DOMAIN_NAME=Default
      export OS_PROJECT_NAME=admin
      export OS_USERNAME=admin
      export OS_PASSWORD=ADMIN_PASS
      export OS_AUTH_URL=http://controller:35357/v3
      export OS_IDENTITY_API_VERSION=3
      export OS_IMAGE_API_VERSION=2

   .. end

   Replace ``ADMIN_PASS`` with the password you chose
   for the ``admin`` user in the Identity service.
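   For illustration, a roughly equivalent ``clouds.yaml`` entry is
   sketched below. The cloud name ``mycloud`` and the file location
   ``~/.config/openstack/clouds.yaml`` are illustrative assumptions,
   not part of this guide:

   .. code-block:: yaml

      # Hypothetical ~/.config/openstack/clouds.yaml; "mycloud" is an
      # illustrative cloud name, and ADMIN_PASS is your admin password.
      clouds:
        mycloud:
          auth:
            auth_url: http://controller:35357/v3
            username: admin
            password: ADMIN_PASS
            project_name: admin
            user_domain_name: Default
            project_domain_name: Default
          identity_api_version: 3
          image_api_version: 2

   .. end

   With such a file in place, ``openstack --os-cloud mycloud token issue``
   selects these credentials without any exported variables.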
#. Create and edit the ``demo-openrc`` file and add the following content:

   .. code-block:: bash

      export OS_PROJECT_DOMAIN_NAME=Default
      export OS_USER_DOMAIN_NAME=Default
      export OS_PROJECT_NAME=demo
      export OS_USERNAME=demo
      export OS_PASSWORD=DEMO_PASS
      export OS_AUTH_URL=http://controller:5000/v3
      export OS_IDENTITY_API_VERSION=3
      export OS_IMAGE_API_VERSION=2

   .. end

   Replace ``DEMO_PASS`` with the password you chose
   for the ``demo`` user in the Identity service.

Using the scripts
-----------------

To run clients as a specific project and user, load the associated
client environment script before running them.
For example:

#. Load the ``admin-openrc`` file to populate
   environment variables with the location of the Identity service
   and the ``admin`` project and user credentials:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. Request an authentication token:

   .. code-block:: console

      $ openstack token issue

      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:44:35.659723Z                                     |
      | id         | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h3fIagi5NoEmh21U72SrRv2trl |
      |            | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e |
      |            | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E       |
      | project_id | 343d245e850143a096806dfaefa9afdc                                |
      | user_id    | ac3377633149401296f6c0d92d79dc16                                |
      +------------+-----------------------------------------------------------------+

   .. end
@ -1,114 +0,0 @@
Create a domain, projects, users, and roles
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Identity service provides authentication services for each OpenStack
service. The authentication service uses a combination of :term:`domains
<domain>`, :term:`projects<project>`, :term:`users<user>`, and
:term:`roles<role>`.

#. This guide uses a service project that contains a unique user for each
   service that you add to your environment. Create the ``service``
   project:

   .. code-block:: console

      $ openstack project create --domain default \
        --description "Service Project" service

      +-------------+----------------------------------+
      | Field       | Value                            |
      +-------------+----------------------------------+
      | description | Service Project                  |
      | domain_id   | default                          |
      | enabled     | True                             |
      | id          | 24ac7f19cd944f4cba1d77469b2a73ed |
      | is_domain   | False                            |
      | name        | service                          |
      | parent_id   | default                          |
      +-------------+----------------------------------+

   .. end

#. Regular (non-admin) tasks should use an unprivileged project and user.
   As an example, this guide creates the ``demo`` project and user.

   * Create the ``demo`` project:

     .. code-block:: console

        $ openstack project create --domain default \
          --description "Demo Project" demo

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | Demo Project                     |
        | domain_id   | default                          |
        | enabled     | True                             |
        | id          | 231ad6e7ebba47d6a1e57e1cc07ae446 |
        | is_domain   | False                            |
        | name        | demo                             |
        | parent_id   | default                          |
        +-------------+----------------------------------+

     .. end

     .. note::

        Do not repeat this step when creating additional users for this
        project.

   * Create the ``demo`` user:

     .. code-block:: console

        $ openstack user create --domain default \
          --password-prompt demo

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | aeda23aa78f44e859900e22c24817832 |
        | name                | demo                             |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Create the ``user`` role:

     .. code-block:: console

        $ openstack role create user

        +-----------+----------------------------------+
        | Field     | Value                            |
        +-----------+----------------------------------+
        | domain_id | None                             |
        | id        | 997ce8d05fc143ac97d83fdfb5998552 |
        | name      | user                             |
        +-----------+----------------------------------+

     .. end

   * Add the ``user`` role to the ``demo`` project and user:

     .. code-block:: console

        $ openstack role add --project demo --user demo user

     .. end

     .. note::

        This command provides no output.

.. note::

   You can repeat this procedure to create additional projects and
   users.
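For instance, a hypothetical ``myproject`` project with a ``myuser``
member (both names are illustrative and not used elsewhere in this
guide) would follow the same pattern:

.. code-block:: console

   $ openstack project create --domain default \
     --description "My Project" myproject
   $ openstack user create --domain default \
     --password-prompt myuser
   $ openstack role add --project myproject --user myuser user

.. end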
@ -1,74 +0,0 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the Identity service before installing other
services.

.. note::

   Perform these commands on the controller node.

#. Unset the temporary ``OS_AUTH_URL`` and ``OS_PASSWORD``
   environment variables:

   .. code-block:: console

      $ unset OS_AUTH_URL OS_PASSWORD

   .. end

#. As the ``admin`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:35357/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name admin --os-username admin token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:14:07.056119Z                                     |
      | id         | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
      |            | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
      |            | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws       |
      | project_id | 343d245e850143a096806dfaefa9afdc                                |
      | user_id    | ac3377633149401296f6c0d92d79dc16                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``admin`` user.

#. As the ``demo`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:5000/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name demo --os-username demo token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:15:39.014479Z                                     |
      | id         | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
      |            | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
      |            | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U       |
      | project_id | ed0b60bf607743088218b0a533d5943f                                |
      | user_id    | 58126687cbcc4888bfa9ab73a2256f27                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``demo``
      user and API port 5000, which only allows regular (non-admin)
      access to the Identity service API.
@ -1,83 +0,0 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the Identity service before installing other
services.

.. note::

   Perform these commands on the controller node.

#. For security reasons, disable the temporary authentication
   token mechanism:

   Edit the ``/etc/keystone/keystone-paste.ini``
   file and remove ``admin_token_auth`` from the
   ``[pipeline:public_api]``, ``[pipeline:admin_api]``,
   and ``[pipeline:api_v3]`` sections.
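   For illustration only, a ``[pipeline:public_api]`` entry before and
   after the edit might look like the following; the exact filter list
   varies by release, so treat this as a sketch rather than the
   authoritative contents of the file:

   .. code-block:: ini

      # Before (illustrative filter list):
      [pipeline:public_api]
      pipeline = cors sizelimit url_normalize request_id admin_token_auth build_auth_context token_auth json_body ec2_extension public_service

      # After removing admin_token_auth:
      [pipeline:public_api]
      pipeline = cors sizelimit url_normalize request_id build_auth_context token_auth json_body ec2_extension public_service

   .. end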
#. Unset the temporary ``OS_AUTH_URL`` and ``OS_PASSWORD``
   environment variables:

   .. code-block:: console

      $ unset OS_AUTH_URL OS_PASSWORD

   .. end

#. As the ``admin`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:35357/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name admin --os-username admin token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:14:07.056119Z                                     |
      | id         | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
      |            | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
      |            | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws       |
      | project_id | 343d245e850143a096806dfaefa9afdc                                |
      | user_id    | ac3377633149401296f6c0d92d79dc16                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``admin`` user.

#. As the ``demo`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:5000/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name demo --os-username demo token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:15:39.014479Z                                     |
      | id         | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
      |            | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
      |            | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U       |
      | project_id | ed0b60bf607743088218b0a533d5943f                                |
      | user_id    | 58126687cbcc4888bfa9ab73a2256f27                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``demo``
      user and API port 5000, which only allows regular (non-admin)
      access to the Identity service API.
@ -1,83 +0,0 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the Identity service before installing other
services.

.. note::

   Perform these commands on the controller node.

#. For security reasons, disable the temporary authentication
   token mechanism:

   Edit the ``/etc/keystone/keystone-paste.ini``
   file and remove ``admin_token_auth`` from the
   ``[pipeline:public_api]``, ``[pipeline:admin_api]``,
   and ``[pipeline:api_v3]`` sections.

#. Unset the temporary ``OS_AUTH_URL`` and ``OS_PASSWORD``
   environment variables:

   .. code-block:: console

      $ unset OS_AUTH_URL OS_PASSWORD

   .. end

#. As the ``admin`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:35357/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name admin --os-username admin token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:14:07.056119Z                                     |
      | id         | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
      |            | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
      |            | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws       |
      | project_id | 343d245e850143a096806dfaefa9afdc                                |
      | user_id    | ac3377633149401296f6c0d92d79dc16                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``admin`` user.

#. As the ``demo`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:5000/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name demo --os-username demo token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:15:39.014479Z                                     |
      | id         | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
      |            | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
      |            | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U       |
      | project_id | ed0b60bf607743088218b0a533d5943f                                |
      | user_id    | 58126687cbcc4888bfa9ab73a2256f27                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``demo``
      user and API port 5000, which only allows regular (non-admin)
      access to the Identity service API.
@ -1,83 +0,0 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the Identity service before installing other
services.

.. note::

   Perform these commands on the controller node.

#. For security reasons, disable the temporary authentication
   token mechanism:

   Edit the ``/etc/keystone/keystone-paste.ini``
   file and remove ``admin_token_auth`` from the
   ``[pipeline:public_api]``, ``[pipeline:admin_api]``,
   and ``[pipeline:api_v3]`` sections.

#. Unset the temporary ``OS_AUTH_URL`` and ``OS_PASSWORD``
   environment variables:

   .. code-block:: console

      $ unset OS_AUTH_URL OS_PASSWORD

   .. end

#. As the ``admin`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:35357/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name admin --os-username admin token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:14:07.056119Z                                     |
      | id         | gAAAAABWvi7_B8kKQD9wdXac8MoZiQldmjEO643d-e_j-XXq9AmIegIbA7UHGPv |
      |            | atnN21qtOMjCFWX7BReJEQnVOAj3nclRQgAYRsfSU_MrsuWb4EDtnjU7HEpoBb4 |
      |            | o6ozsA_NmFWEpLeKy0uNn_WeKbAhYygrsmQGA49dclHVnz-OMVLiyM9ws       |
      | project_id | 343d245e850143a096806dfaefa9afdc                                |
      | user_id    | ac3377633149401296f6c0d92d79dc16                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``admin`` user.

#. As the ``demo`` user, request an authentication token:

   .. code-block:: console

      $ openstack --os-auth-url http://controller:5000/v3 \
        --os-project-domain-name Default --os-user-domain-name Default \
        --os-project-name demo --os-username demo token issue

      Password:
      +------------+-----------------------------------------------------------------+
      | Field      | Value                                                           |
      +------------+-----------------------------------------------------------------+
      | expires    | 2016-02-12T20:15:39.014479Z                                     |
      | id         | gAAAAABWvi9bsh7vkiby5BpCCnc-JkbGhm9wH3fabS_cY7uabOubesi-Me6IGWW |
      |            | yQqNegDDZ5jw7grI26vvgy1J5nCVwZ_zFRqPiz_qhbq29mgbQLglbkq6FQvzBRQ |
      |            | JcOzq3uwhzNxszJWmzGC7rJE_H0A_a3UFhqv8M4zMRYSbS2YF0MyFmp_U       |
      | project_id | ed0b60bf607743088218b0a533d5943f                                |
      | user_id    | 58126687cbcc4888bfa9ab73a2256f27                                |
      +------------+-----------------------------------------------------------------+

   .. end

   .. note::

      This command uses the password for the ``demo``
      user and API port 5000, which only allows regular (non-admin)
      access to the Identity service API.
@ -1,14 +0,0 @@
Verify operation
~~~~~~~~~~~~~~~~

Verify operation of the Identity service before installing other
services.

.. note::

   Perform these commands on the controller node.

.. toctree::
   :glob:

   keystone-verify-*
@ -1,11 +0,0 @@
================
Identity service
================

.. toctree::

   common/get-started-identity.rst
   keystone-install.rst
   keystone-users.rst
   keystone-verify.rst
   keystone-openrc.rst
@ -25,7 +25,7 @@ Create virtual networks
 -----------------------

 Create virtual networks for the networking option that you chose
-in :ref:`networking`. If you chose option 1, create only the provider
+when configuring Neutron. If you chose option 1, create only the provider
 network. If you chose option 2, create the provider and self-service
 networks.
@ -1,146 +0,0 @@

Install and configure compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The compute node handles connectivity and :term:`security groups <security
group>` for instances.

Install the components
----------------------

.. code-block:: console

   # apt install neutron-linuxbridge-agent

.. end

Configure the common component
------------------------------

The Networking common component configuration includes the
authentication mechanism, message queue, and plug-in.

.. include:: shared/note_configuration_vary_by_distribution.rst

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, comment out any ``connection`` options
    because compute nodes do not directly access the database.

  * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
    message queue access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       transport_url = rabbit://openstack:RABBIT_PASS@controller

    .. end

    Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
    account in RabbitMQ.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       auth_strategy = keystone

       [keystone_authtoken]
       # ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       memcached_servers = controller:11211
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

Configure networking options
----------------------------

Choose the same networking option that you chose for the controller node to
configure services specific to it. Afterwards, return here and proceed to
:ref:`neutron-compute-compute-debian`.

.. toctree::
   :maxdepth: 1

   neutron-compute-install-option1.rst
   neutron-compute-install-option2.rst

.. _neutron-compute-compute-debian:

Configure the Compute service to use the Networking service
------------------------------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and complete the following actions:

  * In the ``[neutron]`` section, configure access parameters:

    .. path /etc/nova/nova.conf
    .. code-block:: ini

       [neutron]
       # ...
       url = http://controller:9696
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

Finalize installation
---------------------

#. Restart the Compute service:

   .. code-block:: console

      # service nova-compute restart

   .. end

#. Restart the Linux bridge agent:

   .. code-block:: console

      # service neutron-linuxbridge-agent restart

   .. end
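
A quick way to confirm the agent registered with the Networking server
(an illustrative check, not part of the original steps; run it on the
controller node with the ``admin-openrc`` credentials sourced):

.. code-block:: console

   $ openstack network agent list

.. end
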
@ -1,161 +0,0 @@

Install and configure compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The compute node handles connectivity and :term:`security groups <security
group>` for instances.

Install the components
----------------------

.. code-block:: console

   # zypper install --no-recommends \
     openstack-neutron-linuxbridge-agent bridge-utils

.. end

Configure the common component
------------------------------

The Networking common component configuration includes the
authentication mechanism, message queue, and plug-in.

.. include:: shared/note_configuration_vary_by_distribution.rst

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, comment out any ``connection`` options
    because compute nodes do not directly access the database.

  * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
    message queue access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       transport_url = rabbit://openstack:RABBIT_PASS@controller

    .. end

    Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
    account in RabbitMQ.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       auth_strategy = keystone

       [keystone_authtoken]
       # ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       memcached_servers = controller:11211
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

Configure networking options
----------------------------

Choose the same networking option that you chose for the controller node to
configure services specific to it. Afterwards, return here and proceed to
:ref:`neutron-compute-compute-obs`.

.. toctree::
   :maxdepth: 1

   neutron-compute-install-option1.rst
   neutron-compute-install-option2.rst

.. _neutron-compute-compute-obs:

Configure the Compute service to use the Networking service
------------------------------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and complete the following actions:

  * In the ``[neutron]`` section, configure access parameters:

    .. path /etc/nova/nova.conf
    .. code-block:: ini

       [neutron]
       # ...
       url = http://controller:9696
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

Finalize installation
---------------------

#. The Networking service initialization scripts expect the variable
   ``NEUTRON_PLUGIN_CONF`` in the ``/etc/sysconfig/neutron`` file to
   reference the ML2 plug-in configuration file. Ensure that the
   ``/etc/sysconfig/neutron`` file contains the following:

   .. path /etc/sysconfig/neutron
   .. code-block:: ini

      NEUTRON_PLUGIN_CONF="/etc/neutron/plugins/ml2/ml2_conf.ini"

   .. end

#. Restart the Compute service:

   .. code-block:: console

      # systemctl restart openstack-nova-compute.service

   .. end

#. Start the Linux bridge agent and configure it to start when the
   system boots:

   .. code-block:: console

      # systemctl enable openstack-neutron-linuxbridge-agent.service
      # systemctl start openstack-neutron-linuxbridge-agent.service

   .. end

@ -1,53 +0,0 @@

Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Configure the Networking components on a *compute* node.

Configure the Linux bridge agent
--------------------------------

The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
  complete the following actions:

  * In the ``[linux_bridge]`` section, map the provider virtual network to the
    provider physical network interface:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [linux_bridge]
       physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

    .. end

    Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
    provider physical network interface. See :ref:`environment-networking`
    for more information.

  * In the ``[vxlan]`` section, disable VXLAN overlay networks:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [vxlan]
       enable_vxlan = false

    .. end

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_security_group = true
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

    .. end

Return to *Networking compute node configuration*.

@ -1,64 +0,0 @@

Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Configure the Networking components on a *compute* node.

Configure the Linux bridge agent
--------------------------------

The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
  complete the following actions:

  * In the ``[linux_bridge]`` section, map the provider virtual network to the
    provider physical network interface:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [linux_bridge]
       physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

    .. end

    Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
    provider physical network interface. See :ref:`environment-networking`
    for more information.

  * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
    IP address of the physical network interface that handles overlay
    networks, and enable layer-2 population:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [vxlan]
       enable_vxlan = true
       local_ip = OVERLAY_INTERFACE_IP_ADDRESS
       l2_population = true

    .. end

    Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
    underlying physical network interface that handles overlay networks. The
    example architecture uses the management interface to tunnel traffic to
    the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with
    the management IP address of the compute node. See
    :ref:`environment-networking` for more information.

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_security_group = true
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

    .. end

Return to *Networking compute node configuration*.

@ -1,164 +0,0 @@

Install and configure compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The compute node handles connectivity and :term:`security groups <security
group>` for instances.

Install the components
----------------------

.. todo:

   https://bugzilla.redhat.com/show_bug.cgi?id=1334626

.. code-block:: console

   # yum install openstack-neutron-linuxbridge ebtables ipset

.. end

Configure the common component
------------------------------

The Networking common component configuration includes the
authentication mechanism, message queue, and plug-in.

.. include:: shared/note_configuration_vary_by_distribution.rst

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, comment out any ``connection`` options
    because compute nodes do not directly access the database.

  * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
    message queue access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       transport_url = rabbit://openstack:RABBIT_PASS@controller

    .. end

    Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
    account in RabbitMQ.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       auth_strategy = keystone

       [keystone_authtoken]
       # ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       memcached_servers = controller:11211
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

  * In the ``[oslo_concurrency]`` section, configure the lock path:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [oslo_concurrency]
       # ...
       lock_path = /var/lib/neutron/tmp

    .. end

Configure networking options
----------------------------

Choose the same networking option that you chose for the controller node to
configure services specific to it. Afterwards, return here and proceed to
:ref:`neutron-compute-compute-rdo`.

.. toctree::
   :maxdepth: 1

   neutron-compute-install-option1.rst
   neutron-compute-install-option2.rst

.. _neutron-compute-compute-rdo:

Configure the Compute service to use the Networking service
------------------------------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and complete the following actions:

  * In the ``[neutron]`` section, configure access parameters:

    .. path /etc/nova/nova.conf
    .. code-block:: ini

       [neutron]
       # ...
       url = http://controller:9696
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

Finalize installation
---------------------

#. Restart the Compute service:

   .. code-block:: console

      # systemctl restart openstack-nova-compute.service

   .. end

#. Start the Linux bridge agent and configure it to start when the
   system boots:

   .. code-block:: console

      # systemctl enable neutron-linuxbridge-agent.service
      # systemctl start neutron-linuxbridge-agent.service

   .. end

@ -1,146 +0,0 @@

Install and configure compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The compute node handles connectivity and :term:`security groups <security
group>` for instances.

Install the components
----------------------

.. code-block:: console

   # apt install neutron-linuxbridge-agent

.. end

Configure the common component
------------------------------

The Networking common component configuration includes the
authentication mechanism, message queue, and plug-in.

.. include:: shared/note_configuration_vary_by_distribution.rst

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, comment out any ``connection`` options
    because compute nodes do not directly access the database.

  * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
    message queue access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       transport_url = rabbit://openstack:RABBIT_PASS@controller

    .. end

    Replace ``RABBIT_PASS`` with the password you chose for the ``openstack``
    account in RabbitMQ.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       auth_strategy = keystone

       [keystone_authtoken]
       # ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       memcached_servers = controller:11211
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

Configure networking options
----------------------------

Choose the same networking option that you chose for the controller node to
configure services specific to it. Afterwards, return here and proceed to
:ref:`neutron-compute-compute-ubuntu`.

.. toctree::
   :maxdepth: 1

   neutron-compute-install-option1.rst
   neutron-compute-install-option2.rst

.. _neutron-compute-compute-ubuntu:

Configure the Compute service to use the Networking service
------------------------------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and complete the following actions:

  * In the ``[neutron]`` section, configure access parameters:

    .. path /etc/nova/nova.conf
    .. code-block:: ini

       [neutron]
       # ...
       url = http://controller:9696
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

Finalize installation
---------------------

#. Restart the Compute service:

   .. code-block:: console

      # service nova-compute restart

   .. end

#. Restart the Linux bridge agent:

   .. code-block:: console

      # service neutron-linuxbridge-agent restart

   .. end

@ -1,9 +0,0 @@

Install and configure compute node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. toctree::

   neutron-compute-install-debian
   neutron-compute-install-obs
   neutron-compute-install-rdo
   neutron-compute-install-ubuntu

@ -1,54 +0,0 @@

Networking (neutron) concepts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

OpenStack Networking (neutron) manages all networking facets for the
Virtual Networking Infrastructure (VNI) and the access layer aspects
of the Physical Networking Infrastructure (PNI) in your OpenStack
environment. OpenStack Networking enables projects to create advanced
virtual network topologies which may include services such as a
:term:`firewall`, a :term:`load balancer`, and a
:term:`virtual private network (VPN)`.

Networking provides networks, subnets, and routers as object abstractions.
Each abstraction has functionality that mimics its physical counterpart:
networks contain subnets, and routers route traffic between different
subnets and networks.

Any given Networking setup has at least one external network. Unlike
the other networks, the external network is not merely a virtually
defined network. Instead, it represents a view into a slice of the
physical, external network accessible outside the OpenStack
installation. IP addresses on the external network are accessible by
anybody physically on the outside network.

In addition to external networks, any Networking setup has one or more
internal networks. These software-defined networks connect directly to
the VMs. Only the VMs on any given internal network, or those on subnets
connected through interfaces to the same router, can access VMs connected
to that network directly.

For the outside network to access VMs, and vice versa, routers between
the networks are needed. Each router has one gateway that is connected
to an external network and one or more interfaces connected to internal
networks. Like a physical router, subnets can access machines on other
subnets that are connected to the same router, and machines can access the
outside network through the gateway for the router.
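
As a concrete illustration (a sketch only; the names ``selfservice``,
``provider``, and ``router`` are placeholders, and the actual
walkthrough appears later in this guide), creating an internal network
and routing it to an external one looks like this:

.. code-block:: console

   $ openstack network create selfservice
   $ openstack subnet create --network selfservice \
     --subnet-range 172.16.1.0/24 selfservice
   $ openstack router create router
   $ openstack router add subnet router selfservice
   $ openstack router set router --external-gateway provider

.. end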

Additionally, you can allocate IP addresses on external networks to
ports on the internal network. Whenever something is connected to a
subnet, that connection is called a port. You can associate external
network IP addresses with ports to VMs. This way, entities on the
outside network can access VMs.
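
For example, associating an external address with an instance's port is
a two-step operation (an illustrative sketch; ``INSTANCE_NAME`` and
``FLOATING_IP_ADDRESS`` are placeholders, and ``provider`` is assumed to
be the external network):

.. code-block:: console

   $ openstack floating ip create provider
   $ openstack server add floating ip INSTANCE_NAME FLOATING_IP_ADDRESS

.. end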

Networking also supports *security groups*. Security groups enable
administrators to define firewall rules in groups. A VM can belong to
one or more security groups, and Networking applies the rules in those
security groups to block or unblock ports, port ranges, or traffic types
for that VM.
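
For instance, permitting inbound SSH to every instance in the stock
``default`` security group takes a single rule (a sketch, assuming that
group exists in your project):

.. code-block:: console

   $ openstack security group rule create --proto tcp --dst-port 22 default

.. end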

Each plug-in that Networking uses has its own concepts. While not vital
to operating the VNI and OpenStack environment, understanding these
concepts can help you set up Networking. All Networking installations
use a core plug-in and a security group plug-in (or just the No-Op
security group plug-in). Additionally, Firewall-as-a-Service (FWaaS) and
Load-Balancer-as-a-Service (LBaaS) plug-ins are available.

@ -1,314 +0,0 @@

Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Prerequisites
-------------

Before you configure the OpenStack Networking (neutron) service, you
must create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        $ mysql -u root -p

     .. end

   * Create the ``neutron`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE neutron;

     .. end

   * Grant proper access to the ``neutron`` database, replacing
     ``NEUTRON_DBPASS`` with a suitable password:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
          IDENTIFIED BY 'NEUTRON_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
          IDENTIFIED BY 'NEUTRON_DBPASS';

     .. end

   * Exit the database access client.
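
     A minimal sketch for completeness (``exit`` and ``quit`` behave the
     same in the MariaDB client):

     .. code-block:: console

        MariaDB [(none)]> exit

     .. end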

#. Source the ``admin`` credentials to gain access to admin-only CLI
   commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create the ``neutron`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt neutron

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | fdb0f541e28141719b6a43c8944bf1fb |
        | name                | neutron                          |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``neutron`` user:

     .. code-block:: console

        $ openstack role add --project service --user neutron admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``neutron`` service entity:

     .. code-block:: console

        $ openstack service create --name neutron \
          --description "OpenStack Networking" network

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Networking             |
        | enabled     | True                             |
        | id          | f71529314dab4a4d8eca427e701d209e |
        | name        | neutron                          |
        | type        | network                          |
        +-------------+----------------------------------+

     .. end

#. Create the Networking service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        network public http://controller:9696

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 85d80a6d02fc4b7683f611d7fc1493a3 |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | f71529314dab4a4d8eca427e701d209e |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        network internal http://controller:9696

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 09753b537ac74422a68d2d791cf3714f |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | f71529314dab4a4d8eca427e701d209e |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        network admin http://controller:9696

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 1ee14289c9374dffb5db92a5c112fc4e |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | f71529314dab4a4d8eca427e701d209e |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+

   .. end

Configure networking options
----------------------------

You can deploy the Networking service using one of two architectures
represented by options 1 and 2.

Option 1 deploys the simplest possible architecture that only supports
attaching instances to provider (external) networks. It offers no
self-service (private) networks, routers, or floating IP addresses. Only
the ``admin`` or other privileged user can manage provider networks.

Option 2 augments option 1 with layer-3 services that support attaching
instances to self-service networks. The ``demo`` or other unprivileged
user can manage self-service networks, including routers that provide
connectivity between self-service and provider networks. Additionally,
floating IP addresses provide connectivity to instances using self-service
networks from external networks such as the Internet.

Self-service networks typically use overlay networks. Overlay network
protocols such as VXLAN include additional headers that increase overhead
and decrease space available for the payload or user data. Without knowledge
of the virtual network infrastructure, instances attempt to send packets
using the default Ethernet :term:`maximum transmission unit (MTU)` of 1500
bytes. The Networking service automatically provides the correct MTU value
to instances via DHCP. However, some cloud images do not use DHCP or ignore
the DHCP MTU option and require configuration using metadata or a script.
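
For example, VXLAN encapsulation consumes roughly 50 bytes of each
frame (outer Ethernet, IP, UDP, and VXLAN headers), so on a 1500-byte
physical network an instance should use an MTU of about 1450. A boot
script along these lines applies the value manually (an illustrative
sketch; ``eth0`` and the exact MTU depend on the image and deployment):

.. code-block:: console

   # ip link set dev eth0 mtu 1450

.. end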

.. note::

   Option 2 also supports attaching instances to provider networks.

Choose one of the following networking options to configure services
specific to it. Afterwards, return here and proceed to
:ref:`neutron-controller-metadata-agent-debian`.

.. toctree::
   :maxdepth: 1

   neutron-controller-install-option1.rst
   neutron-controller-install-option2.rst

.. _neutron-controller-metadata-agent-debian:

Configure the metadata agent
----------------------------

The :term:`metadata agent <Metadata agent>` provides configuration information
such as credentials to instances.

* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the metadata host and shared
    secret:

    .. path /etc/neutron/metadata_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       nova_metadata_ip = controller
       metadata_proxy_shared_secret = METADATA_SECRET

    .. end

    Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy.
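
    One convenient way to generate a random secret (an illustrative
    choice, not something the guide mandates) is:

    .. code-block:: console

       $ openssl rand -hex 10

    .. end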

Configure the Compute service to use the Networking service
------------------------------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and perform the following actions:

  * In the ``[neutron]`` section, configure access parameters, enable the
    metadata proxy, and configure the secret:

    .. path /etc/nova/nova.conf
    .. code-block:: ini

       [neutron]
       # ...
       url = http://controller:9696
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = neutron
       password = NEUTRON_PASS
       service_metadata_proxy = true
       metadata_proxy_shared_secret = METADATA_SECRET

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    Replace ``METADATA_SECRET`` with the secret you chose for the metadata
    proxy.

Finalize installation
---------------------

#. Populate the database:

   .. code-block:: console

      # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

   .. end

   .. note::

      Database population occurs later for Networking because the script
      requires complete server and plug-in configuration files.

#. Restart the Compute API service:

   .. code-block:: console

      # service nova-api restart

   .. end

#. Restart the Networking services.

   For both networking options:

   .. code-block:: console

      # service neutron-server restart
      # service neutron-linuxbridge-agent restart
      # service neutron-dhcp-agent restart
      # service neutron-metadata-agent restart

   .. end

   For networking option 2, also restart the layer-3 service:

   .. code-block:: console

      # service neutron-l3-agent restart

   .. end

@ -1,319 +0,0 @@

Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Prerequisites
-------------

Before you configure the OpenStack Networking (neutron) service, you
must create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        $ mysql -u root -p

     .. end

   * Create the ``neutron`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE neutron;

     .. end

   * Grant proper access to the ``neutron`` database, replacing
     ``NEUTRON_DBPASS`` with a suitable password:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
          IDENTIFIED BY 'NEUTRON_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
          IDENTIFIED BY 'NEUTRON_DBPASS';

     .. end

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to admin-only CLI
   commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create the ``neutron`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt neutron

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | fdb0f541e28141719b6a43c8944bf1fb |
        | name                | neutron                          |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``neutron`` user:

     .. code-block:: console

        $ openstack role add --project service --user neutron admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``neutron`` service entity:

     .. code-block:: console

        $ openstack service create --name neutron \
          --description "OpenStack Networking" network

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Networking             |
        | enabled     | True                             |
        | id          | f71529314dab4a4d8eca427e701d209e |
        | name        | neutron                          |
        | type        | network                          |
        +-------------+----------------------------------+

     .. end

#. Create the Networking service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        network public http://controller:9696

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 85d80a6d02fc4b7683f611d7fc1493a3 |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | f71529314dab4a4d8eca427e701d209e |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        network internal http://controller:9696

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 09753b537ac74422a68d2d791cf3714f |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | f71529314dab4a4d8eca427e701d209e |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        network admin http://controller:9696

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 1ee14289c9374dffb5db92a5c112fc4e |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | f71529314dab4a4d8eca427e701d209e |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+

   .. end

Configure networking options
----------------------------

You can deploy the Networking service using one of two architectures
represented by options 1 and 2.

Option 1 deploys the simplest possible architecture that only supports
attaching instances to provider (external) networks. It offers no
self-service (private) networks, routers, or floating IP addresses. Only
the ``admin`` or other privileged user can manage provider networks.

Option 2 augments option 1 with layer-3 services that support attaching
instances to self-service networks. The ``demo`` or other unprivileged
user can manage self-service networks, including routers that provide
connectivity between self-service and provider networks. Additionally,
floating IP addresses provide connectivity to instances using self-service
networks from external networks such as the Internet.

Self-service networks typically use overlay networks. Overlay network
protocols such as VXLAN include additional headers that increase overhead
and decrease space available for the payload or user data. Without knowledge
of the virtual network infrastructure, instances attempt to send packets
using the default Ethernet :term:`maximum transmission unit (MTU)` of 1500
bytes. The Networking service automatically provides the correct MTU value
to instances via DHCP. However, some cloud images do not use DHCP or ignore
the DHCP MTU option and require configuration using metadata or a script.

.. note::

   Option 2 also supports attaching instances to provider networks.

Choose one of the following networking options to configure services
specific to it. Afterwards, return here and proceed to
:ref:`neutron-controller-metadata-agent-obs`.

.. toctree::
   :maxdepth: 1

   neutron-controller-install-option1.rst
   neutron-controller-install-option2.rst

.. _neutron-controller-metadata-agent-obs:

Configure the metadata agent
----------------------------

The :term:`metadata agent <Metadata agent>` provides configuration information
such as credentials to instances.

* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the metadata host and shared
    secret:

    .. path /etc/neutron/metadata_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       nova_metadata_ip = controller
       metadata_proxy_shared_secret = METADATA_SECRET

    .. end

    Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy.

Configure the Compute service to use the Networking service
------------------------------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and perform the following actions:

  * In the ``[neutron]`` section, configure access parameters, enable the
    metadata proxy, and configure the secret:

    .. path /etc/nova/nova.conf
    .. code-block:: ini

       [neutron]
       # ...
       url = http://controller:9696
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = neutron
       password = NEUTRON_PASS
       service_metadata_proxy = true
       metadata_proxy_shared_secret = METADATA_SECRET

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    Replace ``METADATA_SECRET`` with the secret you chose for the metadata
    proxy.

Finalize installation
---------------------

.. note::

   SLES enables apparmor by default and restricts dnsmasq. You need to
   either completely disable apparmor or disable only the dnsmasq
   profile:

   .. code-block:: console

      # ln -s /etc/apparmor.d/usr.sbin.dnsmasq /etc/apparmor.d/disable/
      # systemctl restart apparmor

   .. end

#. Restart the Compute API service:

   .. code-block:: console

      # systemctl restart openstack-nova-api.service

   .. end

#. Start the Networking services and configure them to start when the system
   boots.

   For both networking options:

   .. code-block:: console

      # systemctl enable openstack-neutron.service \
        openstack-neutron-linuxbridge-agent.service \
        openstack-neutron-dhcp-agent.service \
        openstack-neutron-metadata-agent.service
      # systemctl start openstack-neutron.service \
        openstack-neutron-linuxbridge-agent.service \
        openstack-neutron-dhcp-agent.service \
        openstack-neutron-metadata-agent.service

   .. end

   For networking option 2, also enable and start the layer-3 service:

   .. code-block:: console

      # systemctl enable openstack-neutron-l3-agent.service
      # systemctl start openstack-neutron-l3-agent.service

   .. end

@ -1,287 +0,0 @@
|
|||||||
Networking Option 1: Provider networks
|
|
||||||
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
|
|
||||||
|
|
||||||
Install and configure the Networking components on the *controller* node.
|
|
||||||
|
|
||||||
Install the components
|
|
||||||
----------------------
|
|
||||||
|
|
||||||
|
|
||||||
.. code-block:: console
|
|
||||||
|
|
||||||
# apt install neutron-server neutron-linuxbridge-agent \
|
|
||||||
neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
Configure the server component
|
|
||||||
------------------------------
|
|
||||||
|
|
||||||
The Networking server component configuration includes the database,
|
|
||||||
authentication mechanism, message queue, topology change notifications,
|
|
||||||
and plug-in.
|
|
||||||
|
|
||||||
.. include:: shared/note_configuration_vary_by_distribution.rst
|
|
||||||
|
|
||||||
* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
|
|
||||||
actions:
|
|
||||||
|
|
||||||
* In the ``[database]`` section, configure database access:
|
|
||||||
|
|
||||||
.. path /etc/neutron/neutron.conf
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[database]
|
|
||||||
# ...
|
|
||||||
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
Replace ``NEUTRON_DBPASS`` with the password you chose for the
|
|
||||||
database.
|
|
||||||
|
|
||||||
.. note::
|
|
||||||
|
|
||||||
Comment out or remove any other ``connection`` options in the
|
|
||||||
``[database]`` section.
|
|
||||||
|
|
||||||
* In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
|
|
||||||
plug-in and disable additional plug-ins:
|
|
||||||
|
|
||||||
.. path /etc/neutron/neutron.conf
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[DEFAULT]
|
|
||||||
# ...
|
|
||||||
core_plugin = ml2
|
|
||||||
service_plugins =
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
* In the ``[DEFAULT]`` section, configure ``RabbitMQ``
|
|
||||||
message queue access:
|
|
||||||
|
|
||||||
.. path /etc/neutron/neutron.conf
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[DEFAULT]
|
|
||||||
# ...
|
|
||||||
transport_url = rabbit://openstack:RABBIT_PASS@controller
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
Replace ``RABBIT_PASS`` with the password you chose for the
|
|
||||||
``openstack`` account in RabbitMQ.
|
|
||||||
|
|
||||||
* In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
|
|
||||||
Identity service access:
|
|
||||||
|
|
||||||
.. path /etc/neutron/neutron.conf
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[DEFAULT]
|
|
||||||
# ...
|
|
||||||
auth_strategy = keystone
|
|
||||||
|
|
||||||
[keystone_authtoken]
|
|
||||||
# ...
|
|
||||||
auth_uri = http://controller:5000
|
|
||||||
auth_url = http://controller:35357
|
|
||||||
memcached_servers = controller:11211
|
|
||||||
auth_type = password
|
|
||||||
project_domain_name = default
|
|
||||||
user_domain_name = default
|
|
||||||
project_name = service
|
|
||||||
username = neutron
|
|
||||||
password = NEUTRON_PASS
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
|
|
||||||
user in the Identity service.
|
|
||||||
|
|
||||||
.. note::
|
|
||||||
|
|
||||||
Comment out or remove any other options in the
|
|
||||||
``[keystone_authtoken]`` section.
|
|
||||||
|
|
||||||
* In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
|
    notify Compute of network topology changes:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       notify_nova_on_port_status_changes = true
       notify_nova_on_port_data_changes = true

       [nova]
       # ...
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = nova
       password = NOVA_PASS

    .. end

    Replace ``NOVA_PASS`` with the password you chose for the ``nova``
    user in the Identity service.

Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.

* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
  following actions:

  * In the ``[ml2]`` section, enable flat and VLAN networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       type_drivers = flat,vlan

    .. end

  * In the ``[ml2]`` section, disable self-service networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       tenant_network_types =

    .. end

  * In the ``[ml2]`` section, enable the Linux bridge mechanism:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       mechanism_drivers = linuxbridge

    .. end

    .. warning::

       After you configure the ML2 plug-in, removing values in the
       ``type_drivers`` option can lead to database inconsistency.

  * In the ``[ml2]`` section, enable the port security extension driver:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       extension_drivers = port_security

    .. end

  * In the ``[ml2_type_flat]`` section, configure the provider virtual
    network as a flat network:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2_type_flat]
       # ...
       flat_networks = provider

    .. end

  * In the ``[securitygroup]`` section, enable :term:`ipset` to increase
    efficiency of security group rules:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_ipset = true

    .. end

Configure the Linux bridge agent
--------------------------------

The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
  complete the following actions:

  * In the ``[linux_bridge]`` section, map the provider virtual network to the
    provider physical network interface:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [linux_bridge]
       physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

    .. end

    Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
    provider physical network interface. See :ref:`environment-networking`
    for more information. An illustrative mapping follows this list.

  * In the ``[vxlan]`` section, disable VXLAN overlay networks:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [vxlan]
       enable_vxlan = false

    .. end

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_security_group = true
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

    .. end

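As an illustrative example of the mapping above: on a host whose second NIC
carries the provider network, the interface name might be ``eth1`` (a
hypothetical name; use whatever :ref:`environment-networking` assigns in
your environment):

.. code-block:: ini

   [linux_bridge]
   # eth1 is illustrative only; substitute your provider interface
   physical_interface_mappings = provider:eth1

.. end
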
Configure the DHCP agent
------------------------

The :term:`DHCP agent` provides DHCP services for virtual networks.

* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
    Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
    networks can access metadata over the network:

    .. path /etc/neutron/dhcp_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       interface_driver = linuxbridge
       dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
       enable_isolated_metadata = true

    .. end

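Once the Networking services have been started (service activation is
covered on the parent page), an optional smoke test is to list the agents;
the Linux bridge, DHCP, and metadata agents should report ``:-)`` in the
``Alive`` column. This check is illustrative, not part of the official
procedure:

.. code-block:: console

   $ openstack network agent list

.. end
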
Return to *Networking controller node configuration*.

@ -1,289 +0,0 @@

Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install and configure the Networking components on the *controller* node.

Install the components
----------------------

.. code-block:: console

   # zypper install --no-recommends openstack-neutron \
     openstack-neutron-server openstack-neutron-linuxbridge-agent \
     openstack-neutron-dhcp-agent openstack-neutron-metadata-agent \
     bridge-utils

.. end

Configure the server component
------------------------------

The Networking server component configuration includes the database,
authentication mechanism, message queue, topology change notifications,
and plug-in.

.. include:: shared/note_configuration_vary_by_distribution.rst

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, configure database access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [database]
       # ...
       connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

    .. end

    Replace ``NEUTRON_DBPASS`` with the password you chose for the
    database. An optional connectivity check follows this list.

    .. note::

       Comment out or remove any other ``connection`` options in the
       ``[database]`` section.

  * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
    plug-in and disable additional plug-ins:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       core_plugin = ml2
       service_plugins =

    .. end

  * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
    message queue access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       transport_url = rabbit://openstack:RABBIT_PASS@controller

    .. end

    Replace ``RABBIT_PASS`` with the password you chose for the
    ``openstack`` account in RabbitMQ.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       auth_strategy = keystone

       [keystone_authtoken]
       # ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       memcached_servers = controller:11211
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

  * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
    notify Compute of network topology changes:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       notify_nova_on_port_status_changes = true
       notify_nova_on_port_data_changes = true

       [nova]
       # ...
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = nova
       password = NOVA_PASS

    .. end

    Replace ``NOVA_PASS`` with the password you chose for the ``nova``
    user in the Identity service.

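Optionally, you can confirm that the ``neutron`` database account works
from the controller before continuing. This check is illustrative and
assumes the ``mysql`` client is installed:

.. code-block:: console

   $ mysql -h controller -u neutron -p neutron -e "SHOW TABLES;"

.. end

Enter ``NEUTRON_DBPASS`` when prompted. An empty result is expected here
because the database is populated later in the procedure.
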
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.

* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
  following actions:

  * In the ``[ml2]`` section, enable flat and VLAN networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       type_drivers = flat,vlan

    .. end

  * In the ``[ml2]`` section, disable self-service networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       tenant_network_types =

    .. end

  * In the ``[ml2]`` section, enable the Linux bridge mechanism:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       mechanism_drivers = linuxbridge

    .. end

    .. warning::

       After you configure the ML2 plug-in, removing values in the
       ``type_drivers`` option can lead to database inconsistency.

  * In the ``[ml2]`` section, enable the port security extension driver:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       extension_drivers = port_security

    .. end

  * In the ``[ml2_type_flat]`` section, configure the provider virtual
    network as a flat network:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2_type_flat]
       # ...
       flat_networks = provider

    .. end

  * In the ``[securitygroup]`` section, enable :term:`ipset` to increase
    efficiency of security group rules:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_ipset = true

    .. end

Configure the Linux bridge agent
--------------------------------

The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
  complete the following actions:

  * In the ``[linux_bridge]`` section, map the provider virtual network to the
    provider physical network interface:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [linux_bridge]
       physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

    .. end

    Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
    provider physical network interface. See :ref:`environment-networking`
    for more information.

  * In the ``[vxlan]`` section, disable VXLAN overlay networks:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [vxlan]
       enable_vxlan = false

    .. end

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_security_group = true
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

    .. end

Configure the DHCP agent
------------------------

The :term:`DHCP agent` provides DHCP services for virtual networks.

* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
    Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
    networks can access metadata over the network:

    .. path /etc/neutron/dhcp_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       interface_driver = linuxbridge
       dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
       enable_isolated_metadata = true

    .. end

Return to *Networking controller node configuration*.

@ -1,299 +0,0 @@

Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install and configure the Networking components on the *controller* node.

Install the components
----------------------

.. code-block:: console

   # yum install openstack-neutron openstack-neutron-ml2 \
     openstack-neutron-linuxbridge ebtables

.. end

Configure the server component
------------------------------

The Networking server component configuration includes the database,
authentication mechanism, message queue, topology change notifications,
and plug-in.

.. include:: shared/note_configuration_vary_by_distribution.rst

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, configure database access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [database]
       # ...
       connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

    .. end

    Replace ``NEUTRON_DBPASS`` with the password you chose for the
    database.

    .. note::

       Comment out or remove any other ``connection`` options in the
       ``[database]`` section.

  * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
    plug-in and disable additional plug-ins:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       core_plugin = ml2
       service_plugins =

    .. end

  * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
    message queue access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       transport_url = rabbit://openstack:RABBIT_PASS@controller

    .. end

    Replace ``RABBIT_PASS`` with the password you chose for the
    ``openstack`` account in RabbitMQ. An optional broker check follows
    this list.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       auth_strategy = keystone

       [keystone_authtoken]
       # ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       memcached_servers = controller:11211
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

  * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
    notify Compute of network topology changes:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       notify_nova_on_port_status_changes = true
       notify_nova_on_port_data_changes = true

       [nova]
       # ...
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = nova
       password = NOVA_PASS

    .. end

    Replace ``NOVA_PASS`` with the password you chose for the ``nova``
    user in the Identity service.

  * In the ``[oslo_concurrency]`` section, configure the lock path:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [oslo_concurrency]
       # ...
       lock_path = /var/lib/neutron/tmp

    .. end

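Optionally, you can confirm the broker side of the ``transport_url``
setting by listing the RabbitMQ users on the controller. This check is
illustrative, not part of the official procedure:

.. code-block:: console

   # rabbitmqctl list_users

.. end

The output should include the ``openstack`` account created earlier in
this guide.
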
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.

* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
  following actions:

  * In the ``[ml2]`` section, enable flat and VLAN networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       type_drivers = flat,vlan

    .. end

  * In the ``[ml2]`` section, disable self-service networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       tenant_network_types =

    .. end

  * In the ``[ml2]`` section, enable the Linux bridge mechanism:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       mechanism_drivers = linuxbridge

    .. end

    .. warning::

       After you configure the ML2 plug-in, removing values in the
       ``type_drivers`` option can lead to database inconsistency.

  * In the ``[ml2]`` section, enable the port security extension driver:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       extension_drivers = port_security

    .. end

  * In the ``[ml2_type_flat]`` section, configure the provider virtual
    network as a flat network:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2_type_flat]
       # ...
       flat_networks = provider

    .. end

  * In the ``[securitygroup]`` section, enable :term:`ipset` to increase
    efficiency of security group rules:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_ipset = true

    .. end

Configure the Linux bridge agent
--------------------------------

The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
  complete the following actions:

  * In the ``[linux_bridge]`` section, map the provider virtual network to the
    provider physical network interface:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [linux_bridge]
       physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

    .. end

    Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
    provider physical network interface. See :ref:`environment-networking`
    for more information.

  * In the ``[vxlan]`` section, disable VXLAN overlay networks:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [vxlan]
       enable_vxlan = false

    .. end

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_security_group = true
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

    .. end

Configure the DHCP agent
------------------------

The :term:`DHCP agent` provides DHCP services for virtual networks.

* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
    Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
    networks can access metadata over the network:

    .. path /etc/neutron/dhcp_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       interface_driver = linuxbridge
       dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
       enable_isolated_metadata = true

    .. end

Return to *Networking controller node configuration*.

@ -1,288 +0,0 @@

Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install and configure the Networking components on the *controller* node.

Install the components
----------------------

.. code-block:: console

   # apt install neutron-server neutron-plugin-ml2 \
     neutron-linuxbridge-agent neutron-dhcp-agent \
     neutron-metadata-agent

.. end

Configure the server component
------------------------------

The Networking server component configuration includes the database,
authentication mechanism, message queue, topology change notifications,
and plug-in.

.. include:: shared/note_configuration_vary_by_distribution.rst

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, configure database access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [database]
       # ...
       connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

    .. end

    Replace ``NEUTRON_DBPASS`` with the password you chose for the
    database.

    .. note::

       Comment out or remove any other ``connection`` options in the
       ``[database]`` section.

  * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
    plug-in and disable additional plug-ins:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       core_plugin = ml2
       service_plugins =

    .. end

  * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
    message queue access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       transport_url = rabbit://openstack:RABBIT_PASS@controller

    .. end

    Replace ``RABBIT_PASS`` with the password you chose for the
    ``openstack`` account in RabbitMQ.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       auth_strategy = keystone

       [keystone_authtoken]
       # ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       memcached_servers = controller:11211
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service. An optional credential check follows
    this list.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

  * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
    notify Compute of network topology changes:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       notify_nova_on_port_status_changes = true
       notify_nova_on_port_data_changes = true

       [nova]
       # ...
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = nova
       password = NOVA_PASS

    .. end

    Replace ``NOVA_PASS`` with the password you chose for the ``nova``
    user in the Identity service.

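Optionally, you can validate the ``neutron`` service credentials by
requesting a token with them. This sketch is illustrative and assumes the
``python-openstackclient`` package is installed:

.. code-block:: console

   $ openstack --os-auth-url http://controller:35357/v3 \
     --os-project-domain-name default --os-user-domain-name default \
     --os-project-name service --os-username neutron token issue

.. end

Enter ``NEUTRON_PASS`` when prompted; a token table indicates that the
Identity service settings above are consistent.
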
Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.

* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
  following actions:

  * In the ``[ml2]`` section, enable flat and VLAN networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       type_drivers = flat,vlan

    .. end

  * In the ``[ml2]`` section, disable self-service networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       tenant_network_types =

    .. end

  * In the ``[ml2]`` section, enable the Linux bridge mechanism:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       mechanism_drivers = linuxbridge

    .. end

    .. warning::

       After you configure the ML2 plug-in, removing values in the
       ``type_drivers`` option can lead to database inconsistency.

  * In the ``[ml2]`` section, enable the port security extension driver:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       extension_drivers = port_security

    .. end

  * In the ``[ml2_type_flat]`` section, configure the provider virtual
    network as a flat network:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2_type_flat]
       # ...
       flat_networks = provider

    .. end

  * In the ``[securitygroup]`` section, enable :term:`ipset` to increase
    efficiency of security group rules:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_ipset = true

    .. end

Configure the Linux bridge agent
--------------------------------

The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
  complete the following actions:

  * In the ``[linux_bridge]`` section, map the provider virtual network to the
    provider physical network interface:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [linux_bridge]
       physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

    .. end

    Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
    provider physical network interface. See :ref:`environment-networking`
    for more information.

  * In the ``[vxlan]`` section, disable VXLAN overlay networks:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [vxlan]
       enable_vxlan = false

    .. end

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_security_group = true
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

    .. end

Configure the DHCP agent
------------------------

The :term:`DHCP agent` provides DHCP services for virtual networks.

* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
    Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
    networks can access metadata over the network:

    .. path /etc/neutron/dhcp_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       interface_driver = linuxbridge
       dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
       enable_isolated_metadata = true

    .. end

Return to *Networking controller node configuration*.

@ -1,9 +0,0 @@

Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install and configure the Networking components on the *controller* node.

.. toctree::
   :glob:

   neutron-controller-install-option1-*

@ -1,335 +0,0 @@

Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install and configure the Networking components on the *controller* node.

Install the components
----------------------

.. code-block:: console

   # apt install neutron-server neutron-linuxbridge-agent \
     neutron-dhcp-agent neutron-metadata-agent neutron-l3-agent

.. end

Configure the server component
------------------------------

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, configure database access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [database]
       # ...
       connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

    .. end

    Replace ``NEUTRON_DBPASS`` with the password you chose for the
    database.

    .. note::

       Comment out or remove any other ``connection`` options in the
       ``[database]`` section.

  * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
    plug-in, router service, and overlapping IP addresses:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       core_plugin = ml2
       service_plugins = router
       allow_overlapping_ips = true

    .. end

  * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
    message queue access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       transport_url = rabbit://openstack:RABBIT_PASS@controller

    .. end

    Replace ``RABBIT_PASS`` with the password you chose for the
    ``openstack`` account in RabbitMQ.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       auth_strategy = keystone

       [keystone_authtoken]
       # ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       memcached_servers = controller:11211
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

  * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
    notify Compute of network topology changes:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       notify_nova_on_port_status_changes = true
       notify_nova_on_port_data_changes = true

       [nova]
       # ...
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = nova
       password = NOVA_PASS

    .. end

    Replace ``NOVA_PASS`` with the password you chose for the ``nova``
    user in the Identity service.

Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.

* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
  following actions:

  * In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       type_drivers = flat,vlan,vxlan

    .. end

  * In the ``[ml2]`` section, enable VXLAN self-service networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       tenant_network_types = vxlan

    .. end

  * In the ``[ml2]`` section, enable the Linux bridge and layer-2 population
    mechanisms:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       mechanism_drivers = linuxbridge,l2population

    .. end

    .. warning::

       After you configure the ML2 plug-in, removing values in the
       ``type_drivers`` option can lead to database inconsistency.

    .. note::

       The Linux bridge agent only supports VXLAN overlay networks.

  * In the ``[ml2]`` section, enable the port security extension driver:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       extension_drivers = port_security

    .. end

  * In the ``[ml2_type_flat]`` section, configure the provider virtual
    network as a flat network:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2_type_flat]
       # ...
       flat_networks = provider

    .. end

  * In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier
    range for self-service networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2_type_vxlan]
       # ...
       vni_ranges = 1:1000

    .. end

  * In the ``[securitygroup]`` section, enable :term:`ipset` to increase
    efficiency of security group rules:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_ipset = true

    .. end

Configure the Linux bridge agent
--------------------------------

The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
  complete the following actions:

  * In the ``[linux_bridge]`` section, map the provider virtual network to the
    provider physical network interface:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [linux_bridge]
       physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

    .. end

    Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
    provider physical network interface. See :ref:`environment-networking`
    for more information.

  * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
    IP address of the physical network interface that handles overlay
    networks, and enable layer-2 population:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [vxlan]
       enable_vxlan = true
       local_ip = OVERLAY_INTERFACE_IP_ADDRESS
       l2_population = true

    .. end

    Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
    underlying physical network interface that handles overlay networks. The
    example architecture uses the management interface to tunnel traffic to
    the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with
    the management IP address of the controller node. See
    :ref:`environment-networking` for more information. A concrete example
    follows this list.

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_security_group = true
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

    .. end

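As a concrete illustration of the ``[vxlan]`` options above: in this
guide's example architecture the controller node's management address is
``10.0.0.11``, so the section would read as follows (adjust the address to
your own management network):

.. code-block:: ini

   [vxlan]
   enable_vxlan = true
   # 10.0.0.11 assumes the example architecture's controller management IP
   local_ip = 10.0.0.11
   l2_population = true

.. end
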
Configure the layer-3 agent
---------------------------

The :term:`Layer-3 (L3) agent` provides routing and NAT services for
self-service virtual networks.

* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Linux bridge interface
    driver:

    .. path /etc/neutron/l3_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       interface_driver = linuxbridge

    .. end

Configure the DHCP agent
------------------------

The :term:`DHCP agent` provides DHCP services for virtual networks.

* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
    Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
    networks can access metadata over the network:

    .. path /etc/neutron/dhcp_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       interface_driver = linuxbridge
       dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
       enable_isolated_metadata = true

    .. end

Return to *Networking controller node configuration*.

@ -1,337 +0,0 @@

Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install and configure the Networking components on the *controller* node.

Install the components
----------------------

.. code-block:: console

   # zypper install --no-recommends openstack-neutron \
     openstack-neutron-server openstack-neutron-linuxbridge-agent \
     openstack-neutron-l3-agent openstack-neutron-dhcp-agent \
     openstack-neutron-metadata-agent bridge-utils

.. end

Configure the server component
------------------------------

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, configure database access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [database]
       # ...
       connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

    .. end

    Replace ``NEUTRON_DBPASS`` with the password you chose for the
    database.

    .. note::

       Comment out or remove any other ``connection`` options in the
       ``[database]`` section.

  * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
    plug-in, router service, and overlapping IP addresses:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       core_plugin = ml2
       service_plugins = router
       allow_overlapping_ips = true

    .. end

  * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
    message queue access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       transport_url = rabbit://openstack:RABBIT_PASS@controller

    .. end

    Replace ``RABBIT_PASS`` with the password you chose for the
    ``openstack`` account in RabbitMQ.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       auth_strategy = keystone

       [keystone_authtoken]
       # ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       memcached_servers = controller:11211
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

  * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
    notify Compute of network topology changes:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       notify_nova_on_port_status_changes = true
       notify_nova_on_port_data_changes = true

       [nova]
       # ...
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = nova
       password = NOVA_PASS

    .. end

    Replace ``NOVA_PASS`` with the password you chose for the ``nova``
    user in the Identity service.

Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.

* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
  following actions:

  * In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       type_drivers = flat,vlan,vxlan

    .. end

  * In the ``[ml2]`` section, enable VXLAN self-service networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       tenant_network_types = vxlan

    .. end

  * In the ``[ml2]`` section, enable the Linux bridge and layer-2 population
    mechanisms:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       mechanism_drivers = linuxbridge,l2population

    .. end

    .. warning::

       After you configure the ML2 plug-in, removing values in the
       ``type_drivers`` option can lead to database inconsistency.

    .. note::

       The Linux bridge agent only supports VXLAN overlay networks.

  * In the ``[ml2]`` section, enable the port security extension driver:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       extension_drivers = port_security

    .. end

  * In the ``[ml2_type_flat]`` section, configure the provider virtual
    network as a flat network:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2_type_flat]
       # ...
       flat_networks = provider

    .. end

  * In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier
    range for self-service networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2_type_vxlan]
       # ...
       vni_ranges = 1:1000

    .. end

  * In the ``[securitygroup]`` section, enable :term:`ipset` to increase
    efficiency of security group rules:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_ipset = true

    .. end

Configure the Linux bridge agent
--------------------------------

The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
  complete the following actions:

  * In the ``[linux_bridge]`` section, map the provider virtual network to the
    provider physical network interface:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [linux_bridge]
       physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

    .. end

    Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
    provider physical network interface. See :ref:`environment-networking`
    for more information.

  * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
    IP address of the physical network interface that handles overlay
    networks, and enable layer-2 population:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [vxlan]
       enable_vxlan = true
       local_ip = OVERLAY_INTERFACE_IP_ADDRESS
       l2_population = true

    .. end

    Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
    underlying physical network interface that handles overlay networks. The
    example architecture uses the management interface to tunnel traffic to
    the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with
    the management IP address of the controller node. See
    :ref:`environment-networking` for more information.

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver (a related
    kernel check follows this list):

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_security_group = true
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

    .. end

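The iptables firewall driver relies on the kernel filtering bridged
traffic. As an illustrative check (assuming the ``br_netfilter`` module is
loaded, which most distributions handle automatically), both of the
following sysctl values should report ``1``:

.. code-block:: console

   # sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables

.. end
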
Configure the layer-3 agent
---------------------------

The :term:`Layer-3 (L3) agent` provides routing and NAT services for
self-service virtual networks.

* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Linux bridge interface
    driver:

    .. path /etc/neutron/l3_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       interface_driver = linuxbridge

    .. end

Configure the DHCP agent
------------------------

The :term:`DHCP agent` provides DHCP services for virtual networks.

* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
    Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
    networks can access metadata over the network:

    .. path /etc/neutron/dhcp_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       interface_driver = linuxbridge
       dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
       enable_isolated_metadata = true

    .. end

Return to *Networking controller node configuration*.

@ -1,347 +0,0 @@

Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install and configure the Networking components on the *controller* node.

Install the components
----------------------

.. code-block:: console

   # yum install openstack-neutron openstack-neutron-ml2 \
     openstack-neutron-linuxbridge ebtables

.. end

Configure the server component
------------------------------

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, configure database access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [database]
       # ...
       connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

    .. end

    Replace ``NEUTRON_DBPASS`` with the password you chose for the
    database.

    .. note::

       Comment out or remove any other ``connection`` options in the
       ``[database]`` section.

  * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
    plug-in, router service, and overlapping IP addresses:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       core_plugin = ml2
       service_plugins = router
       allow_overlapping_ips = true

    .. end

  * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
    message queue access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       transport_url = rabbit://openstack:RABBIT_PASS@controller

    .. end

    Replace ``RABBIT_PASS`` with the password you chose for the
    ``openstack`` account in RabbitMQ.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       auth_strategy = keystone

       [keystone_authtoken]
       # ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       memcached_servers = controller:11211
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

  * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
    notify Compute of network topology changes:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       notify_nova_on_port_status_changes = true
       notify_nova_on_port_data_changes = true

       [nova]
       # ...
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = nova
       password = NOVA_PASS

    .. end

    Replace ``NOVA_PASS`` with the password you chose for the ``nova``
    user in the Identity service.

  * In the ``[oslo_concurrency]`` section, configure the lock path (a brief
    sanity check follows this list):

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [oslo_concurrency]
       # ...
       lock_path = /var/lib/neutron/tmp

    .. end

Configure the Modular Layer 2 (ML2) plug-in
|
|
||||||
-------------------------------------------
|
|
||||||
|
|
||||||
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
|
|
||||||
and switching) virtual networking infrastructure for instances.
|
|
||||||
|
|
||||||
* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
|
|
||||||
following actions:
|
|
||||||
|
|
||||||
* In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks:
|
|
||||||
|
|
||||||
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[ml2]
|
|
||||||
# ...
|
|
||||||
type_drivers = flat,vlan,vxlan
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
* In the ``[ml2]`` section, enable VXLAN self-service networks:
|
|
||||||
|
|
||||||
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[ml2]
|
|
||||||
# ...
|
|
||||||
tenant_network_types = vxlan
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
* In the ``[ml2]`` section, enable the Linux bridge and layer-2 population
|
|
||||||
mechanisms:
|
|
||||||
|
|
||||||
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[ml2]
|
|
||||||
# ...
|
|
||||||
mechanism_drivers = linuxbridge,l2population
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
.. warning::
|
|
||||||
|
|
||||||
After you configure the ML2 plug-in, removing values in the
|
|
||||||
``type_drivers`` option can lead to database inconsistency.
|
|
||||||
|
|
||||||
.. note::
|
|
||||||
|
|
||||||
The Linux bridge agent only supports VXLAN overlay networks.
|
|
||||||
|
|
||||||
* In the ``[ml2]`` section, enable the port security extension driver:
|
|
||||||
|
|
||||||
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[ml2]
|
|
||||||
# ...
|
|
||||||
extension_drivers = port_security
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
* In the ``[ml2_type_flat]`` section, configure the provider virtual
|
|
||||||
network as a flat network:
|
|
||||||
|
|
||||||
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[ml2_type_flat]
|
|
||||||
# ...
|
|
||||||
flat_networks = provider
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
* In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier
|
|
||||||
range for self-service networks:
|
|
||||||
|
|
||||||
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[ml2_type_vxlan]
|
|
||||||
# ...
|
|
||||||
vni_ranges = 1:1000
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
* In the ``[securitygroup]`` section, enable :term:`ipset` to increase
|
|
||||||
efficiency of security group rules:
|
|
||||||
|
|
||||||
.. path /etc/neutron/plugins/ml2/ml2_conf.ini
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[securitygroup]
|
|
||||||
# ...
|
|
||||||
enable_ipset = true
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
Configure the Linux bridge agent
|
|
||||||
--------------------------------
|
|
||||||
|
|
||||||
The Linux bridge agent builds layer-2 (bridging and switching) virtual
|
|
||||||
networking infrastructure for instances and handles security groups.
|
|
||||||
|
|
||||||
* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
|
|
||||||
complete the following actions:
|
|
||||||
|
|
||||||
* In the ``[linux_bridge]`` section, map the provider virtual network to the
|
|
||||||
provider physical network interface:
|
|
||||||
|
|
||||||
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[linux_bridge]
|
|
||||||
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
|
|
||||||
provider physical network interface. See :ref:`environment-networking`
|
|
||||||
for more information.
|
|
||||||
|
|
||||||
* In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
|
|
||||||
IP address of the physical network interface that handles overlay
|
|
||||||
networks, and enable layer-2 population:
|
|
||||||
|
|
||||||
.. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
|
|
||||||
.. code-block:: ini
|
|
||||||
|
|
||||||
[vxlan]
|
|
||||||
enable_vxlan = true
|
|
||||||
local_ip = OVERLAY_INTERFACE_IP_ADDRESS
|
|
||||||
l2_population = true
|
|
||||||
|
|
||||||
.. end
|
|
||||||
|
|
||||||
Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
|
|
||||||
underlying physical network interface that handles overlay networks. The
|
|
||||||
example architecture uses the management interface to tunnel traffic to
|
|
||||||
the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with
|
|
||||||
the management IP address of the controller node. See
|
|
||||||
:ref:`environment-networking` for more information.
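
    To find that address, a command such as the following (a sketch; the
    interface names depend on your environment) lists the IPv4 addresses
    on the node so you can pick the management one:

    .. code-block:: console

       # ip -4 addr show

    .. end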

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_security_group = true
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

    .. end

Configure the layer-3 agent
---------------------------

The :term:`Layer-3 (L3) agent` provides routing and NAT services for
self-service virtual networks.

* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver
    and external network bridge:

    .. path /etc/neutron/l3_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       interface_driver = linuxbridge

    .. end

Configure the DHCP agent
------------------------

The :term:`DHCP agent` provides DHCP services for virtual networks.

* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
    Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
    networks can access metadata over the network:

    .. path /etc/neutron/dhcp_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       interface_driver = linuxbridge
       dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
       enable_isolated_metadata = true

    .. end

Return to *Networking controller node configuration*.

@ -1,336 +0,0 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install and configure the Networking components on the *controller* node.

Install the components
----------------------

.. code-block:: console

   # apt install neutron-server neutron-plugin-ml2 \
     neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent \
     neutron-metadata-agent

.. end
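
If you want to confirm the installation, a check such as the following
(a sketch, assuming the package names above) should show each package
as installed:

.. code-block:: console

   # dpkg -l neutron-server neutron-plugin-ml2 \
     neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent \
     neutron-metadata-agent

.. end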

Configure the server component
------------------------------

* Edit the ``/etc/neutron/neutron.conf`` file and complete the following
  actions:

  * In the ``[database]`` section, configure database access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [database]
       # ...
       connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron

    .. end

    Replace ``NEUTRON_DBPASS`` with the password you chose for the
    database.

    .. note::

       Comment out or remove any other ``connection`` options in the
       ``[database]`` section.

  * In the ``[DEFAULT]`` section, enable the Modular Layer 2 (ML2)
    plug-in, router service, and overlapping IP addresses:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       core_plugin = ml2
       service_plugins = router
       allow_overlapping_ips = true

    .. end

  * In the ``[DEFAULT]`` section, configure ``RabbitMQ``
    message queue access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       transport_url = rabbit://openstack:RABBIT_PASS@controller

    .. end

    Replace ``RABBIT_PASS`` with the password you chose for the
    ``openstack`` account in RabbitMQ.

  * In the ``[DEFAULT]`` and ``[keystone_authtoken]`` sections, configure
    Identity service access:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       auth_strategy = keystone

       [keystone_authtoken]
       # ...
       auth_uri = http://controller:5000
       auth_url = http://controller:35357
       memcached_servers = controller:11211
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       project_name = service
       username = neutron
       password = NEUTRON_PASS

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    .. note::

       Comment out or remove any other options in the
       ``[keystone_authtoken]`` section.

  * In the ``[DEFAULT]`` and ``[nova]`` sections, configure Networking to
    notify Compute of network topology changes:

    .. path /etc/neutron/neutron.conf
    .. code-block:: ini

       [DEFAULT]
       # ...
       notify_nova_on_port_status_changes = true
       notify_nova_on_port_data_changes = true

       [nova]
       # ...
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = nova
       password = NOVA_PASS

    .. end

    Replace ``NOVA_PASS`` with the password you chose for the ``nova``
    user in the Identity service.

Configure the Modular Layer 2 (ML2) plug-in
-------------------------------------------

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 (bridging
and switching) virtual networking infrastructure for instances.

* Edit the ``/etc/neutron/plugins/ml2/ml2_conf.ini`` file and complete the
  following actions:

  * In the ``[ml2]`` section, enable flat, VLAN, and VXLAN networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       type_drivers = flat,vlan,vxlan

    .. end

  * In the ``[ml2]`` section, enable VXLAN self-service networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       tenant_network_types = vxlan

    .. end

  * In the ``[ml2]`` section, enable the Linux bridge and layer-2 population
    mechanisms:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       mechanism_drivers = linuxbridge,l2population

    .. end

    .. warning::

       After you configure the ML2 plug-in, removing values in the
       ``type_drivers`` option can lead to database inconsistency.

    .. note::

       The Linux bridge agent only supports VXLAN overlay networks.

  * In the ``[ml2]`` section, enable the port security extension driver:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2]
       # ...
       extension_drivers = port_security

    .. end

  * In the ``[ml2_type_flat]`` section, configure the provider virtual
    network as a flat network:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2_type_flat]
       # ...
       flat_networks = provider

    .. end

  * In the ``[ml2_type_vxlan]`` section, configure the VXLAN network identifier
    range for self-service networks:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [ml2_type_vxlan]
       # ...
       vni_ranges = 1:1000

    .. end

  * In the ``[securitygroup]`` section, enable :term:`ipset` to increase
    efficiency of security group rules:

    .. path /etc/neutron/plugins/ml2/ml2_conf.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_ipset = true

    .. end

Configure the Linux bridge agent
--------------------------------

The Linux bridge agent builds layer-2 (bridging and switching) virtual
networking infrastructure for instances and handles security groups.

* Edit the ``/etc/neutron/plugins/ml2/linuxbridge_agent.ini`` file and
  complete the following actions:

  * In the ``[linux_bridge]`` section, map the provider virtual network to the
    provider physical network interface:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [linux_bridge]
       physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

    .. end

    Replace ``PROVIDER_INTERFACE_NAME`` with the name of the underlying
    provider physical network interface. See :ref:`environment-networking`
    for more information.

  * In the ``[vxlan]`` section, enable VXLAN overlay networks, configure the
    IP address of the physical network interface that handles overlay
    networks, and enable layer-2 population:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [vxlan]
       enable_vxlan = true
       local_ip = OVERLAY_INTERFACE_IP_ADDRESS
       l2_population = true

    .. end

    Replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with the IP address of the
    underlying physical network interface that handles overlay networks. The
    example architecture uses the management interface to tunnel traffic to
    the other nodes. Therefore, replace ``OVERLAY_INTERFACE_IP_ADDRESS`` with
    the management IP address of the controller node. See
    :ref:`environment-networking` for more information.

  * In the ``[securitygroup]`` section, enable security groups and
    configure the Linux bridge :term:`iptables` firewall driver:

    .. path /etc/neutron/plugins/ml2/linuxbridge_agent.ini
    .. code-block:: ini

       [securitygroup]
       # ...
       enable_security_group = true
       firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

    .. end
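
    Because this driver filters bridged traffic with iptables, the kernel
    must expose the bridge netfilter sysctls. A quick hedged check (the
    ``br_netfilter`` module name is the usual one on recent kernels; both
    values should report ``1``):

    .. code-block:: console

       # modprobe br_netfilter
       # sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables

    .. end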

Configure the layer-3 agent
---------------------------

The :term:`Layer-3 (L3) agent` provides routing and NAT services for
self-service virtual networks.

* Edit the ``/etc/neutron/l3_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver
    and external network bridge:

    .. path /etc/neutron/l3_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       interface_driver = linuxbridge

    .. end

Configure the DHCP agent
------------------------

The :term:`DHCP agent` provides DHCP services for virtual networks.

* Edit the ``/etc/neutron/dhcp_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the Linux bridge interface driver,
    Dnsmasq DHCP driver, and enable isolated metadata so instances on provider
    networks can access metadata over the network:

    .. path /etc/neutron/dhcp_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       interface_driver = linuxbridge
       dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
       enable_isolated_metadata = true

    .. end

Return to *Networking controller node configuration*.

@ -1,9 +0,0 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Install and configure the Networking components on the *controller* node.

.. toctree::
   :glob:

   neutron-controller-install-option2-*

@ -1,329 +0,0 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Prerequisites
-------------

Before you configure the OpenStack Networking (neutron) service, you
must create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        $ mysql -u root -p

     .. end

   * Create the ``neutron`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE neutron;

     .. end

   * Grant proper access to the ``neutron`` database, replacing
     ``NEUTRON_DBPASS`` with a suitable password:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
          IDENTIFIED BY 'NEUTRON_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
          IDENTIFIED BY 'NEUTRON_DBPASS';

     .. end

   * Exit the database access client.
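
   * Optionally, verify that the new account works; this check is a sketch
     (the ``-h controller`` host and the ``NEUTRON_DBPASS`` prompt assume
     the layout used throughout this guide):

     .. code-block:: console

        $ mysql -u neutron -p -h controller neutron -e 'SELECT 1;'

     .. end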

#. Source the ``admin`` credentials to gain access to admin-only CLI
   commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create the ``neutron`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt neutron

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | fdb0f541e28141719b6a43c8944bf1fb |
        | name                | neutron                          |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``neutron`` user:

     .. code-block:: console

        $ openstack role add --project service --user neutron admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``neutron`` service entity:

     .. code-block:: console

        $ openstack service create --name neutron \
          --description "OpenStack Networking" network

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Networking             |
        | enabled     | True                             |
        | id          | f71529314dab4a4d8eca427e701d209e |
        | name        | neutron                          |
        | type        | network                          |
        +-------------+----------------------------------+

     .. end

#. Create the Networking service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        network public http://controller:9696

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 85d80a6d02fc4b7683f611d7fc1493a3 |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | f71529314dab4a4d8eca427e701d209e |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        network internal http://controller:9696

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 09753b537ac74422a68d2d791cf3714f |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | f71529314dab4a4d8eca427e701d209e |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        network admin http://controller:9696

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 1ee14289c9374dffb5db92a5c112fc4e |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | f71529314dab4a4d8eca427e701d209e |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+

   .. end
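
   To confirm all three endpoints exist, you can list them; this is a
   sketch (assuming the ``--service`` filter accepts the ``network``
   service type used above):

   .. code-block:: console

      $ openstack endpoint list --service network

   .. end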

Configure networking options
----------------------------

You can deploy the Networking service using one of two architectures
represented by options 1 and 2.

Option 1 deploys the simplest possible architecture, which only supports
attaching instances to provider (external) networks. It provides no
self-service (private) networks, routers, or floating IP addresses. Only
the ``admin`` or other privileged user can manage provider networks.

Option 2 augments option 1 with layer-3 services that support attaching
instances to self-service networks. The ``demo`` or other unprivileged
user can manage self-service networks including routers that provide
connectivity between self-service and provider networks. Additionally,
floating IP addresses provide connectivity to instances using self-service
networks from external networks such as the Internet.

Self-service networks typically use overlay networks. Overlay network
protocols such as VXLAN include additional headers that increase overhead
and decrease space available for the payload or user data. Without knowledge
of the virtual network infrastructure, instances attempt to send packets
using the default Ethernet :term:`maximum transmission unit (MTU)` of 1500
bytes. The Networking service automatically provides the correct MTU value
to instances via DHCP. However, some cloud images do not use DHCP or ignore
the DHCP MTU option and require configuration using metadata or a script.
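
For example, inside such an instance you could set the MTU by hand; this
is a sketch that assumes the guest interface is named ``eth0`` and that
VXLAN overhead leaves 1450 bytes available:

.. code-block:: console

   # ip link set dev eth0 mtu 1450
   # ip link show dev eth0

.. end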

.. note::

   Option 2 also supports attaching instances to provider networks.

Choose one of the following networking options to configure services
specific to it. Afterwards, return here and proceed to
:ref:`neutron-controller-metadata-agent-rdo`.

.. toctree::
   :maxdepth: 1

   neutron-controller-install-option1.rst
   neutron-controller-install-option2.rst

.. _neutron-controller-metadata-agent-rdo:

Configure the metadata agent
----------------------------

The :term:`metadata agent <Metadata agent>` provides configuration information
such as credentials to instances.

* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the metadata host and shared
    secret:

    .. path /etc/neutron/metadata_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       nova_metadata_ip = controller
       metadata_proxy_shared_secret = METADATA_SECRET

    .. end

    Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy.
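
    One way to generate such a secret (a sketch; any sufficiently random
    string works) is:

    .. code-block:: console

       $ openssl rand -hex 10

    .. end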

Configure the Compute service to use the Networking service
-----------------------------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and perform the following actions:

  * In the ``[neutron]`` section, configure access parameters, enable the
    metadata proxy, and configure the secret:

    .. path /etc/nova/nova.conf
    .. code-block:: ini

       [neutron]
       # ...
       url = http://controller:9696
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = neutron
       password = NEUTRON_PASS
       service_metadata_proxy = true
       metadata_proxy_shared_secret = METADATA_SECRET

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    Replace ``METADATA_SECRET`` with the secret you chose for the metadata
    proxy.

Finalize installation
---------------------

#. The Networking service initialization scripts expect a symbolic link
   ``/etc/neutron/plugin.ini`` pointing to the ML2 plug-in configuration
   file, ``/etc/neutron/plugins/ml2/ml2_conf.ini``. If this symbolic
   link does not exist, create it using the following command:

   .. code-block:: console

      # ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

   .. end

#. Populate the database:

   .. code-block:: console

      # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

   .. end

   .. note::

      Database population occurs later for Networking because the script
      requires complete server and plug-in configuration files.
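
   To confirm the migration ran, ``neutron-db-manage`` also offers a
   ``current`` subcommand; a hedged invocation with the same configuration
   files looks like:

   .. code-block:: console

      # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/ml2_conf.ini current" neutron

   .. end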

#. Restart the Compute API service:

   .. code-block:: console

      # systemctl restart openstack-nova-api.service

   .. end

#. Start the Networking services and configure them to start when the system
   boots.

   For both networking options:

   .. code-block:: console

      # systemctl enable neutron-server.service \
        neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
        neutron-metadata-agent.service
      # systemctl start neutron-server.service \
        neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
        neutron-metadata-agent.service

   .. end

   For networking option 2, also enable and start the layer-3 service:

   .. code-block:: console

      # systemctl enable neutron-l3-agent.service
      # systemctl start neutron-l3-agent.service

   .. end

@ -1,314 +0,0 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Prerequisites
-------------

Before you configure the OpenStack Networking (neutron) service, you
must create a database, service credentials, and API endpoints.

#. To create the database, complete these steps:

   * Use the database access client to connect to the database
     server as the ``root`` user:

     .. code-block:: console

        # mysql

     .. end

   * Create the ``neutron`` database:

     .. code-block:: console

        MariaDB [(none)]> CREATE DATABASE neutron;

     .. end

   * Grant proper access to the ``neutron`` database, replacing
     ``NEUTRON_DBPASS`` with a suitable password:

     .. code-block:: console

        MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' \
          IDENTIFIED BY 'NEUTRON_DBPASS';
        MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
          IDENTIFIED BY 'NEUTRON_DBPASS';

     .. end

   * Exit the database access client.

#. Source the ``admin`` credentials to gain access to admin-only CLI
   commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. To create the service credentials, complete these steps:

   * Create the ``neutron`` user:

     .. code-block:: console

        $ openstack user create --domain default --password-prompt neutron

        User Password:
        Repeat User Password:
        +---------------------+----------------------------------+
        | Field               | Value                            |
        +---------------------+----------------------------------+
        | domain_id           | default                          |
        | enabled             | True                             |
        | id                  | fdb0f541e28141719b6a43c8944bf1fb |
        | name                | neutron                          |
        | options             | {}                               |
        | password_expires_at | None                             |
        +---------------------+----------------------------------+

     .. end

   * Add the ``admin`` role to the ``neutron`` user:

     .. code-block:: console

        $ openstack role add --project service --user neutron admin

     .. end

     .. note::

        This command provides no output.

   * Create the ``neutron`` service entity:

     .. code-block:: console

        $ openstack service create --name neutron \
          --description "OpenStack Networking" network

        +-------------+----------------------------------+
        | Field       | Value                            |
        +-------------+----------------------------------+
        | description | OpenStack Networking             |
        | enabled     | True                             |
        | id          | f71529314dab4a4d8eca427e701d209e |
        | name        | neutron                          |
        | type        | network                          |
        +-------------+----------------------------------+

     .. end

#. Create the Networking service API endpoints:

   .. code-block:: console

      $ openstack endpoint create --region RegionOne \
        network public http://controller:9696

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 85d80a6d02fc4b7683f611d7fc1493a3 |
      | interface    | public                           |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | f71529314dab4a4d8eca427e701d209e |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        network internal http://controller:9696

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 09753b537ac74422a68d2d791cf3714f |
      | interface    | internal                         |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | f71529314dab4a4d8eca427e701d209e |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+

      $ openstack endpoint create --region RegionOne \
        network admin http://controller:9696

      +--------------+----------------------------------+
      | Field        | Value                            |
      +--------------+----------------------------------+
      | enabled      | True                             |
      | id           | 1ee14289c9374dffb5db92a5c112fc4e |
      | interface    | admin                            |
      | region       | RegionOne                        |
      | region_id    | RegionOne                        |
      | service_id   | f71529314dab4a4d8eca427e701d209e |
      | service_name | neutron                          |
      | service_type | network                          |
      | url          | http://controller:9696           |
      +--------------+----------------------------------+

   .. end

Configure networking options
----------------------------

You can deploy the Networking service using one of two architectures
represented by options 1 and 2.

Option 1 deploys the simplest possible architecture, which only supports
attaching instances to provider (external) networks. It provides no
self-service (private) networks, routers, or floating IP addresses. Only
the ``admin`` or other privileged user can manage provider networks.

Option 2 augments option 1 with layer-3 services that support attaching
instances to self-service networks. The ``demo`` or other unprivileged
user can manage self-service networks including routers that provide
connectivity between self-service and provider networks. Additionally,
floating IP addresses provide connectivity to instances using self-service
networks from external networks such as the Internet.

Self-service networks typically use overlay networks. Overlay network
protocols such as VXLAN include additional headers that increase overhead
and decrease space available for the payload or user data. Without knowledge
of the virtual network infrastructure, instances attempt to send packets
using the default Ethernet :term:`maximum transmission unit (MTU)` of 1500
bytes. The Networking service automatically provides the correct MTU value
to instances via DHCP. However, some cloud images do not use DHCP or ignore
the DHCP MTU option and require configuration using metadata or a script.

.. note::

   Option 2 also supports attaching instances to provider networks.

Choose one of the following networking options to configure services
specific to it. Afterwards, return here and proceed to
:ref:`neutron-controller-metadata-agent-ubuntu`.

.. toctree::
   :maxdepth: 1

   neutron-controller-install-option1.rst
   neutron-controller-install-option2.rst

.. _neutron-controller-metadata-agent-ubuntu:

Configure the metadata agent
----------------------------

The :term:`metadata agent <Metadata agent>` provides configuration information
such as credentials to instances.

* Edit the ``/etc/neutron/metadata_agent.ini`` file and complete the following
  actions:

  * In the ``[DEFAULT]`` section, configure the metadata host and shared
    secret:

    .. path /etc/neutron/metadata_agent.ini
    .. code-block:: ini

       [DEFAULT]
       # ...
       nova_metadata_ip = controller
       metadata_proxy_shared_secret = METADATA_SECRET

    .. end

    Replace ``METADATA_SECRET`` with a suitable secret for the metadata proxy.

Configure the Compute service to use the Networking service
-----------------------------------------------------------

* Edit the ``/etc/nova/nova.conf`` file and perform the following actions:

  * In the ``[neutron]`` section, configure access parameters, enable the
    metadata proxy, and configure the secret:

    .. path /etc/nova/nova.conf
    .. code-block:: ini

       [neutron]
       # ...
       url = http://controller:9696
       auth_url = http://controller:35357
       auth_type = password
       project_domain_name = default
       user_domain_name = default
       region_name = RegionOne
       project_name = service
       username = neutron
       password = NEUTRON_PASS
       service_metadata_proxy = true
       metadata_proxy_shared_secret = METADATA_SECRET

    .. end

    Replace ``NEUTRON_PASS`` with the password you chose for the ``neutron``
    user in the Identity service.

    Replace ``METADATA_SECRET`` with the secret you chose for the metadata
    proxy.

Finalize installation
---------------------

#. Populate the database:

   .. code-block:: console

      # su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

   .. end

   .. note::

      Database population occurs later for Networking because the script
      requires complete server and plug-in configuration files.

#. Restart the Compute API service:

   .. code-block:: console

      # service nova-api restart

   .. end

#. Restart the Networking services.

   For both networking options:

   .. code-block:: console

      # service neutron-server restart
      # service neutron-linuxbridge-agent restart
      # service neutron-dhcp-agent restart
      # service neutron-metadata-agent restart

   .. end

   For networking option 2, also restart the layer-3 service:

   .. code-block:: console

      # service neutron-l3-agent restart

   .. end
@ -1,9 +0,0 @@
Install and configure controller node
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. toctree::

   neutron-controller-install-debian
   neutron-controller-install-obs
   neutron-controller-install-rdo
   neutron-controller-install-ubuntu

@ -1,7 +0,0 @@
==========
Next steps
==========

Your OpenStack environment now includes the core components necessary
to launch a basic instance. You can :ref:`launch-instance` or add more
OpenStack services to your environment.

@ -1,22 +0,0 @@
Networking Option 1: Provider networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* List agents to verify successful launch of the neutron agents:

  .. code-block:: console

     $ openstack network agent list

     +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
     | ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
     +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
     | 0400c2f6-4d3b-44bc-89fa-99093432f3bf | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
     | 83cf853d-a2f2-450a-99d7-e9c6fc08f4c3 | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
     | ec302e51-6101-43cf-9f19-88a78613cbee | Linux bridge agent | compute    | None              | True  | UP    | neutron-linuxbridge-agent |
     | fcb9bc6e-22b1-43bc-9054-272dd517d025 | Linux bridge agent | controller | None              | True  | UP    | neutron-linuxbridge-agent |
     +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

  .. end

  The output should indicate three agents on the controller node and one
  agent on each compute node.

@ -1,23 +0,0 @@
Networking Option 2: Self-service networks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

* List agents to verify successful launch of the neutron agents:

  .. code-block:: console

     $ openstack network agent list

     +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
     | ID                                   | Agent Type         | Host       | Availability Zone | Alive | State | Binary                    |
     +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
     | f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent     | controller | None              | True  | UP    | neutron-metadata-agent    |
     | 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | None              | True  | UP    | neutron-linuxbridge-agent |
     | 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1   | None              | True  | UP    | neutron-linuxbridge-agent |
     | 830344ff-dc36-4956-84f4-067af667a0dc | L3 agent           | controller | nova              | True  | UP    | neutron-l3-agent          |
     | dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent         | controller | nova              | True  | UP    | neutron-dhcp-agent        |
     +--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

  .. end

  The output should indicate four agents on the controller node and one
  agent on each compute node.

@ -1,128 +0,0 @@
Verify operation
~~~~~~~~~~~~~~~~

.. note::

   Perform these commands on the controller node.

#. Source the ``admin`` credentials to gain access to admin-only CLI
   commands:

   .. code-block:: console

      $ . admin-openrc

   .. end

#. List loaded extensions to verify successful launch of the
   ``neutron-server`` process:

   .. code-block:: console

      $ openstack extension list --network

      +---------------------------+---------------------------+----------------------------+
      | Name                      | Alias                     | Description                |
      +---------------------------+---------------------------+----------------------------+
      | Default Subnetpools       | default-subnetpools       | Provides ability to mark   |
      |                           |                           | and use a subnetpool as    |
      |                           |                           | the default                |
      | Availability Zone         | availability_zone         | The availability zone      |
      |                           |                           | extension.                 |
      | Network Availability Zone | network_availability_zone | Availability zone support  |
      |                           |                           | for network.               |
      | Port Binding              | binding                   | Expose port bindings of a  |
      |                           |                           | virtual port to external   |
      |                           |                           | application                |
      | agent                     | agent                     | The agent management       |
      |                           |                           | extension.                 |
      | Subnet Allocation         | subnet_allocation         | Enables allocation of      |
      |                           |                           | subnets from a subnet pool |
      | DHCP Agent Scheduler      | dhcp_agent_scheduler      | Schedule networks among    |
      |                           |                           | dhcp agents                |
      | Tag support               | tag                       | Enables to set tag on      |
      |                           |                           | resources.                 |
      | Neutron external network  | external-net              | Adds external network      |
      |                           |                           | attribute to network       |
      |                           |                           | resource.                  |
      | Neutron Service Flavors   | flavors                   | Flavor specification for   |
      |                           |                           | Neutron advanced services  |
      | Network MTU               | net-mtu                   | Provides MTU attribute for |
      |                           |                           | a network resource.        |
      | Network IP Availability   | network-ip-availability   | Provides IP availability   |
      |                           |                           | data for each network and  |
      |                           |                           | subnet.                    |
      | Quota management support  | quotas                    | Expose functions for       |
      |                           |                           | quotas management per      |
      |                           |                           | tenant                     |
      | Provider Network          | provider                  | Expose mapping of virtual  |
      |                           |                           | networks to physical       |
      |                           |                           | networks                   |
      | Multi Provider Network    | multi-provider            | Expose mapping of virtual  |
      |                           |                           | networks to multiple       |
      |                           |                           | physical networks          |
      | Address scope             | address-scope             | Address scopes extension.  |
      | Subnet service types      | subnet-service-types      | Provides ability to set    |
      |                           |                           | the subnet service_types   |
      |                           |                           | field                      |
      | Resource timestamps       | standard-attr-timestamp   | Adds created_at and        |
      |                           |                           | updated_at fields to all   |
      |                           |                           | Neutron resources that     |
      |                           |                           | have Neutron standard      |
      |                           |                           | attributes.                |
      | Neutron Service Type      | service-type              | API for retrieving service |
      | Management                |                           | providers for Neutron      |
      |                           |                           | advanced services          |
      | Tag support for           | tag-ext                   | Extends tag support to     |
      | resources: subnet,        |                           | more L2 and L3 resources.  |
      | subnetpool, port, router  |                           |                            |
      | Neutron Extra DHCP opts   | extra_dhcp_opt            | Extra options              |
      |                           |                           | configuration for DHCP.    |
      |                           |                           | For example PXE boot       |
      |                           |                           | options to DHCP clients    |
      |                           |                           | can be specified (e.g.     |
      |                           |                           | tftp-server, server-ip-    |
      |                           |                           | address, bootfile-name)    |
      | Resource revision numbers | standard-attr-revisions   | This extension will        |
      |                           |                           | display the revision       |
      |                           |                           | number of neutron          |
      |                           |                           | resources.                 |
      | Pagination support        | pagination                | Extension that indicates   |
      |                           |                           | that pagination is         |
      |                           |                           | enabled.                   |
      | Sorting support           | sorting                   | Extension that indicates   |
      |                           |                           | that sorting is enabled.   |
      | security-group            | security-group            | The security groups        |
      |                           |                           | extension.                 |
      | RBAC Policies             | rbac-policies             | Allows creation and        |
      |                           |                           | modification of policies   |
      |                           |                           | that control tenant access |
      |                           |                           | to resources.              |
      | standard-attr-description | standard-attr-description | Extension to add           |
      |                           |                           | descriptions to standard   |
      |                           |                           | attributes                 |
      | Port Security             | port-security             | Provides port security     |
      | Allowed Address Pairs     | allowed-address-pairs     | Provides allowed address   |
      |                           |                           | pairs                      |
      | project_id field enabled  | project-id                | Extension that indicates   |
      |                           |                           | that project_id field is   |
      |                           |                           | enabled.                   |
      +---------------------------+---------------------------+----------------------------+

   .. end

   .. note::

      Actual output may differ slightly from this example.

You can perform further testing of your networking using the
`neutron-sanity-check command line client <https://docs.openstack.org/cli-reference/neutron-sanity-check.html>`_.
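
For example, a hedged invocation (pointing the tool at the same
configuration files the services use) looks like:

.. code-block:: console

   # neutron-sanity-check --config-file /etc/neutron/neutron.conf \
     --config-file /etc/neutron/plugins/ml2/ml2_conf.ini

.. end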

Use the verification section for the networking option that you chose to
deploy.

.. toctree::

   neutron-verify-option1.rst
   neutron-verify-option2.rst