From 08f0c4aa465d6ed46ab4f140f454a9186f5ee86d Mon Sep 17 00:00:00 2001 From: Gudrun Wolfgram Date: Thu, 25 Feb 2016 16:16:54 -0600 Subject: [PATCH] Cloud Admin Guide CLI chapter Moving (copying) Admin User Guide CLI content to the Cloud Admin Guide as a part of the reorganization goal. This patch does not include any new or original content. This patch is Part 1 to create a new command-line client section for Admin Users in the Cloud Admin Guide, as discussed in the User Guide Specialty team meetings. 1) Creating a new CLI section with cli.rst 2) Moving the non-common files from user-guide-admin to admin-guide-cloud, along with their sub-files. (Rename user-guide-admin files to use the cli_ prefix in admin-guide-cloud.) manage_projects_users_and_roles.rst nova_cli_manage_projects_security.rst cli_manage_services.rst cli_manage_shares.rst cli_manage_flavors.rst cli_admin_manage_environment.rst cli_set_quotas.rst analyzing-log-files-with-swift-cli.rst cli_cinder_scheduling.rst 3) Attempt updates to several links. 
Change-Id: I97f4ced4f5033c7e0f3bf00c410288a75699d110 Implements: blueprint user-guides-reorganised --- doc/admin-guide-cloud/source/cli.rst | 16 + .../source/cli_admin_manage_environment.rst | 16 + .../source/cli_admin_manage_ip_addresses.rst | 107 ++++++ .../source/cli_admin_manage_stacks.rst | 35 ++ .../cli_analyzing-log-files-with-swift.rst | 210 +++++++++++ .../source/cli_cinder_quotas.rst | 163 ++++++++ .../source/cli_cinder_scheduling.rst | 53 +++ .../source/cli_keystone_manage_services.rst | 155 ++++++++ .../source/cli_manage_flavors.rst | 150 ++++++++ .../cli_manage_projects_users_and_roles.rst | 351 ++++++++++++++++++ .../source/cli_manage_services.rst | 9 + .../source/cli_manage_shares.rst | 40 ++ .../source/cli_networking_advanced_quotas.rst | 321 ++++++++++++++++ .../source/cli_nova_evacuate.rst | 51 +++ .../cli_nova_manage_projects_security.rst | 206 ++++++++++ .../source/cli_nova_manage_services.rst | 77 ++++ .../source/cli_nova_migrate.rst | 78 ++++ .../source/cli_nova_numa_libvirt.rst | 24 ++ .../source/cli_nova_specify_host.rst | 36 ++ .../source/cli_set_compute_quotas.rst | 299 +++++++++++++++ .../source/cli_set_quotas.rst | 54 +++ doc/admin-guide-cloud/source/index.rst | 1 + 22 files changed, 2452 insertions(+) create mode 100644 doc/admin-guide-cloud/source/cli.rst create mode 100644 doc/admin-guide-cloud/source/cli_admin_manage_environment.rst create mode 100644 doc/admin-guide-cloud/source/cli_admin_manage_ip_addresses.rst create mode 100644 doc/admin-guide-cloud/source/cli_admin_manage_stacks.rst create mode 100644 doc/admin-guide-cloud/source/cli_analyzing-log-files-with-swift.rst create mode 100644 doc/admin-guide-cloud/source/cli_cinder_quotas.rst create mode 100644 doc/admin-guide-cloud/source/cli_cinder_scheduling.rst create mode 100644 doc/admin-guide-cloud/source/cli_keystone_manage_services.rst create mode 100644 doc/admin-guide-cloud/source/cli_manage_flavors.rst create mode 100644 
doc/admin-guide-cloud/source/cli_manage_projects_users_and_roles.rst create mode 100644 doc/admin-guide-cloud/source/cli_manage_services.rst create mode 100644 doc/admin-guide-cloud/source/cli_manage_shares.rst create mode 100644 doc/admin-guide-cloud/source/cli_networking_advanced_quotas.rst create mode 100644 doc/admin-guide-cloud/source/cli_nova_evacuate.rst create mode 100644 doc/admin-guide-cloud/source/cli_nova_manage_projects_security.rst create mode 100644 doc/admin-guide-cloud/source/cli_nova_manage_services.rst create mode 100644 doc/admin-guide-cloud/source/cli_nova_migrate.rst create mode 100644 doc/admin-guide-cloud/source/cli_nova_numa_libvirt.rst create mode 100644 doc/admin-guide-cloud/source/cli_nova_specify_host.rst create mode 100644 doc/admin-guide-cloud/source/cli_set_compute_quotas.rst create mode 100644 doc/admin-guide-cloud/source/cli_set_quotas.rst diff --git a/doc/admin-guide-cloud/source/cli.rst b/doc/admin-guide-cloud/source/cli.rst new file mode 100644 index 0000000000..24f5b66bc1 --- /dev/null +++ b/doc/admin-guide-cloud/source/cli.rst @@ -0,0 +1,16 @@ +============================== +OpenStack command-line clients +============================== + +.. toctree:: + :maxdepth: 2 + + cli_manage_projects_users_and_roles.rst + cli_nova_manage_projects_security.rst + cli_manage_services.rst + cli_manage_shares.rst + cli_manage_flavors.rst + cli_admin_manage_environment.rst + cli_set_quotas.rst + cli_analyzing-log-files-with-swift.rst + cli_cinder_scheduling.rst diff --git a/doc/admin-guide-cloud/source/cli_admin_manage_environment.rst b/doc/admin-guide-cloud/source/cli_admin_manage_environment.rst new file mode 100644 index 0000000000..3b2ff78a21 --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_admin_manage_environment.rst @@ -0,0 +1,16 @@ +================================ +Manage the OpenStack environment +================================ + +This section includes tasks specific to the OpenStack environment. + +.. 
toctree:: + :maxdepth: 2 + + cli_nova_specify_host.rst + cli_nova_numa_libvirt.rst + cli_nova_evacuate.rst + cli_nova_migrate.rst + cli_admin_manage_ip_addresses.rst + cli_admin_manage_stacks.rst + common/nova_show_usage_statistics_for_hosts_instances.rst diff --git a/doc/admin-guide-cloud/source/cli_admin_manage_ip_addresses.rst b/doc/admin-guide-cloud/source/cli_admin_manage_ip_addresses.rst new file mode 100644 index 0000000000..02a4a7e365 --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_admin_manage_ip_addresses.rst @@ -0,0 +1,107 @@ +=================== +Manage IP addresses +=================== + +Each instance has a private, fixed IP address that is assigned when +the instance is launched. In addition, an instance can have a public +or floating IP address. Private IP addresses are used for +communication between instances, and public IP addresses are used +for communication with networks outside the cloud, including the +Internet. + +- By default, both administrative and end users can associate floating IP + addresses with projects and instances. You can change user permissions for + managing IP addresses by updating the ``/etc/nova/policy.json`` + file. For basic floating-IP procedures, refer to the ``Manage IP + Addresses`` section in the `OpenStack End User Guide `_. + +- For details on creating public networks using OpenStack Networking + (``neutron``), refer to `Advanced features through API extensions + `_. + No floating IP addresses are created by default in OpenStack Networking. + +As an administrator using legacy networking (``nova-network``), you +can use the following bulk commands to list, create, and delete ranges +of floating IP addresses. These addresses can then be associated with +instances by end users. + +List addresses for all projects +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To list all floating IP addresses for all projects, run: + +.. 
code-block:: console + + $ nova floating-ip-bulk-list + +------------+---------------+---------------+--------+-----------+ + | project_id | address | instance_uuid | pool | interface | + +------------+---------------+---------------+--------+-----------+ + | None | 172.24.4.225 | None | public | eth0 | + | None | 172.24.4.226 | None | public | eth0 | + | None | 172.24.4.227 | None | public | eth0 | + | None | 172.24.4.228 | None | public | eth0 | + | None | 172.24.4.229 | None | public | eth0 | + | None | 172.24.4.230 | None | public | eth0 | + | None | 172.24.4.231 | None | public | eth0 | + | None | 172.24.4.232 | None | public | eth0 | + | None | 172.24.4.233 | None | public | eth0 | + | None | 172.24.4.234 | None | public | eth0 | + | None | 172.24.4.235 | None | public | eth0 | + | None | 172.24.4.236 | None | public | eth0 | + | None | 172.24.4.237 | None | public | eth0 | + | None | 172.24.4.238 | None | public | eth0 | + | None | 192.168.253.1 | None | test | eth0 | + | None | 192.168.253.2 | None | test | eth0 | + | None | 192.168.253.3 | None | test | eth0 | + | None | 192.168.253.4 | None | test | eth0 | + | None | 192.168.253.5 | None | test | eth0 | + | None | 192.168.253.6 | None | test | eth0 | + +------------+---------------+---------------+--------+-----------+ + +Bulk create floating IP addresses +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To create a range of floating IP addresses, run: + +.. code-block:: console + + $ nova floating-ip-bulk-create [--pool POOL_NAME] [--interface INTERFACE] RANGE_TO_CREATE + +For example: + +.. code-block:: console + + $ nova floating-ip-bulk-create --pool test 192.168.1.56/29 + +By default, ``floating-ip-bulk-create`` uses the +``public`` pool and ``eth0`` interface values. + +.. note:: + + You should use a range of free IP addresses that is valid for your + network. 
If you are not sure, at least try to avoid the DHCP address + range: + + - Pick a small range (/29 gives an 8-address range, 6 of + which will be usable). + + - Use :command:`nmap` to check a range's availability. For example, + 192.168.1.56/29 represents a small range of addresses + (192.168.1.56-63, with 57-62 usable), and you could run the + command :command:`nmap -sn 192.168.1.56/29` to check whether the entire + range is currently unused. + +Bulk delete floating IP addresses +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To delete a range of floating IP addresses, run: + +.. code-block:: console + + $ nova floating-ip-bulk-delete RANGE_TO_DELETE + +For example: + +.. code-block:: console + + $ nova floating-ip-bulk-delete 192.168.1.56/29 diff --git a/doc/admin-guide-cloud/source/cli_admin_manage_stacks.rst b/doc/admin-guide-cloud/source/cli_admin_manage_stacks.rst new file mode 100644 index 0000000000..8daeb8c3fa --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_admin_manage_stacks.rst @@ -0,0 +1,35 @@ +====================================== +Launch and manage stacks using the CLI +====================================== + +The Orchestration service provides a template-based +orchestration engine. Administrators can use the orchestration engine +to create and manage OpenStack cloud infrastructure resources. For +example, an administrator can define storage, networking, instances, +and applications to use as a repeatable running environment. + +Templates are used to create stacks, which are collections +of resources. For example, a stack might include instances, +floating IPs, volumes, security groups, or users. +The Orchestration service offers access to all OpenStack +core services through a single modular template, with additional +orchestration capabilities such as auto-scaling and basic +high availability. 
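As an illustration of the template concept described above, a minimal Orchestration template might look like the following sketch. It assumes the HOT template format, and the image and flavor names (``cirros``, ``m1.tiny``) are illustrative; a real template depends on what is available in your cloud:

```yaml
heat_template_version: 2015-04-30

description: Minimal example stack containing a single instance

resources:
  my_instance:
    type: OS::Nova::Server
    properties:
      image: cirros     # assumed image name, replace with one from your cloud
      flavor: m1.tiny   # assumed flavor name
```

A template like this would typically be passed to :command:`heat stack-create` to produce a stack.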
+ +For information about: + +- basic creation and deletion of Orchestration stacks, refer + to the `OpenStack End User Guide `_ + +- **heat** CLI commands, see the `OpenStack Command Line Interface Reference + `_ + +As an administrator, you can also carry out stack functions +on behalf of your users. For example, to resume, suspend, +or delete a stack, run: + +.. code-block:: console + + $ heat action-resume stackID + $ heat action-suspend stackID + $ heat stack-delete stackID diff --git a/doc/admin-guide-cloud/source/cli_analyzing-log-files-with-swift.rst b/doc/admin-guide-cloud/source/cli_analyzing-log-files-with-swift.rst new file mode 100644 index 0000000000..bf6843a1b1 --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_analyzing-log-files-with-swift.rst @@ -0,0 +1,210 @@ +================= +Analyze log files +================= + +Use the swift command-line client for Object Storage to analyze log files. + +The swift client is simple to use, scalable, and flexible. + +Use the swift client :option:`-o` or :option:`--output` option to get +short answers to questions about logs. + +You can use the :option:`-o` or :option:`--output` option with a single object +download to redirect the command output to a specific file or to STDOUT +(``-``). The ability to redirect the output to STDOUT enables you to +pipe (``|``) data without saving it to disk first. + +Upload and analyze log files +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +#. This example assumes that the ``logtest`` directory contains the + following log files. + + .. code-block:: console + + 2010-11-16-21_access.log + 2010-11-16-22_access.log + 2010-11-15-21_access.log + 2010-11-15-22_access.log + + + Each file uses the following line format. + + .. 
code-block:: console + + Nov 15 21:53:52 lucid64 proxy-server - 127.0.0.1 15/Nov/2010/22/53/52 DELETE /v1/AUTH_cd4f57824deb4248a533f2c28bf156d3/2eefc05599d44df38a7f18b0b42ffedd HTTP/1.0 204 - \ + - test%3Atester%2CAUTH_tkcdab3c6296e249d7b7e2454ee57266ff - - - txaba5984c-aac7-460e-b04b-afc43f0c6571 - 0.0432 + + +#. Change into the ``logtest`` directory: + + .. code-block:: console + + $ cd logtest + +#. Upload the log files into the ``logtest`` container: + + .. code-block:: console + + $ swift -A http://swift-auth.com:11000/v1.0 -U test:tester -K testing upload logtest *.log + + .. code-block:: console + + 2010-11-16-21_access.log + 2010-11-16-22_access.log + 2010-11-15-21_access.log + 2010-11-15-22_access.log + +#. Get statistics for the account: + + .. code-block:: console + + $ swift -A http://swift-auth.com:11000/v1.0 -U test:tester -K testing \ + -q stat + + .. code-block:: console + + Account: AUTH_cd4f57824deb4248a533f2c28bf156d3 + Containers: 1 + Objects: 4 + Bytes: 5888268 + +#. Get statistics for the ``logtest`` container: + + .. code-block:: console + + $ swift -A http://swift-auth.com:11000/v1.0 -U test:tester -K testing \ + stat logtest + + .. code-block:: console + + Account: AUTH_cd4f57824deb4248a533f2c28bf156d3 + Container: logtest + Objects: 4 + Bytes: 5864468 + Read ACL: + Write ACL: + +#. List all objects in the ``logtest`` container: + + .. code-block:: console + + $ swift -A http://swift-auth.com:11000/v1.0 -U test:tester -K testing \ + list logtest + + .. code-block:: console + + 2010-11-15-21_access.log + 2010-11-15-22_access.log + 2010-11-16-21_access.log + 2010-11-16-22_access.log + +Download and analyze an object +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +This example uses the :option:`-o` option and a hyphen (``-``) to get +information about an object. + +Use the :command:`swift download` command to download the object. In this +command, stream the output to ``awk`` to break down requests by return +code for the hour ``2200`` on November 16th, 2010. 
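Before running the pipeline, you can sanity-check which fields ``awk`` will see by splitting the sample log line shown earlier. The following short Python sketch is only a cross-check of the field positions (field numbers are 1-based in awk, so ``$9`` and ``$12`` map to indexes 8 and 11):

```python
# Sample proxy-server log line, taken from the example above.
line = ("Nov 15 21:53:52 lucid64 proxy-server - 127.0.0.1 "
        "15/Nov/2010/22/53/52 DELETE "
        "/v1/AUTH_cd4f57824deb4248a533f2c28bf156d3/2eefc05599d44df38a7f18b0b42ffedd "
        "HTTP/1.0 204 - "
        "- test%3Atester%2CAUTH_tkcdab3c6296e249d7b7e2454ee57266ff - - - "
        "txaba5984c-aac7-460e-b04b-afc43f0c6571 - 0.0432")

# Split on whitespace, as awk does by default.
fields = line.split()

# awk's $9 is the request method; $12 is the HTTP return code.
print(fields[8], fields[11])  # DELETE 204
```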
+ +Using the log line format, find the request type in column 9 and the +return code in column 12. + +After ``awk`` processes the output, it pipes it to ``sort`` and ``uniq +-c`` to sum up the number of occurrences for each request type and +return code combination. + +#. Download an object: + + .. code-block:: console + + $ swift -A http://swift-auth.com:11000/v1.0 -U test:tester -K testing \ + download -o - logtest 2010-11-16-22_access.log | awk '{ print \ + $9"-"$12}' | sort | uniq -c + + .. code-block:: console + + 805 DELETE-204 + 12 DELETE-404 + 2 DELETE-409 + 723 GET-200 + 142 GET-204 + 74 GET-206 + 80 GET-304 + 34 GET-401 + 5 GET-403 + 18 GET-404 + 166 GET-412 + 2 GET-416 + 50 HEAD-200 + 17 HEAD-204 + 20 HEAD-401 + 8 HEAD-404 + 30 POST-202 + 25 POST-204 + 22 POST-400 + 6 POST-404 + 842 PUT-201 + 2 PUT-202 + 32 PUT-400 + 4 PUT-403 + 4 PUT-404 + 2 PUT-411 + 6 PUT-412 + 6 PUT-413 + 2 PUT-422 + 8 PUT-499 + +#. Discover how many PUT requests are in each log file. + + Use a bash for loop with awk and swift with the :option:`-o` or + :option:`--output` option and a hyphen (``-``) to discover how many + PUT requests are in each log file. + + Run the :command:`swift list` command to list objects in the logtest + container. Then, for each item in the list, run the + :command:`swift download -o -` command. Pipe the output into grep to + filter the PUT requests. Finally, pipe into ``wc -l`` to count the lines. + + .. code-block:: console + + $ for f in `swift -A http://swift-auth.com:11000/v1.0 -U test:tester \ + -K testing list logtest` ; \ + do echo -ne "PUTS - " ; swift -A \ + http://swift-auth.com:11000/v1.0 -U test:tester \ + -K testing download -o - logtest $f | grep PUT | wc -l ; \ + done + + .. code-block:: console + + 2010-11-15-21_access.log - PUTS - 402 + 2010-11-15-22_access.log - PUTS - 1091 + 2010-11-16-21_access.log - PUTS - 892 + 2010-11-16-22_access.log - PUTS - 910 + +#. List the object names that begin with a specified string. + +#. 
Run the :command:`swift list -p 2010-11-15` command to list objects + in the logtest container that begin with the ``2010-11-15`` string. + +#. For each item in the list, run the :command:`swift download -o -` command. + +#. Pipe the output to :command:`grep` and :command:`wc`. + Use the :command:`echo` command to display the object name. + + .. code-block:: console + + $ for f in `swift -A http://swift-auth.com:11000/v1.0 -U test:tester \ + -K testing list -p 2010-11-15 logtest` ; \ + do echo -ne "$f - PUTS - " ; swift -A \ + http://127.0.0.1:11000/v1.0 -U test:tester \ + -K testing download -o - logtest $f | grep PUT | wc -l ; \ + done + + .. code-block:: console + + 2010-11-15-21_access.log - PUTS - 402 + 2010-11-15-22_access.log - PUTS - 910 + diff --git a/doc/admin-guide-cloud/source/cli_cinder_quotas.rst b/doc/admin-guide-cloud/source/cli_cinder_quotas.rst new file mode 100644 index 0000000000..b7c174e8dc --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_cinder_quotas.rst @@ -0,0 +1,163 @@ +=================================== +Manage Block Storage service quotas +=================================== + +As an administrative user, you can update the OpenStack Block +Storage service quotas for a project. You can also update the quota +defaults for a new project. + +**Block Storage quotas** + +=================== ============================================= + Property name Defines the number of +=================== ============================================= + gigabytes Volume gigabytes allowed for each project. + snapshots Volume snapshots allowed for each project. + volumes Volumes allowed for each project. +=================== ============================================= + +View Block Storage quotas +~~~~~~~~~~~~~~~~~~~~~~~~~ + +Administrative users can view Block Storage service quotas. + +#. Obtain the project ID. + + For example: + + .. code-block:: console + + $ project_id=$(openstack project show -f value -c id PROJECT_NAME) + +#. 
List the default quotas for a project (tenant): + + .. code-block:: console + + $ cinder quota-defaults PROJECT_ID + + For example: + + .. code-block:: console + + $ cinder quota-defaults $project_id + +-----------+-------+ + | Property | Value | + +-----------+-------+ + | gigabytes | 1000 | + | snapshots | 10 | + | volumes | 10 | + +-----------+-------+ + +#. View Block Storage service quotas for a project (tenant): + + .. code-block:: console + + $ cinder quota-show PROJECT_ID + + For example: + + .. code-block:: console + + $ cinder quota-show $project_id + +-----------+-------+ + | Property | Value | + +-----------+-------+ + | gigabytes | 1000 | + | snapshots | 10 | + | volumes | 10 | + +-----------+-------+ + +#. Show the current usage of a per-project quota: + + .. code-block:: console + + $ cinder quota-usage PROJECT_ID + + For example: + + .. code-block:: console + + $ cinder quota-usage $project_id + +-----------+--------+----------+-------+ + | Type | In_use | Reserved | Limit | + +-----------+--------+----------+-------+ + | gigabytes | 0 | 0 | 1000 | + | snapshots | 0 | 0 | 10 | + | volumes | 0 | 0 | 15 | + +-----------+--------+----------+-------+ + +Edit and update Block Storage service quotas +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Administrative users can edit and update Block Storage +service quotas. + +#. Clear per-project quota limits. + + .. code-block:: console + + $ cinder quota-delete PROJECT_ID + +#. To update a default value for a new project, + update the property in the :guilabel:`cinder.quota` + section of the ``/etc/cinder/cinder.conf`` file. + For more information, see the `Block Storage + Configuration Reference `_. + +#. To update Block Storage service quotas for an existing project (tenant): + + .. code-block:: console + + $ cinder quota-update --QUOTA_NAME QUOTA_VALUE PROJECT_ID + + Replace QUOTA_NAME with the quota that is to be updated, QUOTA_VALUE + with the required new value, and PROJECT_ID with the required project + ID. 
+ + For example: + + .. code-block:: console + + $ cinder quota-update --volumes 15 $project_id + $ cinder quota-show $project_id + +-----------+-------+ + | Property | Value | + +-----------+-------+ + | gigabytes | 1000 | + | snapshots | 10 | + | volumes | 15 | + +-----------+-------+ + + +#. Clear per-project quota limits. + + .. code-block:: console + + $ cinder quota-delete PROJECT_ID + +Remove a service +~~~~~~~~~~~~~~~~ + +#. Determine the binary and host of the service you want to remove. + + .. code-block:: console + + $ cinder service-list + +------------------+----------------------+------+---------+-------+----------------------------+-----------------+ + | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | + +------------------+----------------------+------+---------+-------+----------------------------+-----------------+ + | cinder-scheduler | devstack | nova | enabled | up | 2015-10-13T15:21:48.000000 | - | + | cinder-volume | devstack@lvmdriver-1 | nova | enabled | up | 2015-10-13T15:21:52.000000 | - | + +------------------+----------------------+------+---------+-------+----------------------------+-----------------+ + +#. Disable the service. + + .. code-block:: console + + $ cinder service-disable HOST_NAME BINARY_NAME + +#. Remove the service from the database. + + .. code-block:: console + + $ cinder-manage service remove BINARY_NAME HOST_NAME diff --git a/doc/admin-guide-cloud/source/cli_cinder_scheduling.rst b/doc/admin-guide-cloud/source/cli_cinder_scheduling.rst new file mode 100644 index 0000000000..a1566fabe8 --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_cinder_scheduling.rst @@ -0,0 +1,53 @@ +=============================== +Manage Block Storage scheduling +=============================== + +As an administrative user, you have some control over which volume +back end your volumes reside on. You can specify affinity or +anti-affinity between two volumes. 
Affinity between volumes means +that they are stored on the same back end, whereas anti-affinity +means that they are stored on different back ends. + +For information on how to set up multiple back ends for Cinder, +refer to the guide for `Configuring multiple-storage back ends +`__. + +Example Usages +~~~~~~~~~~~~~~ + +#. Create a new volume on the same back end as Volume_A: + + .. code-block:: console + + $ cinder create --hint same_host=Volume_A-UUID SIZE + +#. Create a new volume on a different back end than Volume_A: + + .. code-block:: console + + $ cinder create --hint different_host=Volume_A-UUID SIZE + +#. Create a new volume on the same back end as Volume_A and Volume_B: + + .. code-block:: console + + $ cinder create --hint same_host=Volume_A-UUID --hint same_host=Volume_B-UUID SIZE + + Or: + + .. code-block:: console + + $ cinder create --hint same_host="[Volume_A-UUID, Volume_B-UUID]" SIZE + +#. Create a new volume on a different back end than both Volume_A and + Volume_B: + + .. code-block:: console + + $ cinder create --hint different_host=Volume_A-UUID --hint different_host=Volume_B-UUID SIZE + + Or: + + .. code-block:: console + + $ cinder create --hint different_host="[Volume_A-UUID, Volume_B-UUID]" SIZE diff --git a/doc/admin-guide-cloud/source/cli_keystone_manage_services.rst b/doc/admin-guide-cloud/source/cli_keystone_manage_services.rst new file mode 100644 index 0000000000..606ada118a --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_keystone_manage_services.rst @@ -0,0 +1,155 @@ +============================================ +Create and manage services and service users +============================================ + +The Identity service enables you to define services, as +follows: + +- Service catalog template. The Identity service acts + as a service catalog of endpoints for other OpenStack + services. The ``/etc/default_catalog.templates`` + template file defines the endpoints for services. 
When + the Identity service uses a template file back end, + any changes that are made to the endpoints are cached. + These changes do not persist when you restart the + service or reboot the machine. +- An SQL back end for the catalog service. When the + Identity service is online, you must add the services + to the catalog. When you deploy a system for + production, use the SQL back end. + +The ``auth_token`` middleware supports the +use of either a shared secret or users for each +service. + +To authenticate users against the Identity service, you must +create a service user for each OpenStack service. For example, +create a service user for the Compute, Block Storage, and +Networking services. + +To configure the OpenStack services with service users, +create a project for all services and create users for each +service. Assign the admin role to each service user and +project pair. This role enables users to validate tokens and +authenticate and authorize other user requests. + +Create a service +~~~~~~~~~~~~~~~~ + +#. List the available services: + + .. code-block:: console + + $ openstack service list + +----------------------------------+----------+------------+ + | ID | Name | Type | + +----------------------------------+----------+------------+ + | 9816f1faaa7c4842b90fb4821cd09223 | cinder | volume | + | 1250f64f31e34dcd9a93d35a075ddbe1 | cinderv2 | volumev2 | + | da8cf9f8546b4a428c43d5e032fe4afc | ec2 | ec2 | + | 5f105eeb55924b7290c8675ad7e294ae | glance | image | + | dcaa566e912e4c0e900dc86804e3dde0 | keystone | identity | + | 4a715cfbc3664e9ebf388534ff2be76a | nova | compute | + | 1aed4a6cf7274297ba4026cf5d5e96c5 | novav21 | computev21 | + | bed063c790634c979778551f66c8ede9 | neutron | network | + | 6feb2e0b98874d88bee221974770e372 | s3 | s3 | + +----------------------------------+----------+------------+ + +#. To create a service, run this command: + + .. 
code-block:: console + + $ openstack service create --name SERVICE_NAME --description SERVICE_DESCRIPTION SERVICE_TYPE + + The arguments are: + - ``service_name``: the unique name of the new service. + - ``service_type``: the service type, such as ``identity``, + ``compute``, ``network``, ``image``, ``object-store`` + or any other service identifier string. + - ``service_description``: the description of the service. + + For example, to create a ``swift`` service of type + ``object-store``, run this command: + + .. code-block:: console + + $ openstack service create --name swift --description "object store service" object-store + +-------------+----------------------------------+ + | Field | Value | + +-------------+----------------------------------+ + | description | object store service | + | enabled | True | + | id | 84c23f4b942c44c38b9c42c5e517cd9a | + | name | swift | + | type | object-store | + +-------------+----------------------------------+ + +#. To get details for a service, run this command: + + .. code-block:: console + + $ openstack service show SERVICE_TYPE|SERVICE_NAME|SERVICE_ID + + For example: + + .. code-block:: console + + $ openstack service show object-store + +-------------+----------------------------------+ + | Field | Value | + +-------------+----------------------------------+ + | description | object store service | + | enabled | True | + | id | 84c23f4b942c44c38b9c42c5e517cd9a | + | name | swift | + | type | object-store | + +-------------+----------------------------------+ + +Create service users +~~~~~~~~~~~~~~~~~~~~ + +#. Create a project for the service users. + Typically, this project is named ``service``, + but choose any name you like: + + .. 
code-block:: console + + $ openstack project create service + +-------------+----------------------------------+ + | Field | Value | + +-------------+----------------------------------+ + | description | None | + | enabled | True | + | id | 3e9f3f5399624b2db548d7f871bd5322 | + | name | service | + +-------------+----------------------------------+ + +#. Create service users for the relevant services for your + deployment. + +#. Assign the admin role to the user-project pair. + + .. code-block:: console + + $ openstack role add --project service --user SERVICE_USER_NAME admin + +-------+----------------------------------+ + | Field | Value | + +-------+----------------------------------+ + | id | 233109e756c1465292f31e7662b429b1 | + | name | admin | + +-------+----------------------------------+ + +Delete a service +~~~~~~~~~~~~~~~~ + +To delete a specified service, specify its ID. + +.. code-block:: console + + $ openstack service delete SERVICE_TYPE|SERVICE_NAME|SERVICE_ID + +For example: + +.. code-block:: console + + $ openstack service delete object-store diff --git a/doc/admin-guide-cloud/source/cli_manage_flavors.rst b/doc/admin-guide-cloud/source/cli_manage_flavors.rst new file mode 100644 index 0000000000..05630283b2 --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_manage_flavors.rst @@ -0,0 +1,150 @@ +============== +Manage flavors +============== + +In OpenStack, flavors define the compute, memory, and +storage capacity of nova computing instances. To put it +simply, a flavor is an available hardware configuration for a +server. It defines the ``size`` of a virtual server +that can be launched. + +.. note:: + + Flavors can also determine on which compute host a flavor + can be used to launch an instance. For information + about customizing flavors, refer to `Flavors + `_. + +A flavor consists of the following parameters: + +Flavor ID + Unique ID (integer or UUID) for the new flavor. If + specifying 'auto', a UUID will be automatically generated. 
+ +Name + Name for the new flavor. + +VCPUs + Number of virtual CPUs to use. + +Memory MB + Amount of RAM to use (in megabytes). + +Root Disk GB + Amount of disk space (in gigabytes) to use for + the root (/) partition. + +Ephemeral Disk GB + Amount of disk space (in gigabytes) to use for + the ephemeral partition. If unspecified, the value + is 0 by default. + Ephemeral disks offer machine local disk storage + linked to the lifecycle of a VM instance. When a + VM is terminated, all data on the ephemeral disk + is lost. Ephemeral disks are not included in any + snapshots. + +Swap + Amount of swap space (in megabytes) to use. If + unspecified, the value is 0 by default. + +The default flavors are: + +============ ========= =============== =============== + Flavor VCPUs Disk (in GB) RAM (in MB) +============ ========= =============== =============== + m1.tiny 1 1 512 + m1.small 1 20 2048 + m1.medium 2 40 4096 + m1.large 4 80 8192 + m1.xlarge 8 160 16384 +============ ========= =============== =============== + +You can create and manage flavors with the nova +**flavor-*** commands provided by the ``python-novaclient`` +package. + +Create a flavor +~~~~~~~~~~~~~~~ + +#. List flavors to show the ID and name, the amount + of memory, the amount of disk space for the root + partition and for the ephemeral partition, the + swap, and the number of virtual CPUs for each + flavor: + + .. code-block:: console + + $ nova flavor-list + +#. To create a flavor, specify a name, ID, RAM + size, disk size, and the number of VCPUs for the + flavor, as follows: + + .. code-block:: console + + $ nova flavor-create FLAVOR_NAME FLAVOR_ID RAM_IN_MB ROOT_DISK_IN_GB NUMBER_OF_VCPUS + + .. note:: + + Unique ID (integer or UUID) for the new flavor. If + specifying 'auto', a UUID will be automatically generated. 
+ + Here is an example with additional optional + parameters filled in that creates a public ``extra + tiny`` flavor that automatically gets an ID + assigned, with 256 MB memory, no disk space, and + one VCPU. The rxtx-factor indicates the slice of + bandwidth that the instances with this flavor can + use (through the Virtual Interface (vif) creation + in the hypervisor): + + .. code-block:: console + + $ nova flavor-create --is-public true m1.extra_tiny auto 256 0 1 --rxtx-factor .1 + +#. If an individual user or group of users needs a custom + flavor that you do not want other tenants to have access to, + you can change the flavor's access to make it a private flavor. + See `Private Flavors in the OpenStack Operations Guide `_. + + For a list of optional parameters, run this command: + + .. code-block:: console + + $ nova help flavor-create + +#. After you create a flavor, assign it to a + project by specifying the flavor name or ID and + the tenant ID: + + .. code-block:: console + + $ nova flavor-access-add FLAVOR TENANT_ID + +#. In addition, you can set or unset ``extra_spec`` for the existing flavor. + The ``extra_spec`` metadata keys can influence the instance directly when + it is launched. If a flavor sets the ``extra_spec`` key-value pair + ``quota:vif_outbound_peak=65536``, the instance's + outbound peak bandwidth I/O should be less than or equal to 512 Mbps. + Several aspects of an instance can be tuned this way, including ``CPU limits``, + ``Disk tuning``, ``Bandwidth I/O``, ``Watchdog behavior``, and + ``Random-number generator``. + For information about supporting metadata keys, see + `Flavors + `__. + + For a list of optional parameters, run this command: + + .. code-block:: console + + $ nova help flavor-key + +Delete a flavor +~~~~~~~~~~~~~~~ + +Delete a specified flavor, as follows: + +.. 
code-block:: console + + $ nova flavor-delete FLAVOR_ID diff --git a/doc/admin-guide-cloud/source/cli_manage_projects_users_and_roles.rst b/doc/admin-guide-cloud/source/cli_manage_projects_users_and_roles.rst new file mode 100644 index 0000000000..1929076dd3 --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_manage_projects_users_and_roles.rst @@ -0,0 +1,351 @@ +================================= +Manage projects, users, and roles +================================= + +As a cloud administrator, you manage projects, users, and +roles. Projects are organizational units in the cloud to which +you can assign users. Projects are also known as *tenants* or +*accounts*. Users can be members of one or more projects. Roles +define which actions users can perform. You assign roles to +user-project pairs. + +You can define actions for OpenStack service roles in the +``/etc/PROJECT/policy.json`` files. For example, define actions for +Compute service roles in the ``/etc/nova/policy.json`` file. + +You can manage projects, users, and roles independently from each other. + +During cloud set up, the operator defines at least one project, user, +and role. + +You can add, update, and delete projects and users, assign users to +one or more projects, and change or remove the assignment. To enable or +temporarily disable a project or user, update that project or user. +You can also change quotas at the project level. + +Before you can delete a user account, you must remove the user account +from its primary project. + +Before you can run client commands, you must download and +source an OpenStack RC file. See `Download and source the OpenStack RC file +`_. + +Projects +~~~~~~~~ + +A project is a group of zero or more users. In Compute, a project owns +virtual machines. In Object Storage, a project owns containers. Users +can be associated with more than one project. Each project and user +pairing can have a role associated with it. 
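The project, user, and role relationship described above can be sketched in a few lines of Python. This is an illustrative model only, not part of any OpenStack client; the class and all names in it are hypothetical:

```python
# Minimal model of the Identity concepts above: a role is not granted
# to a user globally, but to a (user, project) pair, and a user can
# belong to several projects with a different role in each.
class Identity:
    def __init__(self):
        self.assignments = {}  # (user, project) -> set of role names

    def assign_role(self, user, project, role):
        self.assignments.setdefault((user, project), set()).add(role)

    def roles(self, user, project):
        return self.assignments.get((user, project), set())


identity = Identity()
identity.assign_role("demo", "project-new", "Member")
identity.assign_role("demo", "test_project", "new-role")

# The same user carries different roles in different projects, and
# none at all in a project with no assignment.
print(sorted(identity.roles("demo", "project-new")))   # ['Member']
print(sorted(identity.roles("demo", "test_project")))  # ['new-role']
print(sorted(identity.roles("demo", "admin")))         # []
```

The commands in the sections that follow create and wire up exactly these three kinds of objects.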
+ +List projects +^^^^^^^^^^^^^ + +List all projects with their ID, name, and whether they are +enabled or disabled: + +.. code-block:: console + + $ openstack project list + +----------------------------------+--------------------+ + | id | name | + +----------------------------------+--------------------+ + | f7ac731cc11f40efbc03a9f9e1d1d21f | admin | + | c150ab41f0d9443f8874e32e725a4cc8 | alt_demo | + | a9debfe41a6d4d09a677da737b907d5e | demo | + | 9208739195a34c628c58c95d157917d7 | invisible_to_admin | + | 3943a53dc92a49b2827fae94363851e1 | service | + | 80cab5e1f02045abad92a2864cfd76cb | test_project | + +----------------------------------+--------------------+ + +Create a project +^^^^^^^^^^^^^^^^ + +Create a project named ``new-project``: + +.. code-block:: console + + $ openstack project create --description 'my new project' new-project + +-------------+----------------------------------+ + | Field | Value | + +-------------+----------------------------------+ + | description | my new project | + | enabled | True | + | id | 1a4a0618b306462c9830f876b0bd6af2 | + | name | new-project | + +-------------+----------------------------------+ + +Update a project +^^^^^^^^^^^^^^^^ + +Specify the project ID to update a project. You can update the name, +description, and enabled status of a project. + +- To temporarily disable a project: + + .. code-block:: console + + $ openstack project set PROJECT_ID --disable + +- To enable a disabled project: + + .. code-block:: console + + $ openstack project set PROJECT_ID --enable + +- To update the name of a project: + + .. code-block:: console + + $ openstack project set PROJECT_ID --name project-new + +- To verify your changes, show information for the updated project: + + .. 
code-block:: console + + $ openstack project show PROJECT_ID + +-------------+----------------------------------+ + | Field | Value | + +-------------+----------------------------------+ + | description | my new project | + | enabled | True | + | id | 1a4a0618b306462c9830f876b0bd6af2 | + | name | project-new | + +-------------+----------------------------------+ + +Delete a project +^^^^^^^^^^^^^^^^ + +Specify the project ID to delete a project: + +.. code-block:: console + + $ openstack project delete PROJECT_ID + +Users +~~~~~ + +List users +^^^^^^^^^^ + +List all users: + +.. code-block:: console + + $ openstack user list + +----------------------------------+----------+ + | id | name | + +----------------------------------+----------+ + | 352b37f5c89144d4ad0534139266d51f | admin | + | 86c0de739bcb4802b8dc786921355813 | demo | + | 32ec34aae8ea432e8af560a1cec0e881 | glance | + | 7047fcb7908e420cb36e13bbd72c972c | nova | + +----------------------------------+----------+ + +Create a user +^^^^^^^^^^^^^ + +To create a user, you must specify a name. Optionally, you can +specify a tenant ID, password, and email address. It is recommended +that you include the tenant ID and password because the user cannot +log in to the dashboard without this information. + +Create the ``new-user`` user: + +.. code-block:: console + + $ openstack user create --project new-project --password PASSWORD new-user + +----------+----------------------------------+ + | Field | Value | + +----------+----------------------------------+ + | email | | + | enabled | True | + | id | 6e5140962b424cb9814fb172889d3be2 | + | name | new-user | + | tenantId | new-project | + +----------+----------------------------------+ + +Update a user +^^^^^^^^^^^^^ + +You can update the name, email address, and enabled status for a user. + +- To temporarily disable a user account: + + .. 
code-block:: console

+
+     $ openstack user set USER_NAME --disable
+
+  If you disable a user account, the user cannot log in to the
+  dashboard. However, data for the user account is maintained, so you
+  can enable the user at any time.
+
+- To enable a disabled user account:
+
+  .. code-block:: console
+
+     $ openstack user set USER_NAME --enable
+
+- To change the name and email address for a user account:
+
+  .. code-block:: console
+
+     $ openstack user set USER_NAME --name user-new --email new-user@example.com
+     User has been updated.
+
+Delete a user
+^^^^^^^^^^^^^
+
+Delete a specified user account:
+
+.. code-block:: console
+
+   $ openstack user delete USER_NAME
+
+Roles and role assignments
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+List available roles
+^^^^^^^^^^^^^^^^^^^^
+
+List the available roles:
+
+.. code-block:: console
+
+   $ openstack role list
+   +----------------------------------+---------------+
+   | id                               | name          |
+   +----------------------------------+---------------+
+   | 71ccc37d41c8491c975ae72676db687f | Member        |
+   | 149f50a1fe684bfa88dae76a48d26ef7 | ResellerAdmin |
+   | 9fe2ff9ee4384b1894a90878d3e92bab | _member_      |
+   | 6ecf391421604da985db2f141e46a7c8 | admin         |
+   | deb4fffd123c4d02a907c2c74559dccf | anotherrole   |
+   +----------------------------------+---------------+
+
+Create a role
+^^^^^^^^^^^^^
+
+Users can be members of multiple projects. To assign users to multiple
+projects, define a role and assign that role to a user-project pair.
+
+Create the ``new-role`` role:
+
+.. code-block:: console
+
+   $ openstack role create new-role
+   +--------+----------------------------------+
+   | Field  | Value                            |
+   +--------+----------------------------------+
+   | id     | bef1f95537914b1295da6aa038ef4de6 |
+   | name   | new-role                         |
+   +--------+----------------------------------+
+
+Assign a role
+^^^^^^^^^^^^^
+
+To assign a user to a project, you must assign the role to a
+user-project pair. To do this, you need the user, role, and project
+IDs.
+
+#.
List users and note the user ID you want to assign to the role: + + .. code-block:: console + + $ openstack user list + +----------------------------------+----------+---------+----------------------+ + | id | name | enabled | email | + +----------------------------------+----------+---------+----------------------+ + | 352b37f5c89144d4ad0534139266d51f | admin | True | admin@example.com | + | 981422ec906d4842b2fc2a8658a5b534 | alt_demo | True | alt_demo@example.com | + | 036e22a764ae497992f5fb8e9fd79896 | cinder | True | cinder@example.com | + | 86c0de739bcb4802b8dc786921355813 | demo | True | demo@example.com | + | 32ec34aae8ea432e8af560a1cec0e881 | glance | True | glance@example.com | + | 7047fcb7908e420cb36e13bbd72c972c | nova | True | nova@example.com | + +----------------------------------+----------+---------+----------------------+ + +#. List role IDs and note the role ID you want to assign: + + .. code-block:: console + + $ openstack role list + +----------------------------------+---------------+ + | id | name | + +----------------------------------+---------------+ + | 71ccc37d41c8491c975ae72676db687f | Member | + | 149f50a1fe684bfa88dae76a48d26ef7 | ResellerAdmin | + | 9fe2ff9ee4384b1894a90878d3e92bab | _member_ | + | 6ecf391421604da985db2f141e46a7c8 | admin | + | deb4fffd123c4d02a907c2c74559dccf | anotherrole | + | bef1f95537914b1295da6aa038ef4de6 | new-role | + +----------------------------------+---------------+ + +#. List projects and note the project ID you want to assign to the role: + + .. 
code-block:: console + + $ openstack project list + +----------------------------------+--------------------+---------+ + | id | name | enabled | + +----------------------------------+--------------------+---------+ + | f7ac731cc11f40efbc03a9f9e1d1d21f | admin | True | + | c150ab41f0d9443f8874e32e725a4cc8 | alt_demo | True | + | a9debfe41a6d4d09a677da737b907d5e | demo | True | + | 9208739195a34c628c58c95d157917d7 | invisible_to_admin | True | + | caa9b4ce7d5c4225aa25d6ff8b35c31f | new-user | True | + | 1a4a0618b306462c9830f876b0bd6af2 | project-new | True | + | 3943a53dc92a49b2827fae94363851e1 | service | True | + | 80cab5e1f02045abad92a2864cfd76cb | test_project | True | + +----------------------------------+--------------------+---------+ + +#. Assign a role to a user-project pair. In this example, assign the + ``new-role`` role to the ``demo`` and ``test-project`` pair: + + .. code-block:: console + + $ openstack role add --user USER_NAME --project TENANT_ID ROLE_NAME + +#. Verify the role assignment: + + .. code-block:: console + + $ openstack role list --user USER_NAME --project TENANT_ID + +--------------+----------+---------------------------+--------------+ + | id | name | user_id | tenant_id | + +--------------+----------+---------------------------+--------------+ + | bef1f9553... | new-role | 86c0de739bcb4802b21355... | 80cab5e1f... | + +--------------+----------+---------------------------+--------------+ + +View role details +^^^^^^^^^^^^^^^^^ + +View details for a specified role: + +.. code-block:: console + + $ openstack role show ROLE_NAME + +----------+----------------------------------+ + | Field | Value | + +----------+----------------------------------+ + | id | bef1f95537914b1295da6aa038ef4de6 | + | name | new-role | + +----------+----------------------------------+ + +Remove a role +^^^^^^^^^^^^^ + +Remove a role from a user-project pair: + +#. Run the :command:`openstack role remove` command: + + .. 
code-block:: console + + $ openstack role remove --user USER_NAME --project TENANT_ID ROLE_NAME + +#. Verify the role removal: + + .. code-block:: console + + $ openstack role list --user USER_NAME --project TENANT_ID + + If the role was removed, the command output omits the removed role. diff --git a/doc/admin-guide-cloud/source/cli_manage_services.rst b/doc/admin-guide-cloud/source/cli_manage_services.rst new file mode 100644 index 0000000000..fba5b48ea2 --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_manage_services.rst @@ -0,0 +1,9 @@ +=============== +Manage services +=============== + +.. toctree:: + :maxdepth: 2 + + cli_keystone_manage_services.rst + cli_nova_manage_services.rst diff --git a/doc/admin-guide-cloud/source/cli_manage_shares.rst b/doc/admin-guide-cloud/source/cli_manage_shares.rst new file mode 100644 index 0000000000..0e81340b00 --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_manage_shares.rst @@ -0,0 +1,40 @@ +.. _share: + +============= +Manage shares +============= + +A share is provided by file storage. You can give access to a share to +instances. To create and manage shares, use ``manila`` client commands. + +Migrate a share +~~~~~~~~~~~~~~~ + +As an administrator, you can migrate a share with its data from one +location to another in a manner that is transparent to users and +workloads. + +Possible use cases for data migration include: + +- Bring down a physical storage device for maintenance without + disrupting workloads. + +- Modify the properties of a share. + +- Free up space in a thinly-provisioned back end. + +Migrate a share with the :command:`manila migrate` command, as shown in the +following example: + +.. code-block:: console + + $ manila migrate shareID destinationHost --force-host-copy True|False + +In this example, :option:`--force-host-copy True` forces the generic +host-based migration mechanism and bypasses any driver optimizations. 
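The destination argument packs the target host and its storage pool into a single ``host#pool`` string; a minimal sketch of how such a value splits (illustrative only; the example value is hypothetical):

```python
def split_destination(destination):
    """Split a 'host#pool' destination string into host and pool parts."""
    host, sep, pool = destination.partition("#")
    if not sep or not host or not pool:
        raise ValueError("expected 'host#pool', got %r" % (destination,))
    return host, pool


# A typical destination names a host (with its back end) plus a pool.
print(split_destination("stack@generic1#GENERIC1"))
# ('stack@generic1', 'GENERIC1')
```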
+``destinationHost`` is in the format ``host#pool``, which specifies
+the destination host and its pool.
+
+.. note::
+
+   If the user is not an administrator, the migration fails.
diff --git a/doc/admin-guide-cloud/source/cli_networking_advanced_quotas.rst b/doc/admin-guide-cloud/source/cli_networking_advanced_quotas.rst
new file mode 100644
index 0000000000..ff2a51bc10
--- /dev/null
+++ b/doc/admin-guide-cloud/source/cli_networking_advanced_quotas.rst
@@ -0,0 +1,321 @@
+================================
+Manage Networking service quotas
+================================
+
+A quota limits the number of available resources. A default
+quota might be enforced for all tenants. When you try to create
+more resources than the quota allows, an error occurs:
+
+.. code-block:: console
+
+   $ neutron net-create test_net
+   Quota exceeded for resources: ['network']
+
+Per-tenant quota configuration is also supported by the quota
+extension API. See :ref:`cfg_quotas_per_tenant` for details.
+
+Basic quota configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+In the Networking default quota mechanism, all tenants have
+the same quota values, such as the number of resources that a
+tenant can create.
+
+The quota value is defined in the OpenStack Networking
+``neutron.conf`` configuration file. To disable quotas for
+a specific resource, such as network, subnet,
+or port, remove the corresponding item from ``quota_items``.
+This example shows the default quota values:
+
+..
code-block:: ini
+
+   [quotas]
+   # resource name(s) that are supported in quota features
+   quota_items = network,subnet,port
+
+   # number of networks allowed per tenant, and minus means unlimited
+   quota_network = 10
+
+   # number of subnets allowed per tenant, and minus means unlimited
+   quota_subnet = 10
+
+   # number of ports allowed per tenant, and minus means unlimited
+   quota_port = 50
+
+   # default driver to use for quota checks
+   quota_driver = neutron.quota.ConfDriver
+
+OpenStack Networking also supports quotas for L3 resources:
+router and floating IP. Add these lines to the
+``quotas`` section in the ``neutron.conf`` file:
+
+.. code-block:: ini
+
+   [quotas]
+   # number of routers allowed per tenant, and minus means unlimited
+   quota_router = 10
+
+   # number of floating IPs allowed per tenant, and minus means unlimited
+   quota_floatingip = 50
+
+.. note::
+
+   The ``quota_items`` option does not affect these quotas.
+
+OpenStack Networking also supports quotas for security group
+resources: number of security groups and the number of rules for
+each security group. Add these lines to the
+``quotas`` section in the ``neutron.conf`` file:
+
+.. code-block:: ini
+
+   [quotas]
+   # number of security groups per tenant, and minus means unlimited
+   quota_security_group = 10
+
+   # number of security rules allowed per tenant, and minus means unlimited
+   quota_security_group_rule = 100
+
+.. note::
+
+   The ``quota_items`` option does not affect these quotas.
+
+.. _cfg_quotas_per_tenant:
+
+Configure per-tenant quotas
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+OpenStack Networking also supports per-tenant quota limits through
+the quota extension API.
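Whichever quota driver is used, the check itself has simple semantics: a fixed cap per resource, with a negative value meaning unlimited. The following sketch illustrates that behavior; it is not the actual neutron code, and the function name is hypothetical:

```python
# Sketch of conf-driven quota checking: all tenants share the same
# configured limits, and a negative limit means unlimited.
LIMITS = {"network": 10, "subnet": 10, "port": 50, "floatingip": -1}


def check_quota(resource, current, requested=1, limits=LIMITS):
    """Raise if creating `requested` more resources would exceed the cap."""
    limit = limits.get(resource, -1)
    if limit < 0:
        return  # minus means unlimited
    if current + requested > limit:
        raise RuntimeError("Quota exceeded for resources: [%r]" % resource)


check_quota("network", 9)        # creating the 10th network is allowed
check_quota("floatingip", 1000)  # unlimited resources always pass
try:
    check_quota("network", 10)   # an 11th network exceeds the limit of 10
except RuntimeError as exc:
    print(exc)
```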
+
+Use these commands to manage per-tenant quotas:
+
+neutron quota-delete
+   Deletes defined quotas for a specified tenant
+
+neutron quota-list
+   Lists defined quotas for all tenants
+
+neutron quota-show
+   Shows quotas for a specified tenant
+
+neutron quota-update
+   Updates quotas for a specified tenant
+
+Only users with the ``admin`` role can change a quota value. By default,
+the same set of quotas is enforced for all tenants, so no
+:command:`quota-create` command exists.
+
+#. Configure Networking to show per-tenant quotas.
+
+   Set the ``quota_driver`` option in the ``neutron.conf`` file.
+
+   .. code-block:: ini
+
+      quota_driver = neutron.db.quota_db.DbQuotaDriver
+
+   When you set this option, the output for Networking commands shows ``quotas``.
+
+#. List Networking extensions.
+
+   To list the Networking extensions, run this command:
+
+   .. code-block:: console
+
+      $ neutron ext-list -c alias -c name
+
+   The command shows the ``quotas`` extension, which provides
+   per-tenant quota management support.
+
+   .. code-block:: console
+
+      +-----------------+--------------------------+
+      | alias           | name                     |
+      +-----------------+--------------------------+
+      | agent_scheduler | Agent Schedulers         |
+      | security-group  | security-group           |
+      | binding         | Port Binding             |
+      | quotas          | Quota management support |
+      | agent           | agent                    |
+      | provider        | Provider Network         |
+      | router          | Neutron L3 Router        |
+      | lbaas           | LoadBalancing service    |
+      | extraroute      | Neutron Extra Route      |
+      +-----------------+--------------------------+
+
+#. Show information for the quotas extension.
+
+   To show information for the ``quotas`` extension, run this command:
+
+   ..
code-block:: console + + $ neutron ext-show quotas + +-------------+------------------------------------------------------------+ + | Field | Value | + +-------------+------------------------------------------------------------+ + | alias | quotas | + | description | Expose functions for quotas management per tenant | + | links | | + | name | Quota management support | + | namespace | http://docs.openstack.org/network/ext/quotas-sets/api/v2.0 | + | updated | 2012-07-29T10:00:00-00:00 | + +-------------+------------------------------------------------------------+ + + .. note:: + + Only some plug-ins support per-tenant quotas. + Specifically, Open vSwitch, Linux Bridge, and VMware NSX + support them, but new versions of other plug-ins might + bring additional functionality. See the documentation for + each plug-in. + +#. List tenants who have per-tenant quota support. + + The :command:`quota-list` command lists tenants for which the per-tenant + quota is enabled. The command does not list tenants with default + quota support. You must be an administrative user to run this command: + + .. code-block:: console + + $ neutron quota-list + +------------+---------+------+--------+--------+----------------------------------+ + | floatingip | network | port | router | subnet | tenant_id | + +------------+---------+------+--------+--------+----------------------------------+ + | 20 | 5 | 20 | 10 | 5 | 6f88036c45344d9999a1f971e4882723 | + | 25 | 10 | 30 | 10 | 10 | bff5c9455ee24231b5bc713c1b96d422 | + +------------+---------+------+--------+--------+----------------------------------+ + +#. Show per-tenant quota values. + + The :command:`quota-show` command reports the current + set of quota limits for the specified tenant. + Non-administrative users can run this command without the + :option:`--tenant_id` parameter. If per-tenant quota limits are + not enabled for the tenant, the command shows the default + set of quotas. + + .. 
code-block:: console
+
+      $ neutron quota-show --tenant_id 6f88036c45344d9999a1f971e4882723
+      +------------+-------+
+      | Field      | Value |
+      +------------+-------+
+      | floatingip | 20    |
+      | network    | 5     |
+      | port       | 20    |
+      | router     | 10    |
+      | subnet     | 5     |
+      +------------+-------+
+
+   The following example shows the output for a
+   non-administrative user.
+
+   .. code-block:: console
+
+      $ neutron quota-show
+      +------------+-------+
+      | Field      | Value |
+      +------------+-------+
+      | floatingip | 20    |
+      | network    | 5     |
+      | port       | 20    |
+      | router     | 10    |
+      | subnet     | 5     |
+      +------------+-------+
+
+#. Update quota values for a specified tenant.
+
+   Use the :command:`quota-update` command to
+   update a quota for a specified tenant.
+
+   .. code-block:: console
+
+      $ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 --network 5
+      +------------+-------+
+      | Field      | Value |
+      +------------+-------+
+      | floatingip | 50    |
+      | network    | 5     |
+      | port       | 50    |
+      | router     | 10    |
+      | subnet     | 10    |
+      +------------+-------+
+
+   You can update quotas for multiple resources through one
+   command.
+
+   .. code-block:: console
+
+      $ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 --subnet 5 --port 20
+      +------------+-------+
+      | Field      | Value |
+      +------------+-------+
+      | floatingip | 50    |
+      | network    | 5     |
+      | port       | 20    |
+      | router     | 10    |
+      | subnet     | 5     |
+      +------------+-------+
+
+   To update the limits for an L3 resource, such as a router
+   or floating IP, you must define new values for the quotas
+   after the ``--`` directive.
+
+   This example updates the limit on the number of floating
+   IPs for the specified tenant.
+
+   ..
code-block:: console + + $ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 -- --floatingip 20 + +------------+-------+ + | Field | Value | + +------------+-------+ + | floatingip | 20 | + | network | 5 | + | port | 20 | + | router | 10 | + | subnet | 5 | + +------------+-------+ + + You can update the limits of multiple resources by + including L2 resources and L3 resource through one + command: + + .. code-block:: console + + $ neutron quota-update --tenant_id 6f88036c45344d9999a1f971e4882723 + --network 3 --subnet 3 --port 3 -- --floatingip 3 --router 3 + +------------+-------+ + | Field | Value | + +------------+-------+ + | floatingip | 3 | + | network | 3 | + | port | 3 | + | router | 3 | + | subnet | 3 | + +------------+-------+ + +#. Delete per-tenant quota values. + + To clear per-tenant quota limits, use the + :command:`quota-delete` command. + + .. code-block:: console + + $ neutron quota-delete --tenant_id 6f88036c45344d9999a1f971e4882723 + Deleted quota: 6f88036c45344d9999a1f971e4882723 + + After you run this command, you can see that quota + values for the tenant are reset to the default values. + + .. code-block:: console + + $ neutron quota-show --tenant_id 6f88036c45344d9999a1f971e4882723 + +------------+-------+ + | Field | Value | + +------------+-------+ + | floatingip | 50 | + | network | 10 | + | port | 50 | + | router | 10 | + | subnet | 10 | + +------------+-------+ diff --git a/doc/admin-guide-cloud/source/cli_nova_evacuate.rst b/doc/admin-guide-cloud/source/cli_nova_evacuate.rst new file mode 100644 index 0000000000..6532dbfad7 --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_nova_evacuate.rst @@ -0,0 +1,51 @@ +================== +Evacuate instances +================== + +If a hardware malfunction or other error causes a cloud compute node to fail, +you can evacuate instances to make them available again. You can optionally +include the target host on the :command:`evacuate` command. 
If you omit the +host, the scheduler chooses the target host. + +To preserve user data on the server disk, configure shared storage on the +target host. When you evacuate the instance, Compute detects whether shared +storage is available on the target host. Also, you must validate that the +current VM host is not operational. Otherwise, the evacuation fails. + +#. To find a host for the evacuated instance, list all hosts: + + .. code-block:: console + + $ nova host-list + +#. Evacuate the instance. You can use the :option:`--password PWD` option + to pass the instance password to the command. If you do not specify a + password, the command generates and prints one after it finishes + successfully. The following command evacuates a server from a failed host + to HOST_B. + + .. code-block:: console + + $ nova evacuate EVACUATED_SERVER_NAME HOST_B + + The command rebuilds the instance from the original image or volume and + returns a password. The command preserves the original configuration, which + includes the instance ID, name, uid, IP address, and so on. + + .. code-block:: console + + +-----------+--------------+ + | Property | Value | + +-----------+--------------+ + | adminPass | kRAJpErnT4xZ | + +-----------+--------------+ + +#. To preserve the user disk data on the evacuated server, deploy Compute + with a shared file system. To configure your system, see + `Configure migrations `_ + in the `OpenStack Cloud Administrator Guide`. The + following example does not change the password. + + .. 
code-block:: console
+
+   $ nova evacuate EVACUATED_SERVER_NAME HOST_B --on-shared-storage
diff --git a/doc/admin-guide-cloud/source/cli_nova_manage_projects_security.rst b/doc/admin-guide-cloud/source/cli_nova_manage_projects_security.rst
new file mode 100644
index 0000000000..065d05c997
--- /dev/null
+++ b/doc/admin-guide-cloud/source/cli_nova_manage_projects_security.rst
@@ -0,0 +1,206 @@
+=======================
+Manage project security
+=======================
+
+Security groups are sets of IP filter rules that are applied to all
+project instances and define networking access to the instance. Group
+rules are project specific; project members can edit the default rules
+for their group and add new rule sets.
+
+All projects have a ``default`` security group, which is applied to any
+instance that has no other defined security group. Unless you change the
+default, this security group denies all incoming traffic and allows only
+outgoing traffic from your instance.
+
+You can use the ``allow_same_net_traffic`` option in the
+``/etc/nova/nova.conf`` file to globally control whether the rules apply
+to hosts which share a network.
+
+If set to:
+
+- ``True`` (default), hosts on the same subnet are not filtered and are
+  allowed to pass all types of traffic between them. On a flat network,
+  this allows all instances from all projects unfiltered communication.
+  With VLAN networking, this allows access between instances within the
+  same project. You can also simulate this setting by configuring the
+  default security group to allow all traffic from the subnet.
+
+- ``False``, security groups are enforced for all connections.
+
+Additionally, the maximum number of rules per security group is
+controlled by the ``security_group_rules`` quota, and the number of allowed
+security groups per project is controlled by the ``security_groups``
+quota (see the `Manage quotas `_
+section).
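The ingress behavior described above (deny incoming traffic unless a rule allows it, with an optional same-subnet bypass) can be sketched as follows. This is an illustration of the semantics, not Compute's implementation; rules are modeled as (protocol, from_port, to_port, cidr) tuples and the function name is hypothetical:

```python
import ipaddress


def ingress_allowed(rules, proto, port, src_ip,
                    subnet=None, allow_same_net_traffic=True):
    """Return True if incoming traffic is permitted by the rule set."""
    # With allow_same_net_traffic=True, hosts on the same subnet
    # are not filtered at all.
    if allow_same_net_traffic and subnet is not None:
        if ipaddress.ip_address(src_ip) in ipaddress.ip_network(subnet):
            return True
    # Otherwise the traffic must match an allow rule; the default is deny.
    for r_proto, r_from, r_to, r_cidr in rules:
        if (r_proto == proto and r_from <= port <= r_to and
                ipaddress.ip_address(src_ip) in ipaddress.ip_network(r_cidr)):
            return True
    return False


web_rules = [("tcp", 80, 80, "0.0.0.0/0"), ("tcp", 443, 443, "0.0.0.0/0")]
print(ingress_allowed(web_rules, "tcp", 80, "203.0.113.5"))  # True
print(ingress_allowed(web_rules, "tcp", 22, "203.0.113.5"))  # False
print(ingress_allowed([], "tcp", 22, "10.0.0.7",
                      subnet="10.0.0.0/24"))                 # True
```

The sections below show how to build such rule sets with the ``nova secgroup-*`` commands.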
+
+List and view current security groups
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+From the command line, you can get a list of security groups for the
+project, using the :command:`nova` command:
+
+#. Ensure your system variables are set for the user and tenant for
+   which you are checking security group rules. For example:
+
+   .. code-block:: console
+
+      export OS_USERNAME=demo00
+      export OS_TENANT_NAME=tenant01
+
+#. List the security groups, as follows:
+
+   .. code-block:: console
+
+      $ nova secgroup-list
+      +---------+-------------+
+      | Name    | Description |
+      +---------+-------------+
+      | default | default     |
+      | open    | all ports   |
+      +---------+-------------+
+
+#. View the details of a group, as follows:
+
+   .. code-block:: console
+
+      $ nova secgroup-list-rules groupName
+
+   For example:
+
+   .. code-block:: console
+
+      $ nova secgroup-list-rules open
+      +-------------+-----------+---------+-----------+--------------+
+      | IP Protocol | From Port | To Port | IP Range  | Source Group |
+      +-------------+-----------+---------+-----------+--------------+
+      | icmp        | -1        | 255     | 0.0.0.0/0 |              |
+      | tcp         | 1         | 65535   | 0.0.0.0/0 |              |
+      | udp         | 1         | 65535   | 0.0.0.0/0 |              |
+      +-------------+-----------+---------+-----------+--------------+
+
+   These rules are all allow rules, because the default is deny. The
+   first column is the IP protocol (one of icmp, tcp, or udp). The
+   second and third columns specify the affected port range. The fourth
+   column specifies the IP range in CIDR format. This example shows
+   the full port range for all protocols allowed from all IPs.
+
+Create a security group
+~~~~~~~~~~~~~~~~~~~~~~~
+
+When adding a new security group, you should pick a descriptive but
+brief name. This name shows up in brief descriptions of the instances
+that use it, where the longer description field often does not. For
+example, seeing that an instance is using security group "http" is much
+easier to understand than "bobs\_group" or "secgrp1".
+
+#.
Ensure your system variables are set for the user and tenant for + which you are creating security group rules. + +#. Add the new security group, as follows: + + .. code-block:: console + + $ nova secgroup-create GroupName Description + + For example: + + .. code-block:: console + + $ nova secgroup-create global_http "Allows Web traffic anywhere on the Internet." + +--------------------------------------+-------------+----------------------------------------------+ + | Id | Name | Description | + +--------------------------------------+-------------+----------------------------------------------+ + | 1578a08c-5139-4f3e-9012-86bd9dd9f23b | global_http | Allows Web traffic anywhere on the Internet. | + +--------------------------------------+-------------+----------------------------------------------+ + +#. Add a new group rule, as follows: + + .. code-block:: console + + $ nova secgroup-add-rule secGroupName ip-protocol from-port to-port CIDR + + The arguments are positional, and the ``from-port`` and ``to-port`` + arguments specify the local port range connections are allowed to + access, not the source and destination ports of the connection. For + example: + + .. code-block:: console + + $ nova secgroup-add-rule global_http tcp 80 80 0.0.0.0/0 + +-------------+-----------+---------+-----------+--------------+ + | IP Protocol | From Port | To Port | IP Range | Source Group | + +-------------+-----------+---------+-----------+--------------+ + | tcp | 80 | 80 | 0.0.0.0/0 | | + +-------------+-----------+---------+-----------+--------------+ + + You can create complex rule sets by creating additional rules. For + example, if you want to pass both HTTP and HTTPS traffic, run: + + .. 
code-block:: console + + $ nova secgroup-add-rule global_http tcp 443 443 0.0.0.0/0 + +-------------+-----------+---------+-----------+--------------+ + | IP Protocol | From Port | To Port | IP Range | Source Group | + +-------------+-----------+---------+-----------+--------------+ + | tcp | 443 | 443 | 0.0.0.0/0 | | + +-------------+-----------+---------+-----------+--------------+ + + Despite only outputting the newly added rule, this operation is + additive (both rules are created and enforced). + +#. View all rules for the new security group, as follows: + + .. code-block:: console + + $ nova secgroup-list-rules global_http + +-------------+-----------+---------+-----------+--------------+ + | IP Protocol | From Port | To Port | IP Range | Source Group | + +-------------+-----------+---------+-----------+--------------+ + | tcp | 80 | 80 | 0.0.0.0/0 | | + | tcp | 443 | 443 | 0.0.0.0/0 | | + +-------------+-----------+---------+-----------+--------------+ + +Delete a security group +~~~~~~~~~~~~~~~~~~~~~~~ + +#. Ensure your system variables are set for the user and tenant for + which you are deleting a security group. + +#. Delete the new security group, as follows: + + .. code-block:: console + + $ nova secgroup-delete GroupName + + For example: + + .. code-block:: console + + $ nova secgroup-delete global_http + +Create security group rules for a cluster of instances +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Source Groups are a special, dynamic way of defining the CIDR of allowed +sources. The user specifies a Source Group (Security Group name), and +all the user's other Instances using the specified Source Group are +selected dynamically. This alleviates the need for individual rules to +allow each new member of the cluster. + +#. Make sure to set the system variables for the user and tenant for + which you are creating a security group rule. + +#. Add a source group, as follows: + + .. 
code-block:: console + + $ nova secgroup-add-group-rule secGroupName source-group ip-protocol from-port to-port + + For example: + + .. code-block:: console + + $ nova secgroup-add-group-rule cluster global_http tcp 22 22 + + The ``cluster`` rule allows SSH access from any other instance that + uses the ``global_http`` group. diff --git a/doc/admin-guide-cloud/source/cli_nova_manage_services.rst b/doc/admin-guide-cloud/source/cli_nova_manage_services.rst new file mode 100644 index 0000000000..d7c0b20fab --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_nova_manage_services.rst @@ -0,0 +1,77 @@ +======================= +Manage Compute services +======================= + +You can enable and disable Compute services. The following +examples disable and enable the ``nova-compute`` service. + + +#. List the Compute services: + + .. code-block:: console + + $ nova service-list + +------------------+----------+----------+---------+-------+----------------------------+-----------------+ + | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | + +------------------+----------+----------+---------+-------+----------------------------+-----------------+ + | nova-conductor | devstack | internal | enabled | up | 2013-10-16T00:56:08.000000 | None | + | nova-cert | devstack | internal | enabled | up | 2013-10-16T00:56:09.000000 | None | + | nova-compute | devstack | nova | enabled | up | 2013-10-16T00:56:07.000000 | None | + | nova-network | devstack | internal | enabled | up | 2013-10-16T00:56:06.000000 | None | + | nova-scheduler | devstack | internal | enabled | up | 2013-10-16T00:56:04.000000 | None | + | nova-consoleauth | devstack | internal | enabled | up | 2013-10-16T00:56:07.000000 | None | + +------------------+----------+----------+---------+-------+----------------------------+-----------------+ + +#. Disable a nova service: + + .. 
code-block:: console + + $ nova service-disable devstack nova-compute --reason 'Trial log' + +----------+--------------+----------+-------------------+ + | Host | Binary | Status | Disabled Reason | + +----------+--------------+----------+-------------------+ + | devstack | nova-compute | disabled | Trial log | + +----------+--------------+----------+-------------------+ + +#. Check the service list: + + .. code-block:: console + + $ nova service-list + +------------------+----------+----------+---------+-------+----------------------------+------------------+ + | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | + +------------------+----------+----------+---------+-------+----------------------------+------------------+ + | nova-conductor | devstack | internal | enabled | up | 2013-10-16T00:56:48.000000 | None | + | nova-cert | devstack | internal | enabled | up | 2013-10-16T00:56:49.000000 | None | + | nova-compute | devstack | nova | disabled | up | 2013-10-16T00:56:47.000000 | Trial log | + | nova-network | devstack | internal | enabled | up | 2013-10-16T00:56:51.000000 | None | + | nova-scheduler | devstack | internal | enabled | up | 2013-10-16T00:56:44.000000 | None | + | nova-consoleauth | devstack | internal | enabled | up | 2013-10-16T00:56:47.000000 | None | + +------------------+----------+----------+---------+-------+----------------------------+------------------+ + +#. Enable the service: + + .. code-block:: console + + $ nova service-enable devstack nova-compute + +----------+--------------+---------+ + | Host | Binary | Status | + +----------+--------------+---------+ + | devstack | nova-compute | enabled | + +----------+--------------+---------+ + +#. Check the service list: + + ..
code-block:: console + + $ nova service-list + +------------------+----------+----------+---------+-------+----------------------------+-----------------+ + | Binary | Host | Zone | Status | State | Updated_at | Disabled Reason | + +------------------+----------+----------+---------+-------+----------------------------+-----------------+ + | nova-conductor | devstack | internal | enabled | up | 2013-10-16T00:57:08.000000 | None | + | nova-cert | devstack | internal | enabled | up | 2013-10-16T00:57:09.000000 | None | + | nova-compute | devstack | nova | enabled | up | 2013-10-16T00:57:07.000000 | None | + | nova-network | devstack | internal | enabled | up | 2013-10-16T00:57:11.000000 | None | + | nova-scheduler | devstack | internal | enabled | up | 2013-10-16T00:57:14.000000 | None | + | nova-consoleauth | devstack | internal | enabled | up | 2013-10-16T00:57:07.000000 | None | + +------------------+----------+----------+---------+-------+----------------------------+-----------------+ diff --git a/doc/admin-guide-cloud/source/cli_nova_migrate.rst b/doc/admin-guide-cloud/source/cli_nova_migrate.rst new file mode 100644 index 0000000000..4592db7816 --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_nova_migrate.rst @@ -0,0 +1,78 @@ +================================================= +Migrate a single instance to another compute host +================================================= + +When you want to move an instance from one compute host to another, +you can use the :command:`nova migrate` command. The scheduler chooses the +destination compute host according to the scheduler configuration. This +process does not assume that the instance has shared storage available on +the target host. + +#. To list the VMs you want to migrate, run: + + .. code-block:: console + + $ nova list + +#. After selecting a VM from the list, run this command where ``VM_ID`` + is set to the ID in the list returned in the previous step: + + .. code-block:: console + + $ nova show VM_ID + +#. 
Use the :command:`nova migrate` command. + + .. code-block:: console + + $ nova migrate VM_ID + +#. To migrate an instance and watch the status, use this example script: + + .. code-block:: bash + + #!/bin/bash + + # Print usage and exit + usage() { + echo "Usage: $0 VM_ID" + exit 1 + } + + [[ $# -eq 0 ]] && usage + + # Migrate the VM to an alternate hypervisor + echo -n "Migrating instance to alternate host" + VM_ID=$1 + nova migrate "$VM_ID" + VM_OUTPUT=$(nova show "$VM_ID") + VM_STATUS=$(echo "$VM_OUTPUT" | grep status | awk '{print $4}') + while [[ "$VM_STATUS" != "VERIFY_RESIZE" ]]; do + echo -n "." + sleep 2 + VM_OUTPUT=$(nova show "$VM_ID") + VM_STATUS=$(echo "$VM_OUTPUT" | grep status | awk '{print $4}') + done + nova resize-confirm "$VM_ID" + echo " instance migrated and resized." + echo + + # Show the details for the VM + echo "Updated instance details:" + nova show "$VM_ID" + + # Pause to allow users to examine VM details + read -p "Pausing, press Enter to exit." + +.. note:: + + If the command fails with the following error, you are either + running it with the wrong credentials, such as a non-admin user, + or the ``policy.json`` file prevents migration for your user: + + ``ERROR (Forbidden): Policy doesn't allow compute_extension:admin_actions:migrate + to be performed. (HTTP 403)`` + +The instance is rebooted on a new host, but preserves its configuration, +including its ID, name, any metadata, IP address, and other properties. diff --git a/doc/admin-guide-cloud/source/cli_nova_numa_libvirt.rst b/doc/admin-guide-cloud/source/cli_nova_numa_libvirt.rst new file mode 100644 index 0000000000..1f7501c74a --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_nova_numa_libvirt.rst @@ -0,0 +1,24 @@ +============================================= +Consider NUMA topology when booting instances +============================================= + +NUMA topology can exist on both the physical hardware of the host and the +virtual hardware of the instance. 
OpenStack Compute uses libvirt to tune +instances to take advantage of NUMA topologies. The libvirt driver boot +process looks at the NUMA topology field of both the instance and the host it +is being booted on, and uses that information to generate an appropriate +configuration. + +If the host is NUMA capable, but the instance has not requested a NUMA +topology, Compute attempts to pack the instance into a single cell. +If this packing attempt fails, Compute does not retry with another +topology. + +If the host is NUMA capable, and the instance has requested a specific NUMA +topology, Compute tries to pin the vCPUs of different NUMA cells +on the instance to the corresponding NUMA cells on the host. It also +exposes the NUMA topology of the instance to the guest OS. + +If you want Compute to pin a particular vCPU as part of this process, +set the ``vcpu_pin_set`` parameter in the ``nova.conf`` configuration +file. For more information about the ``vcpu_pin_set`` parameter, see the +Configuration Reference Guide. diff --git a/doc/admin-guide-cloud/source/cli_nova_specify_host.rst b/doc/admin-guide-cloud/source/cli_nova_specify_host.rst new file mode 100644 index 0000000000..4899d8734e --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_nova_specify_host.rst @@ -0,0 +1,36 @@ +========================================= +Select hosts where instances are launched +========================================= + +With the appropriate permissions, you can select the host on which +instances are launched, and control which roles can boot instances +on that host. + +#. To select the host where instances are launched, use + the :option:`--availability-zone ZONE:HOST` parameter on the + :command:`nova boot` command. + + For example: + + .. code-block:: console + + $ nova boot --image IMAGE --flavor m1.tiny --key_name test --availability-zone nova:server2 + +#. To specify which roles can launch an instance on a + specified host, enable the ``create:forced_host`` option in + the ``policy.json`` file. 
By default, this option is + enabled for only the admin role. + +#. To view the list of valid compute hosts, use the + :command:`nova hypervisor-list` command. + + .. code-block:: console + + $ nova hypervisor-list + +----+---------------------+ + | ID | Hypervisor hostname | + +----+---------------------+ + | 1 | server2 | + | 2 | server3 | + | 3 | server4 | + +----+---------------------+ diff --git a/doc/admin-guide-cloud/source/cli_set_compute_quotas.rst b/doc/admin-guide-cloud/source/cli_set_compute_quotas.rst new file mode 100644 index 0000000000..973173c6f9 --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_set_compute_quotas.rst @@ -0,0 +1,299 @@ +============================= +Manage Compute service quotas +============================= + +As an administrative user, you can use the :command:`nova quota-*` +commands, which are provided by the ``python-novaclient`` +package, to update the Compute service quotas for a specific tenant or +tenant user, as well as update the quota defaults for a new tenant. + +**Compute quota descriptions** + +.. list-table:: + :header-rows: 1 + :widths: 10 40 + + * - Quota name + - Description + * - cores + - Number of instance cores (VCPUs) allowed per tenant. + * - fixed-ips + - Number of fixed IP addresses allowed per tenant. This number + must be equal to or greater than the number of allowed + instances. + * - floating-ips + - Number of floating IP addresses allowed per tenant. + * - injected-file-content-bytes + - Number of content bytes allowed per injected file. + * - injected-file-path-bytes + - Length of injected file path. + * - injected-files + - Number of injected files allowed per tenant. + * - instances + - Number of instances allowed per tenant. + * - key-pairs + - Number of key pairs allowed per user. + * - metadata-items + - Number of metadata items allowed per instance. + * - ram + - Megabytes of instance ram allowed per tenant. + * - security-groups + - Number of security groups per tenant. 
+ * - security-group-rules + - Number of rules per security group. + +View and update Compute quotas for a tenant (project) +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To view and update default quota values +--------------------------------------- +#. List all default quotas for all tenants: + + .. code-block:: console + + $ nova quota-defaults + + For example: + + .. code-block:: console + + $ nova quota-defaults + +-----------------------------+-------+ + | Quota | Limit | + +-----------------------------+-------+ + | instances | 10 | + | cores | 20 | + | ram | 51200 | + | floating_ips | 10 | + | fixed_ips | -1 | + | metadata_items | 128 | + | injected_files | 5 | + | injected_file_content_bytes | 10240 | + | injected_file_path_bytes | 255 | + | key_pairs | 100 | + | security_groups | 10 | + | security_group_rules | 20 | + +-----------------------------+-------+ + +#. Update a default value for a new tenant. + + .. code-block:: console + + $ nova quota-class-update --KEY VALUE default + + For example: + + .. code-block:: console + + $ nova quota-class-update --instances 15 default + +To view quota values for an existing tenant (project) +----------------------------------------------------- + +#. Place the tenant ID in a usable variable. + + .. code-block:: console + + $ tenant=$(openstack project show -f value -c id TENANT_NAME) + +#. List the currently set quota values for a tenant. + + .. code-block:: console + + $ nova quota-show --tenant $tenant + + For example: + + .. 
code-block:: console + + $ nova quota-show --tenant $tenant + +-----------------------------+-------+ + | Quota | Limit | + +-----------------------------+-------+ + | instances | 10 | + | cores | 20 | + | ram | 51200 | + | floating_ips | 10 | + | fixed_ips | -1 | + | metadata_items | 128 | + | injected_files | 5 | + | injected_file_content_bytes | 10240 | + | injected_file_path_bytes | 255 | + | key_pairs | 100 | + | security_groups | 10 | + | security_group_rules | 20 | + +-----------------------------+-------+ + +To update quota values for an existing tenant (project) +------------------------------------------------------- + +#. Obtain the tenant ID. + + .. code-block:: console + + $ tenant=$(openstack project show -f value -c id TENANT_NAME) + +#. Update a particular quota value. + + .. code-block:: console + + $ nova quota-update --QUOTA_NAME QUOTA_VALUE TENANT_ID + + For example: + + .. code-block:: console + + $ nova quota-update --floating-ips 20 $tenant + $ nova quota-show --tenant $tenant + +-----------------------------+-------+ + | Quota | Limit | + +-----------------------------+-------+ + | instances | 10 | + | cores | 20 | + | ram | 51200 | + | floating_ips | 20 | + | fixed_ips | -1 | + | metadata_items | 128 | + | injected_files | 5 | + | injected_file_content_bytes | 10240 | + | injected_file_path_bytes | 255 | + | key_pairs | 100 | + | security_groups | 10 | + | security_group_rules | 20 | + +-----------------------------+-------+ + + .. note:: + + To view a list of options for the :command:`quota-update` command, run: + + .. code-block:: console + + $ nova help quota-update + +View and update Compute quotas for a tenant user +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +To view quota values for a tenant user +-------------------------------------- + +#. Place the user ID in a usable variable. + + .. code-block:: console + + $ tenantUser=$(openstack user show -f value -c id USER_NAME) + +#. 
Place the user's tenant ID in a usable variable, as follows: + + .. code-block:: console + + $ tenant=$(openstack project show -f value -c id TENANT_NAME) + +#. List the currently set quota values for a tenant user. + + .. code-block:: console + + $ nova quota-show --user $tenantUser --tenant $tenant + + For example: + + .. code-block:: console + + $ nova quota-show --user $tenantUser --tenant $tenant + +-----------------------------+-------+ + | Quota | Limit | + +-----------------------------+-------+ + | instances | 10 | + | cores | 20 | + | ram | 51200 | + | floating_ips | 20 | + | fixed_ips | -1 | + | metadata_items | 128 | + | injected_files | 5 | + | injected_file_content_bytes | 10240 | + | injected_file_path_bytes | 255 | + | key_pairs | 100 | + | security_groups | 10 | + | security_group_rules | 20 | + +-----------------------------+-------+ + +To update quota values for a tenant user +---------------------------------------- + +#. Place the user ID in a usable variable. + + .. code-block:: console + + $ tenantUser=$(openstack user show -f value -c id USER_NAME) + +#. Place the user's tenant ID in a usable variable, as follows: + + .. code-block:: console + + $ tenant=$(openstack project show -f value -c id TENANT_NAME) + +#. Update a particular quota value, as follows: + + .. code-block:: console + + $ nova quota-update --user $tenantUser --QUOTA_NAME QUOTA_VALUE $tenant + + For example: + + .. 
code-block:: console + + $ nova quota-update --user $tenantUser --floating-ips 12 $tenant + $ nova quota-show --user $tenantUser --tenant $tenant + +-----------------------------+-------+ + | Quota | Limit | + +-----------------------------+-------+ + | instances | 10 | + | cores | 20 | + | ram | 51200 | + | floating_ips | 12 | + | fixed_ips | -1 | + | metadata_items | 128 | + | injected_files | 5 | + | injected_file_content_bytes | 10240 | + | injected_file_path_bytes | 255 | + | key_pairs | 100 | + | security_groups | 10 | + | security_group_rules | 20 | + +-----------------------------+-------+ + + .. note:: + + To view a list of options for the :command:`quota-update` command, run: + + .. code-block:: console + + $ nova help quota-update + +To display the current quota usage for a tenant user +---------------------------------------------------- + +Use :command:`nova absolute-limits` to get a list of the +current quota values and the current quota usage: + +.. code-block:: console + + $ nova absolute-limits --tenant TENANT_NAME + +-------------------------+-------+ + | Name | Value | + +-------------------------+-------+ + | maxServerMeta | 128 | + | maxPersonality | 5 | + | maxImageMeta | 128 | + | maxPersonalitySize | 10240 | + | maxTotalRAMSize | 51200 | + | maxSecurityGroupRules | 20 | + | maxTotalKeypairs | 100 | + | totalRAMUsed | 0 | + | maxSecurityGroups | 10 | + | totalFloatingIpsUsed | 0 | + | totalInstancesUsed | 0 | + | totalSecurityGroupsUsed | 0 | + | maxTotalFloatingIps | 10 | + | maxTotalInstances | 10 | + | totalCoresUsed | 0 | + | maxTotalCores | 20 | + +-------------------------+-------+ diff --git a/doc/admin-guide-cloud/source/cli_set_quotas.rst b/doc/admin-guide-cloud/source/cli_set_quotas.rst new file mode 100644 index 0000000000..920ae3f782 --- /dev/null +++ b/doc/admin-guide-cloud/source/cli_set_quotas.rst @@ -0,0 +1,54 @@ +============= +Manage quotas +============= + +To prevent system capacities from being exhausted without 
+notification, you can set up quotas. Quotas are operational +limits. For example, the number of gigabytes allowed for each +tenant can be controlled so that cloud resources are optimized. +Quotas can be enforced at both the tenant (or project) +and the tenant-user level. + +Using the command-line interface, you can manage quotas for +the OpenStack Compute service, the OpenStack Block Storage service, +and the OpenStack Networking service. + +The cloud operator typically changes default values because a +tenant requires more than ten volumes or 1 TB on a compute +node. + +.. note:: + + To view all tenants (projects), run: + + .. code-block:: console + + $ openstack project list + +----------------------------------+----------+ + | ID | Name | + +----------------------------------+----------+ + | e66d97ac1b704897853412fc8450f7b9 | admin | + | bf4a37b885fe46bd86e999e50adad1d3 | services | + | 21bd1c7c95234fd28f589b60903606fa | tenant01 | + | f599c5cd1cba4125ae3d7caed08e288c | tenant02 | + +----------------------------------+----------+ + + To display all current users for a tenant, run: + + .. code-block:: console + + $ openstack user list --project PROJECT_NAME + +----------------------------------+--------+ + | ID | Name | + +----------------------------------+--------+ + | ea30aa434ab24a139b0e85125ec8a217 | demo00 | + | 4f8113c1d838467cad0c2f337b3dfded | demo01 | + +----------------------------------+--------+ + + +.. toctree:: + :maxdepth: 2 + + cli_set_compute_quotas.rst + cli_cinder_quotas.rst + cli_networking_advanced_quotas.rst diff --git a/doc/admin-guide-cloud/source/index.rst b/doc/admin-guide-cloud/source/index.rst index bbd7672492..0b05c29ffa 100644 --- a/doc/admin-guide-cloud/source/index.rst +++ b/doc/admin-guide-cloud/source/index.rst @@ -30,6 +30,7 @@ Contents database.rst baremetal.rst orchestration.rst + cli.rst cross_project.rst common/app_support.rst common/glossary.rst
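The quota workflow that this chapter documents (capture the project ID into a variable, update a quota, then verify the new limits) can be sketched as a short shell script. In the sketch below, the ``openstack`` and ``nova`` commands are stubbed with shell functions so that the control flow can be exercised without a running cloud; the project name ``tenant01`` and the returned ID are illustrative values, not output from a real deployment.

```shell
#!/bin/sh
# Sketch of the tenant quota-update workflow from this chapter.
# NOTE: `openstack` and `nova` are stubbed below so the script runs
# offline; against a real cloud you would delete both function
# definitions and use the actual clients.

openstack() {
    # Stub: pretend `openstack project show -f value -c id NAME`
    # returned a fixed project ID.
    echo "bf4a37b885fe46bd86e999e50adad1d3"
}

nova() {
    # Stub: echo the call instead of contacting the Compute API.
    echo "nova $*"
}

# 1. Place the tenant ID in a usable variable.
tenant=$(openstack project show -f value -c id tenant01)

# 2. Update a particular quota value for that tenant.
update=$(nova quota-update --floating-ips 20 "$tenant")

# 3. Verify the new limits.
verify=$(nova quota-show --tenant "$tenant")

echo "$update"
echo "$verify"
```

With a real cloud, the same three steps are exactly the commands shown earlier in this chapter; the stubs only exist to make the sequencing visible end to end.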