Merge "doc: Remove references to dead VMWare NSX extensions"
commit 96509b2abc
@@ -407,219 +407,6 @@ Plug-in specific extensions
Each vendor can choose to implement additional API extensions to the
core API. This section describes the extensions for each plug-in.

VMware NSX extensions
---------------------

These sections explain NSX plug-in extensions.

VMware NSX QoS extension
^^^^^^^^^^^^^^^^^^^^^^^^

The VMware NSX QoS extension rate-limits network ports to guarantee a
specific amount of bandwidth for each port. This extension, by default,
is only accessible by a project with an admin role but is configurable
through the ``policy.yaml`` file. To use this extension, create a queue
and specify the min/max bandwidth rates (kbps) and optionally set the
QoS Marking and DSCP value (if your network fabric uses these values to
make forwarding decisions). Once created, you can associate a queue with
a network. Then, when ports are created on that network, they are
automatically associated with the specific queue size that was
associated with the network. Because one queue size for every port on a
network might not be optimal, a scaling factor from the nova flavor
``rxtx_factor`` is passed in from Compute when the port is created to
scale the queue.
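
For example, the queue attached to a port scales with the flavor of the
instance that uses it. A sketch with an illustrative flavor name; the
``--rxtx-factor`` option of ``openstack flavor create`` sets the
scaling factor passed in from Compute:

.. code-block:: console

   $ openstack flavor create --ram 2048 --disk 20 --vcpus 2 \
     --rxtx-factor 2.0 m1.qos

With this flavor, a port on a network whose queue allows a maximum of
1,000 kbps is associated with a queue scaled up to 2,000 kbps.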

Lastly, if you want to set a specific baseline QoS policy for the
amount of bandwidth a single port can use (unless a network queue is
specified with the network a port is created on), you can create a
default queue in Networking. Ports are then associated with a queue of
that size times the rxtx scaling factor. Note that after a network or
default queue is specified, queues are added to ports that are
subsequently created, but not to existing ports.

Basic VMware NSX QoS operations
'''''''''''''''''''''''''''''''

This table shows example neutron commands that enable you to complete
basic queue operations:

.. list-table:: **Basic VMware NSX QoS operations**
   :widths: 30 50
   :header-rows: 1

   * - Operation
     - Command
   * - Creates QoS queue (admin-only).
     - .. code-block:: console

          $ neutron queue-create --min 10 --max 1000 myqueue
   * - Associates a queue with a network.
     - .. code-block:: console

          $ neutron net-create network --queue_id QUEUE_ID
   * - Creates a default system queue.
     - .. code-block:: console

          $ neutron queue-create --default True --min 10 --max 2000 default
   * - Lists QoS queues.
     - .. code-block:: console

          $ neutron queue-list
   * - Deletes a QoS queue.
     - .. code-block:: console

          $ neutron queue-delete QUEUE_ID_OR_NAME
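
For example, the following sequence creates a queue, attaches it to a
new network, and then creates a port on that network so that the port
inherits the queue; the names are taken from the table above, and
``QUEUE_ID`` is the ID returned by the first command:

.. code-block:: console

   $ neutron queue-create --min 10 --max 1000 myqueue
   $ neutron net-create network --queue_id QUEUE_ID
   $ neutron port-create network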

VMware NSX provider networks extension
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Provider networks can be implemented in different ways by the underlying
NSX platform.

The *FLAT* and *VLAN* network types use bridged transport connectors.
These network types enable the attachment of a large number of ports. To
handle the increased scale, the NSX plug-in can back a single OpenStack
Network with a chain of NSX logical switches. You can specify the
maximum number of ports on each logical switch in this chain with the
``max_lp_per_bridged_ls`` parameter, which has a default value of 5,000.
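
For example, to cap each logical switch in the chain at a lower number
of ports, set the option in the NSX plug-in configuration file; the
``[nsx]`` section name shown here is an assumption about the plug-in's
configuration layout:

.. code-block:: ini

   # /etc/neutron/plugins/vmware/nsx.ini
   [nsx]
   # Maximum number of ports on each bridged logical switch
   max_lp_per_bridged_ls = 5000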

The recommended value for this parameter varies with the NSX version
running in the back-end, as shown in the following table.

**Recommended values for max_lp_per_bridged_ls**

+---------------+---------------------+
| NSX version   | Recommended Value   |
+===============+=====================+
| 2.x           | 64                  |
+---------------+---------------------+
| 3.0.x         | 5,000               |
+---------------+---------------------+
| 3.1.x         | 5,000               |
+---------------+---------------------+
| 3.2.x         | 10,000              |
+---------------+---------------------+

In addition to these network types, the NSX plug-in also supports a
special *l3_ext* network type, which maps external networks to specific
NSX gateway services as discussed in the next section.

VMware NSX L3 extension
^^^^^^^^^^^^^^^^^^^^^^^

NSX exposes its L3 capabilities through gateway services, which are
usually configured out of band from OpenStack. To use NSX with L3
capabilities, first create an L3 gateway service in the NSX Manager.
Next, in ``/etc/neutron/plugins/vmware/nsx.ini``, set
``default_l3_gw_service_uuid`` to this value. By default, routers are
mapped to this gateway service.
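
A minimal sketch of the relevant setting; the section name and the UUID
are placeholders for your deployment's values:

.. code-block:: ini

   # /etc/neutron/plugins/vmware/nsx.ini
   [DEFAULT]
   # UUID of the L3 gateway service created in the NSX Manager
   default_l3_gw_service_uuid = L3_GATEWAY_SERVICE_UUID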

VMware NSX L3 extension operations
''''''''''''''''''''''''''''''''''

Create an external network and map it to a specific NSX gateway service:

.. code-block:: console

   $ openstack network create public --external --provider-network-type l3_ext \
     --provider-physical-network L3_GATEWAY_SERVICE_UUID

Terminate traffic on a specific VLAN from an NSX gateway service:

.. code-block:: console

   $ openstack network create public --external --provider-network-type l3_ext \
     --provider-physical-network L3_GATEWAY_SERVICE_UUID --provider-segment VLAN_ID

Operational status synchronization in the VMware NSX plug-in
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Starting with the Havana release, the VMware NSX plug-in provides an
asynchronous mechanism for retrieving the operational status for neutron
resources from the NSX back-end; this applies to *network*, *port*, and
*router* resources.

The back-end is polled periodically and the status for every resource is
retrieved; then the status in the Networking database is updated only
for the resources for which a status change occurred. As operational
status is now retrieved asynchronously, performance for ``GET``
operations is consistently improved.

Data to retrieve from the back-end are divided in chunks in order to
avoid expensive API requests; this is achieved by leveraging the NSX
API's response paging capabilities. The minimum chunk size can be
specified using a configuration option; the actual chunk size is then
determined dynamically according to the total number of resources to
retrieve, the interval between two synchronization task runs, and the
minimum delay between two subsequent requests to the NSX back-end.
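
As a worked example with the default values described in the table
below: a ``state_sync_interval`` of 10 seconds and a
``min_sync_req_delay`` of 1 second yield roughly ten chunks per
synchronization run; with 10,000 resources to synchronize, each chunk
then grows to about 1,000 resources, above the 500-resource
``min_chunk_size``.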

The operational status synchronization can be tuned or disabled using
the configuration options reported in the following table; note,
however, that the default values work fine in most cases.

.. list-table:: **Configuration options for tuning operational status synchronization in the NSX plug-in**
   :widths: 10 10 10 10 35
   :header-rows: 1

   * - Option name
     - Group
     - Default value
     - Type and constraints
     - Notes
   * - ``state_sync_interval``
     - ``nsx_sync``
     - 10 seconds
     - Integer; no constraint.
     - Interval in seconds between two runs of the synchronization task. If the
       synchronization task takes more than ``state_sync_interval`` seconds to
       execute, a new instance of the task is started as soon as the other is
       completed. Setting the value for this option to 0 will disable the
       synchronization task.
   * - ``max_random_sync_delay``
     - ``nsx_sync``
     - 0 seconds
     - Integer. Must not exceed ``min_sync_req_delay``.
     - When different from zero, a random delay between 0 and
       ``max_random_sync_delay`` will be added before processing the next
       chunk.
   * - ``min_sync_req_delay``
     - ``nsx_sync``
     - 1 second
     - Integer. Must not exceed ``state_sync_interval``.
     - The value of this option can be tuned according to the observed
       load on the NSX controllers. Lower values will result in faster
       synchronization, but might increase the load on the controller cluster.
   * - ``min_chunk_size``
     - ``nsx_sync``
     - 500 resources
     - Integer; no constraint.
     - Minimum number of resources to retrieve from the back-end for each
       synchronization chunk. The expected number of synchronization chunks
       is given by the ratio between ``state_sync_interval`` and
       ``min_sync_req_delay``. The size of a chunk might increase if the
       total number of resources is such that more than ``min_chunk_size``
       resources must be fetched in one chunk with the current number of
       chunks.
   * - ``always_read_status``
     - ``nsx_sync``
     - False
     - Boolean; no constraint.
     - When this option is enabled, the operational status will always be
       retrieved from the NSX back-end at every ``GET`` request. In this
       case it is advisable to disable the synchronization task.

When running multiple OpenStack Networking server instances, the status
synchronization task should not run on every node; doing so sends
unnecessary traffic to the NSX back-end and performs unnecessary DB
operations. Set the ``state_sync_interval`` configuration option to a
non-zero value exclusively on a node designated for back-end status
synchronization.
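
For example, assuming the options live in the ``[nsx_sync]`` group of
the plug-in configuration file, a sketch for the designated node:

.. code-block:: ini

   # /etc/neutron/plugins/vmware/nsx.ini on the designated node
   [nsx_sync]
   state_sync_interval = 10
   min_sync_req_delay = 1
   min_chunk_size = 500

   # On every other node, disable the synchronization task instead:
   # state_sync_interval = 0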

The ``fields=status`` parameter in Networking API requests always
triggers an explicit query to the NSX back-end, even when you enable
asynchronous state synchronization. For example, ``GET
/v2.0/networks/NET_ID?fields=status&fields=name``.
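
A sketch of such a request with ``curl``; the endpoint host, port, and
token variable are placeholders for your deployment:

.. code-block:: console

   $ curl -s -H "X-Auth-Token: $OS_TOKEN" \
     "http://controller:9696/v2.0/networks/NET_ID?fields=status&fields=name"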

Big Switch plug-in extensions
-----------------------------