Merge "docs: Deindent code blocks"

Zuul
2023-05-11 12:56:21 +00:00
committed by Gerrit Code Review
10 changed files with 728 additions and 736 deletions

View File

@@ -65,7 +65,7 @@ python3-neutron-dynamic-routing packages). On top of that, "segments" and
"bgp" must be added to the list of plugins in service_plugins. For example
in neutron.conf:
.. code-block:: ini
[DEFAULT]
service_plugins=router,metering,qos,trunk,segments,bgp
@@ -89,7 +89,7 @@ associated to a dynamic-routing-agent (in our example, the dynamic-routing
agents run on compute 1 and 4). Finally, the peer is added to the BGP speaker,
so the speaker initiates a BGP session to the network equipment.
.. code-block:: console
$ # Create a BGP peer to represent the switch 1,
$ # which runs FRR on 10.1.0.253 with AS 64601
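$ # A possible invocation for this step; the peer and speaker names below
$ # are illustrative assumptions, not taken from this patch:
$ openstack bgp peer create --peer-ip 10.1.0.253 --remote-as 64601 rack1-switch-1
$ openstack bgp speaker add peer bgp-speaker-rack1 rack1-switch-1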
@@ -141,18 +141,17 @@ in each host, according to the rack names. On the compute or network nodes,
this is done in /etc/neutron/plugins/ml2/openvswitch_agent.ini using the
bridge_mappings directive:
.. code-block:: ini
[ovs]
bridge_mappings = physnet-rack1:br-ex
All of the physical networks created this way must be added to the
configuration of the neutron-server as well (i.e., this is used by both
neutron-api and neutron-rpc-server). For example, with 3 racks,
here's how /etc/neutron/plugins/ml2/ml2_conf.ini should look:
.. code-block:: ini
[ml2_type_flat]
flat_networks = physnet-rack1,physnet-rack2,physnet-rack3
@@ -160,7 +159,6 @@ here's how /etc/neutron/plugins/ml2/ml2_conf.ini should look like:
[ml2_type_vlan]
network_vlan_ranges = physnet-rack1,physnet-rack2,physnet-rack3
Once this is done, the provider network can be created, using physnet-rack1
as "physical network".
@@ -171,7 +169,7 @@ Setting-up the provider network
Everything that is in the provider network's scope will be advertised through
BGP. Here is how to create the address scope:
.. code-block:: console
$ # Create the address scope
$ openstack address scope create --share --ip-version 4 provider-addr-scope
@@ -179,7 +177,7 @@ BGP. Here is how to create the network scope:
Then, the network can be created using the physical network name set above:
.. code-block:: console
$ # Create the provider network that spans all racks
$ openstack network create --external --share \
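$ # A fuller invocation might look like the following sketch; the flat
$ # network type and the names used here are assumptions for illustration:
$ openstack network create --external --share \
      --provider-network-type flat \
      --provider-physical-network physnet-rack1 \
      provider-network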
@@ -192,7 +190,7 @@ Then, the network can be created using the physical network name set above:
This automatically creates a network AND a segment. By default, though, this
segment has no name, which isn't convenient. The name can be changed:
.. code-block:: console
$ # Get the network ID:
$ PROVIDER_NETWORK_ID=$(openstack network show provider-network \
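$ # The unnamed segment can then be renamed; the lookup and the name below
$ # are an illustrative sketch, not part of this change:
$ SEGMENT_ID=$(openstack network segment list \
      --network ${PROVIDER_NETWORK_ID} -f value -c ID)
$ openstack network segment set --name segment-rack1 ${SEGMENT_ID}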
@@ -213,7 +211,7 @@ Setting-up the 2nd segment
The 2nd segment, which will be attached to our provider network, is created
this way:
.. code-block:: console
$ # Create the 2nd segment, matching the 2nd rack name
$ openstack network segment create \
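$ # A complete command could look like the following sketch; the network
$ # type and names are assumptions:
$ openstack network segment create \
      --network provider-network \
      --physical-network physnet-rack2 \
      --network-type flat \
      segment-rack2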
@@ -232,7 +230,7 @@ network is in use in the machines. In order to use the address scope, subnet
pools must be used. Here is how to create the subnet pool with the two ranges
to use later when creating the subnets:
.. code-block:: console
$ # Create the provider subnet pool which includes all ranges for all racks
$ openstack subnet pool create \
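$ # For instance (the prefixes and the pool name are illustrative
$ # assumptions):
$ openstack subnet pool create \
      --share \
      --address-scope provider-addr-scope \
      --pool-prefix 10.1.0.0/24 --pool-prefix 10.2.0.0/24 \
      provider-subnet-pool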
@@ -247,7 +245,7 @@ Then, this is how to create the two subnets. In this example, we are keeping
the addresses ending in .1 for the gateway, .2 for the DHCP server, and
.253 and .254, as these addresses will be used by the switches for the BGP
announcements:
.. code-block:: console
$ # Create the subnet for the physnet-rack-1, using the segment-rack-1, and
$ # the subnet_service_type network:floatingip_agent_gateway
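$ # A possible invocation, as a sketch only (range, names and gateway are
$ # assumptions consistent with the addressing plan described above):
$ openstack subnet create \
      --network provider-network \
      --network-segment segment-rack1 \
      --subnet-pool provider-subnet-pool \
      --subnet-range 10.1.0.0/24 \
      --allocation-pool start=10.1.0.3,end=10.1.0.252 \
      --gateway 10.1.0.1 \
      --service-type network:floatingip_agent_gateway \
      provider-subnet-rack1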
@@ -285,7 +283,7 @@ This is to be repeated each time a new subnet must be created for floating IPs
and router gateways. First, the range is added to the subnet pool, then the
subnet itself is created:
.. code-block:: console
$ # Add a new prefix in the subnet pool for the floating IPs:
$ openstack subnet pool set \
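$ # For example, adding a new range for floating IPs (the prefix below is
$ # an illustrative assumption):
$ openstack subnet pool set \
      --pool-prefix 10.3.0.0/24 \
      provider-subnet-pool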
@@ -312,7 +310,7 @@ The provider network needs to be added to each of the BGP speakers. This means
each time a new rack is set up, the provider network must be added to the 2 BGP
speakers of that rack.
.. code-block:: console
$ # Add the provider network to the BGP speakers.
$ openstack bgp speaker add network \
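$ # For the two speakers of a rack, this could look like the following
$ # sketch (the speaker names are assumptions):
$ openstack bgp speaker add network bgp-speaker-rack1-sw1 provider-network
$ openstack bgp speaker add network bgp-speaker-rack1-sw2 provider-network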
@@ -332,7 +330,7 @@ This can be done by each customer. A subnet pool isn't mandatory, but it is
nice to have. Typically, the customer network will not be advertised through
BGP (but this can be done if needed).
.. code-block:: console
$ # Create the tenant private network
$ openstack network create tenant-network
@@ -409,7 +407,7 @@ that works (at least with Cumulus switches). Here's how.
In /etc/network/switchd.conf we change this:
.. code-block:: ini
# configure a route instead of a neighbor with the same ip/mask
#route.route_preferred_over_neigh = FALSE
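# and set it to the following (the TRUE value is an assumption about the
# intended change, shown here only as a sketch):
route.route_preferred_over_neigh = TRUE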
@@ -417,7 +415,7 @@ In /etc/network/switchd.conf we change this:
and then simply restart switchd:
.. code-block:: console
systemctl restart switchd
@@ -425,7 +423,7 @@ This reboots the switch ASIC of the switch, so it may be a dangerous thing to
do without switch redundancy (so be careful when doing it). The completely safe
procedure, when there are two switches per rack, looks like this:
.. code-block:: console
# save clagd priority
OLDPRIO=$(clagctl status | sed -r -n 's/.*Our.*Role: ([0-9]+) 0.*/\1/p')
@@ -449,7 +447,7 @@ If everything goes well, the floating IPs are advertised over BGP through the
provider network. Here is an example with 4 VMs deployed on 2 racks. Neutron
here picks up IPs on the segmented network as the next hop.
.. code-block:: console
$ # Check the advertised routes:
$ openstack bgp speaker list advertised routes \

View File

@@ -57,7 +57,7 @@ To enable the logging service, follow the below steps.
#. On compute/network nodes, add configuration for logging service to
``[network_log]`` in ``/etc/neutron/plugins/ml2/openvswitch_agent.ini`` and in
``/etc/neutron/l3_agent.ini`` as shown below:
.. code-block:: ini
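# A minimal sketch of what this section can contain; the option values below
# are illustrative defaults, not taken from this change:
[network_log]
rate_limit = 100
burst_limit = 25
local_output_log_base = /var/log/neutron/firewall-logs.log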
@@ -259,7 +259,7 @@ The general characteristics of each event will be shown as the following:
* Security event record format:
Logged data of an ``ACCEPT`` event would look like:
.. code-block:: console
@@ -274,7 +274,7 @@ The general characteristics of each event will be shown as the following:
TCPOptionNoOperation(kind=1,length=1), TCPOptionWindowScale(kind=3,length=3,shift_cnt=3)],
seq=3284890090,src_port=47825,urgent=0,window_size=14600)
Logged data of a ``DROP`` event:
.. code-block:: console
@@ -298,7 +298,7 @@ The general characteristics of each event will be shown as the following:
* Security event record format:
Logged data of an ``ACCEPT`` event would look like:
.. code-block:: console
@@ -310,7 +310,7 @@ The general characteristics of each event will be shown as the following:
\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
\x00\x00\x00\x00\x00\x00\x00',id=29185,seq=0),type=8)
Logged data of a ``DROP`` event:
.. code-block:: console

View File

@@ -166,7 +166,7 @@ network and has access to the private networks of all nodes.
The PCI bus numbers of the PF (03:00.0) and the VFs (03:00.2 .. 03:00.5)
will be used later.
.. code-block:: bash
# lspci | grep Ethernet
03:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
@@ -176,7 +176,6 @@ network and has access to the private networks of all nodes.
03:00.4 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
03:00.5 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
.. code-block:: bash
# ip link show enp3s0f0

View File

@@ -268,7 +268,6 @@ On the network and compute nodes:
[agent]
extensions = fip_qos, gateway_ip_qos
#. As rate limit doesn't work on Open vSwitch's ``internal`` ports,
optionally, as a workaround, to make QoS bandwidth limit work on
router's gateway ports, set ``ovs_use_veth`` to ``True`` in ``DEFAULT``
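
A minimal sketch of that workaround, assuming it goes in the ``DEFAULT``
section of the L3 agent configuration file:

.. code-block:: ini

[DEFAULT]
ovs_use_veth = True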

View File

@@ -16,7 +16,7 @@ For example, in the OVN Northbound database, this is how a VLAN
Provider Network with two segments (VLAN: 100, 200) is related to its
``Logical_Switch`` counterpart:
.. code-block:: bash
$ ovn-nbctl list logical_switch public
_uuid : 983719e5-4f32-4fb0-926d-46291457ca41
@@ -73,7 +73,7 @@ VLAN tag and are related to a single ``Logical_Switch`` entry. When
node it's running on, it will create a patch port to the provider bridge
according to the bridge mappings configuration.
.. code-block:: bash
compute-1: bridge-mappings = segment-1:br-provider1
compute-2: bridge-mappings = segment-2:br-provider2

View File

@@ -54,7 +54,7 @@ the host to the OVN database by creating the corresponding "Chassis" and
when the process is gracefully stopped, it deletes both registers. These
registers are used by Neutron to control the OVN agents.
.. code-block:: console
$ openstack network agent list -c ID -c "Agent Type" -c Host -c Alive -c State
+--------------------------------------+------------------------------+--------+-------+-------+
@@ -76,7 +76,7 @@ the other one will be down because the "Chassis_Private.nb_cfg_timestamp"
is not updated. In this case, the administrator should manually delete the
stale registers from the OVN Southbound database. For example:
* List the "Chassis" registers, filtering by hostname and name (OVS
* List the "Chassis" registers, filtering by hostname and name (OVS
"system-id"):
.. code-block:: console
@@ -87,15 +87,13 @@ the OVN Southbound database the stale registers. For example:
hostname : u20ovn
name : "ce9a1471-79c1-4472-adfc-9e5ce86eba07"
* Delete the stale "Chassis" register:
* Delete the stale "Chassis" register:
.. code-block:: console
$ sudo ovn-sbctl destroy Chassis ce9a1471-79c1-4472-adfc-9e5ce86eba07
* List the "Chassis_Private" registers, filtering by name:
* List the "Chassis_Private" registers, filtering by name:
.. code-block:: console
@@ -103,14 +101,12 @@ the OVN Southbound database the stale registers. For example:
name : "a55c8d85-2071-4452-92cb-95d15c29bde7"
name : "ce9a1471-79c1-4472-adfc-9e5ce86eba07"
* Delete the stale "Chassis_Private" register:
* Delete the stale "Chassis_Private" register:
.. code-block:: console
$ sudo ovn-sbctl destroy Chassis_Private ce9a1471-79c1-4472-adfc-9e5ce86eba07
If the host name is also updated during the system upgrade, the Neutron
agent list could present entries from different host names, but the older
ones will be down too. The procedure is the same.

View File

@@ -42,7 +42,7 @@ also called as legacy) have the following format; bear in mind that if labels
are shared, then the counters are for all routers of all projects where the
labels were applied.
.. code-block:: json
{
"pkts": "<the number of packets that matched the rules of the labels>",
@@ -129,7 +129,7 @@ legacy mode such as ``bytes``, ``pkts``, ``time``, ``first_update``,
``last_update``, and ``host``. Below is an example of a JSON message
with all of the possible attributes.
.. code-block:: json
{
"resource_id": "router-f0f745d9a59c47fdbbdd187d718f9e41-label-00c714f1-49c8-462c-8f5d-f05f21e035c7",

View File

@@ -10,7 +10,7 @@ manage affected security group rules. Thus, there is no need for an agent.
It is good to keep in mind that OpenStack Security Groups (SG) and their rules
(SGR) map 1:1 into OVN's Port Groups (PG) and Access Control Lists (ACL):
.. code-block:: none
OpenStack Security Group <=> OVN Port Group
OpenStack Security Group Rule <=> OVN ACL
@@ -50,7 +50,7 @@ https://github.com/ovn-org/ovn/commit/880dca99eaf73db7e783999c29386d03c82093bf
Below is an example of a meter configuration in OVN. You can locate the fair,
unit, burst_size, and rate attributes:
.. code-block:: bash
$ ovn-nbctl list meter
_uuid : 70c76ba9-f303-471b-9d49-25dee299827f
@@ -78,7 +78,7 @@ Moreover, there are a few attributes in each ACL that makes it able to
provide the networking logging feature. Let's use the example below
to point out the relevant fields:
.. code-block:: none
$ openstack network log create --resource-type security_group \
--resource ${SG} --event ACCEPT logme -f value -c ID
@@ -128,7 +128,7 @@ These are the attributes pertinent to network logging:
If we poke the SGR with packets that match its criteria, the ovn-controller local to where the ACL
is enforced will log something that looks like this:
.. code-block:: none
2021-02-16T11:59:00.640Z|00045|acl_log(ovn_pinctrl0)|INFO|
name="neutron-2e456c7f-154e-40a8-bb10-f88ba51b90b5",

View File

@@ -14,7 +14,7 @@ load_balancer table for all mappings for a given FIP+protocol. All PFs
for the same FIP+protocol are kept as Virtual IP (VIP) mappings inside a
LB entry. See the diagram below for an example of how that looks:
.. code-block:: none
VIP:PORT = MEMBER1:MPORT1, MEMBER2:MPORT2
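
For instance, two port forwardings created on the same floating IP and
protocol (the commands and values below are only an illustrative sketch)
would end up as two VIP mappings inside a single LB entry:

.. code-block:: console

$ openstack floating ip port forwarding create \
      --port ${INTERNAL_PORT_ID_1} \
      --internal-ip-address 10.0.0.10 --internal-protocol-port 22 \
      --external-protocol-port 2201 --protocol tcp ${FIP_ID}
$ openstack floating ip port forwarding create \
      --port ${INTERNAL_PORT_ID_2} \
      --internal-ip-address 10.0.0.11 --internal-protocol-port 22 \
      --external-protocol-port 2202 --protocol tcp ${FIP_ID}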
@@ -73,7 +73,7 @@ UDP and TCP port forwarding entries while a given LB entry can either be one
or the other protocol (not both). Based on that, the format used to specify an
LB entry is:
.. code-block:: ini
pf-floatingip-<NEUTRON_FIP_ID>-<PROTOCOL>
@@ -85,7 +85,7 @@ In order to differentiate a load balancer entry that was created by port
forwarding vs load balancer entries maintained by ovn-octavia-provider, the
external_ids field also has an owner value:
.. code-block:: python
external_ids = {
ovn_const.OVN_DEVICE_OWNER_EXT_ID_KEY: PORT_FORWARDING_PLUGIN,
@@ -97,7 +97,7 @@ external_ids field also has an owner value:
The following Neutron registry (API) events trigger the OVN backend to map port
forwarding into LB:
.. code-block:: python
@registry.receives(PORT_FORWARDING_PLUGIN, [events.AFTER_INIT])
def register(self, resource, event, trigger, payload=None):