docs: Deindent code blocks

We had a number of code blocks that were being incorrectly rendered
inside block quotes due to excess indentation, which broke their
formatting. Correct them. The affected files were identified using the
following script:

  sphinx-build -W -b xml doc/source doc/build/xml
  files=$(find doc/build/xml -name '*.xml' -print)
  for file in $files; do
      if xmllint --xpath "//block_quote/literal_block" "$file" &>/dev/null; then
          echo "$file"
      fi
  done
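The script above only locates the affected files; the deindent itself amounts to stripping the extra leading whitespace in front of each directive. A minimal sketch of that kind of fix (the three-space over-indent and the file name are assumptions for the example):

```shell
# Sketch only: deindent an rst code-block that Sphinx would otherwise
# render inside a block quote. The sample file and its three-space
# over-indent are assumptions for illustration.
cat > /tmp/sample.rst <<'EOF'
Some intro text:

   .. code-block:: ini

      [DEFAULT]
      service_plugins = router
EOF

# Strip three leading spaces from every line so the directive starts at
# column 0 and its body keeps its relative indent.
sed -i 's/^   //' /tmp/sample.rst
grep -n 'code-block' /tmp/sample.rst
```

In practice the amount of indentation varies per file, so each occurrence still needs a manual check after such a pass.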

Note that this also highlighted a file using DOS line endings. This is
corrected.

Change-Id: If63f31bf13c76a185e2c6eebc9b85f9a1f3bbde8
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
This commit is contained in:
Stephen Finucane
2023-05-09 15:57:21 +01:00
parent 272a315109
commit d409296bde
10 changed files with 728 additions and 736 deletions


@@ -65,10 +65,10 @@ python3-neutron-dynamic-routing packages). On top of that, "segments" and
"bgp" must be added to the list of plugins in service_plugins. For example
in neutron.conf:

.. code-block:: ini

   [DEFAULT]
   service_plugins=router,metering,qos,trunk,segments,bgp

The BGP agent
@@ -89,36 +89,36 @@ associated to a dynamic-routing-agent (in our example, the dynamic-routing
agents run on compute 1 and 4). Finally, the peer is added to the BGP speaker,
so the speaker initiates a BGP session to the network equipment.

.. code-block:: console

   $ # Create a BGP peer to represent the switch 1,
   $ # which runs FRR on 10.1.0.253 with AS 64601
   $ openstack bgp peer create \
       --peer-ip 10.1.0.253 \
       --remote-as 64601 \
       rack1-switch-1

   $ # Create a BGP speaker on compute-1
   $ BGP_SPEAKER_ID_COMPUTE_1=$(openstack bgp speaker create \
       --local-as 64999 --ip-version 4 mycloud-compute-1.example.com \
       --format value -c id)

   $ # Get the agent ID of the dragent running on compute 1
   $ BGP_AGENT_ID_COMPUTE_1=$(openstack network agent list \
       --host mycloud-compute-1.example.com --agent-type bgp \
       --format value -c ID)

   $ # Add the BGP speaker to the dragent of compute 1
   $ openstack bgp dragent add speaker \
       ${BGP_AGENT_ID_COMPUTE_1} ${BGP_SPEAKER_ID_COMPUTE_1}

   $ # Add the BGP peer to the speaker of compute 1
   $ openstack bgp speaker add peer \
       compute-1.example.com rack1-switch-1

   $ # Tell the speaker not to advertise tenant networks
   $ openstack bgp speaker set \
       --no-advertise-tenant-networks mycloud-compute-1.example.com

It is possible to repeat this operation for a 2nd machine on the same rack,
@@ -141,25 +141,23 @@ in each host, according to the rack names. On the compute or network nodes,
this is done in /etc/neutron/plugins/ml2/openvswitch_agent.ini using the
bridge_mappings directive:

.. code-block:: ini

   [ovs]
   bridge_mappings = physnet-rack1:br-ex

All of the physical networks created this way must be added in the
configuration of the neutron-server as well (ie: this is used by both
neutron-api and neutron-rpc-server). For example, with 3 racks,
here's how /etc/neutron/plugins/ml2/ml2_conf.ini should look:

.. code-block:: ini

   [ml2_type_flat]
   flat_networks = physnet-rack1,physnet-rack2,physnet-rack3

   [ml2_type_vlan]
   network_vlan_ranges = physnet-rack1,physnet-rack2,physnet-rack3

Once this is done, the provider network can be created, using physnet-rack1
as "physical network".
@@ -171,40 +169,40 @@ Setting-up the provider network
Everything that is in the provider network's scope will be advertised through
BGP. Here is how to create the network scope:

.. code-block:: console

   $ # Create the address scope
   $ openstack address scope create --share --ip-version 4 provider-addr-scope

Then, the network can be created using the physical network name set above:

.. code-block:: console

   $ # Create the provider network that spans all racks
   $ openstack network create --external --share \
       --provider-physical-network physnet-rack1 \
       --provider-network-type vlan \
       --provider-segment 11 \
       provider-network

This automatically creates a network AND a segment. By default, this segment
has no name, which isn't convenient. The name can be set like this:

.. code-block:: console

   $ # Get the network ID:
   $ PROVIDER_NETWORK_ID=$(openstack network show provider-network \
       --format value -c id)

   $ # Get the segment ID:
   $ FIRST_SEGMENT_ID=$(openstack network segment list \
       --format csv -c ID -c Network | \
       q -H -d, "SELECT ID FROM - WHERE Network='${PROVIDER_NETWORK_ID}'")

   $ # Set the 1st segment name, matching the rack name
   $ openstack network segment set --name segment-rack1 ${FIRST_SEGMENT_ID}
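The segment lookup above relies on the ``q`` CSV/SQL tool, which may not be installed everywhere. As a rough alternative under the same assumptions (two-column ``--format csv -c ID -c Network`` output; the sample IDs below are fabricated for illustration), the filter can also be done with plain ``awk``:

```shell
# Assumed stand-in for `openstack network segment list --format csv -c ID -c Network`
# output; the segment IDs here are made up for the example.
csv='"ID","Network"
"11111111-aaaa","859f5302-7b22-4c50-92f8-1f71d6f3f3f4"
"22222222-bbbb","00000000-0000-0000-0000-000000000000"'
PROVIDER_NETWORK_ID="859f5302-7b22-4c50-92f8-1f71d6f3f3f4"

# Skip the CSV header row, strip the quoting, and print the ID of the row
# whose Network column matches the provider network.
FIRST_SEGMENT_ID=$(printf '%s\n' "$csv" | \
    awk -F'","' -v net="$PROVIDER_NETWORK_ID" '
        NR > 1 { id = $1; gsub(/"/, "", id)
                 n = $2;  gsub(/"/, "", n)
                 if (n == net) print id }')
echo "$FIRST_SEGMENT_ID"
```

This avoids an extra dependency at the cost of being tied to the exact column order requested from the CLI.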

Setting-up the 2nd segment
@@ -213,15 +211,15 @@ Setting-up the 2nd segment
The 2nd segment, which will be attached to our provider network, is created
this way:

.. code-block:: console

   $ # Create the 2nd segment, matching the 2nd rack name
   $ openstack network segment create \
       --physical-network physnet-rack2 \
       --network-type vlan \
       --segment 13 \
       --network provider-network \
       segment-rack2

Setting-up the provider subnets for the BGP next HOP routing
@@ -232,45 +230,45 @@ network is in use in the machines. In order to use the address scope, subnet
pools must be used. Here is how to create the subnet pool with the two ranges
to use later when creating the subnets:

.. code-block:: console

   $ # Create the provider subnet pool which includes all ranges for all racks
   $ openstack subnet pool create \
       --pool-prefix 10.1.0.0/24 \
       --pool-prefix 10.2.0.0/24 \
       --address-scope provider-addr-scope \
       --share \
       provider-subnet-pool

Then, this is how to create the two subnets. In this example, we are keeping
the addresses in .1 for the gateway, .2 for the DHCP server, and .253 and .254,
as these addresses will be used by the switches for the BGP announcements:

.. code-block:: console

   $ # Create the subnet for the physnet-rack-1, using the segment-rack-1, and
   $ # the subnet_service_type network:floatingip_agent_gateway
   $ openstack subnet create \
       --service-type 'network:floatingip_agent_gateway' \
       --subnet-pool provider-subnet-pool \
       --subnet-range 10.1.0.0/24 \
       --allocation-pool start=10.1.0.3,end=10.1.0.252 \
       --gateway 10.1.0.1 \
       --network provider-network \
       --network-segment segment-rack1 \
       provider-subnet-rack1

   $ # The same, for the 2nd rack
   $ openstack subnet create \
       --service-type 'network:floatingip_agent_gateway' \
       --subnet-pool provider-subnet-pool \
       --subnet-range 10.2.0.0/24 \
       --allocation-pool start=10.2.0.3,end=10.2.0.252 \
       --gateway 10.2.0.1 \
       --network provider-network \
       --network-segment segment-rack2 \
       provider-subnet-rack2

Note the service types. network:floatingip_agent_gateway makes sure that these
@@ -285,21 +283,21 @@ This is to be repeated each time a new subnet must be created for floating IPs
and router gateways. First, the range is added in the subnet pool, then the
subnet itself is created:

.. code-block:: console

   $ # Add a new prefix in the subnet pool for the floating IPs:
   $ openstack subnet pool set \
       --pool-prefix 203.0.113.0/24 \
       provider-subnet-pool

   $ # Create the floating IP subnet
   $ openstack subnet create vm-fip \
       --service-type 'network:routed' \
       --service-type 'network:floatingip' \
       --service-type 'network:router_gateway' \
       --subnet-pool provider-subnet-pool \
       --subnet-range 203.0.113.0/24 \
       --network provider-network

The service-type network:routed ensures we're using BGP through the provider
network to advertise the IPs. network:floatingip and network:router_gateway
@@ -312,13 +310,13 @@ The provider network needs to be added to each of the BGP speakers. This means
each time a new rack is set up, the provider network must be added to the 2 BGP
speakers of that rack.

.. code-block:: console

   $ # Add the provider network to the BGP speakers.
   $ openstack bgp speaker add network \
       mycloud-compute-1.example.com provider-network
   $ openstack bgp speaker add network \
       mycloud-compute-4.example.com provider-network

In this example, we've selected two compute nodes that are also running an
@@ -332,68 +330,68 @@ This can be done by each customer. A subnet pool isn't mandatory, but it is
nice to have. Typically, the customer network will not be advertised through
BGP (but this can be done if needed).

.. code-block:: console

   $ # Create the tenant private network
   $ openstack network create tenant-network

   $ # Self-service network pool:
   $ openstack subnet pool create \
       --pool-prefix 192.168.130.0/23 \
       --share \
       tenant-subnet-pool

   $ # Self-service subnet:
   $ openstack subnet create \
       --network tenant-network \
       --subnet-pool tenant-subnet-pool \
       --prefix-length 24 \
       tenant-subnet-1

   $ # Create the router
   $ openstack router create tenant-router

   $ # Add the tenant subnet to the tenant router
   $ openstack router add subnet \
       tenant-router tenant-subnet-1

   $ # Set the router's default gateway. This will use one public IP.
   $ openstack router set \
       --external-gateway provider-network tenant-router

   $ # Create a first VM on the tenant subnet
   $ openstack server create --image debian-10.5.0-openstack-amd64.qcow2 \
       --flavor cpu2-ram6-disk20 \
       --nic net-id=tenant-network \
       --key-name yubikey-zigo \
       test-server-1

   $ # Eventually, add a floating IP
   $ openstack floating ip create provider-network
   +---------------------+--------------------------------------+
   | Field               | Value                                |
   +---------------------+--------------------------------------+
   | created_at          | 2020-12-15T11:48:36Z                 |
   | description         |                                      |
   | dns_domain          | None                                 |
   | dns_name            | None                                 |
   | fixed_ip_address    | None                                 |
   | floating_ip_address | 203.0.113.17                         |
   | floating_network_id | 859f5302-7b22-4c50-92f8-1f71d6f3f3f4 |
   | id                  | 01de252b-4b78-4198-bc28-1328393bf084 |
   | name                | 203.0.113.17                         |
   | port_details        | None                                 |
   | port_id             | None                                 |
   | project_id          | d71a5d98aef04386b57736a4ea4f3644     |
   | qos_policy_id       | None                                 |
   | revision_number     | 0                                    |
   | router_id           | None                                 |
   | status              | DOWN                                 |
   | subnet_id           | None                                 |
   | tags                | []                                   |
   | updated_at          | 2020-12-15T11:48:36Z                 |
   +---------------------+--------------------------------------+
   $ openstack server add floating ip test-server-1 203.0.113.17

Cumulus switch configuration
----------------------------
@@ -409,38 +407,38 @@ that works (at least with Cumulus switches). Here's how.
In /etc/network/switchd.conf we change this:

.. code-block:: ini

   # configure a route instead of a neighbor with the same ip/mask
   #route.route_preferred_over_neigh = FALSE
   route.route_preferred_over_neigh = TRUE

and then simply restart switchd:

.. code-block:: console

   systemctl restart switchd

This restarts the switch ASIC, so it may be a dangerous thing to do with no
switch redundancy (so be careful when doing it). The completely safe
procedure, if having 2 switches per rack, looks like this:

.. code-block:: console

   # save clagd priority
   OLDPRIO=$(clagctl status | sed -r -n 's/.*Our.*Role: ([0-9]+) 0.*/\1/p')
   # make sure that this switch is not the primary clag switch. otherwise the
   # secondary switch will also shutdown all interfaces when losing contact
   # with the primary switch.
   clagctl priority 16535
   # tell neighbors to not route through this router
   vtysh
   vtysh# router bgp 64999
   vtysh# bgp graceful-shutdown
   vtysh# exit
   systemctl restart switchd
   clagctl priority $OLDPRIO

Verification
------------
@@ -449,16 +447,16 @@ If everything goes well, the floating IPs are advertized over BGP through the
provider network. Here is an example with 4 VMs deployed on 2 racks. Neutron
is here picking up IPs on the segmented network as Nexthop.

.. code-block:: console

   $ # Check the advertised routes:
   $ openstack bgp speaker list advertised routes \
       mycloud-compute-4.example.com
   +-----------------+-----------+
   | Destination     | Nexthop   |
   +-----------------+-----------+
   | 203.0.113.17/32 | 10.1.0.48 |
   | 203.0.113.20/32 | 10.1.0.65 |
   | 203.0.113.40/32 | 10.2.0.23 |
   | 203.0.113.55/32 | 10.2.0.35 |
   +-----------------+-----------+


@@ -1,328 +1,328 @@
.. _config-logging:

================================
Neutron Packet Logging Framework
================================

Packet logging service is designed as a Neutron plug-in that captures network
packets for relevant resources (e.g. security group or firewall group) when the
registered events occur.

.. image:: figures/logging-framework.png
   :width: 100%
   :alt: Packet Logging Framework

Supported loggable resource types
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From the Rocky release, both ``security_group`` and ``firewall_group`` are
supported as resource types in the Neutron packet logging framework.
Service Configuration
~~~~~~~~~~~~~~~~~~~~~

To enable the logging service, follow the steps below.

#. On the Neutron controller node, add ``log`` to the ``service_plugins``
   setting in ``/etc/neutron/neutron.conf``. For example:

   .. code-block:: none

      service_plugins = router,metering,log

#. To enable the logging service for ``security_group`` in Layer 2, add
   ``log`` to the ``extensions`` option in the ``[agent]`` section of
   ``/etc/neutron/plugins/ml2/ml2_conf.ini`` for the controller node and of
   ``/etc/neutron/plugins/ml2/openvswitch_agent.ini`` for compute/network
   nodes. For example:

   .. code-block:: ini

      [agent]
      extensions = log

   .. note::

      FWaaS v2 logging is currently only supported by openvswitch; the
      firewall logging driver for linuxbridge is not implemented.

#. To enable the logging service for ``firewall_group`` in Layer 3, add
   ``fwaas_v2_log`` to the ``extensions`` option in the ``[AGENT]`` section of
   ``/etc/neutron/l3_agent.ini`` for network nodes. For example:

   .. code-block:: ini

      [AGENT]
      extensions = fwaas_v2,fwaas_v2_log

#. On compute/network nodes, add the logging service configuration to
   ``[network_log]`` in ``/etc/neutron/plugins/ml2/openvswitch_agent.ini`` and
   in ``/etc/neutron/l3_agent.ini`` as shown below:

   .. code-block:: ini

      [network_log]
      rate_limit = 100
      burst_limit = 25
      #local_output_log_base = <None>

   Here, ``rate_limit`` configures the maximum number of packets to be logged
   per second. When a high rate triggers ``rate_limit``, logging queues
   packets to be logged, and ``burst_limit`` configures the maximum number of
   queued packets. Logged packets can be stored anywhere by using
   ``local_output_log_base``.
   .. note::

      - ``rate_limit`` must be at least ``100`` and ``burst_limit`` at least
        ``25``.
      - If ``rate_limit`` is unset, logging is not rate-limited.
      - If ``local_output_log_base`` is not specified, logged packets are
        stored in the system journal, e.g. ``/var/log/syslog``, by default.

Trusted projects policy.yaml configuration
------------------------------------------

With the default ``/etc/neutron/policy.yaml``, administrators must set up
resource logging on behalf of the cloud projects.

If projects are trusted to administer their own loggable resources in their
cloud, neutron's policy file ``policy.yaml`` can be modified to allow this.

Modify ``/etc/neutron/policy.yaml`` entries as follows:

.. code-block:: none

   "get_loggable_resources": "rule:regular_user",
   "create_log": "rule:regular_user",
   "get_log": "rule:regular_user",
   "get_logs": "rule:regular_user",
   "update_log": "rule:regular_user",
   "delete_log": "rule:regular_user",
Service workflow for Operator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

#. To check the loggable resources that are supported by the framework:

   .. code-block:: console

      $ openstack network loggable resources list
      +-----------------+
      | Supported types |
      +-----------------+
      | security_group  |
      | firewall_group  |
      +-----------------+

   .. note::

      - On VM ports, logging for ``security_group`` currently works with the
        ``openvswitch`` firewall driver only. ``linuxbridge`` support is
        under development.
      - Logging for ``firewall_group`` works on internal router ports only.
        VM ports will be supported in the future.
#. Log creation: #. Log creation:
* Create a logging resource with an appropriate resource type * Create a logging resource with an appropriate resource type
.. code-block:: console .. code-block:: console
$ openstack network log create --resource-type security_group \ $ openstack network log create --resource-type security_group \
--description "Collecting all security events" \ --description "Collecting all security events" \
--event ALL Log_Created --event ALL Log_Created
+-----------------+------------------------------------------------+ +-----------------+------------------------------------------------+
| Field | Value | | Field | Value |
+-----------------+------------------------------------------------+ +-----------------+------------------------------------------------+
| Description | Collecting all security events | | Description | Collecting all security events |
| Enabled | True | | Enabled | True |
| Event | ALL | | Event | ALL |
| ID | 8085c3e6-0fa2-4954-b5ce-ff6207931b6d | | ID | 8085c3e6-0fa2-4954-b5ce-ff6207931b6d |
| Name | Log_Created | | Name | Log_Created |
| Project | 02568bd62b414221956f15dbe9527d16 | | Project | 02568bd62b414221956f15dbe9527d16 |
| Resource | None | | Resource | None |
| Target | None | | Target | None |
| Type | security_group | | Type | security_group |
| created_at | 2017-07-05T02:56:43Z | | created_at | 2017-07-05T02:56:43Z |
| revision_number | 0 | | revision_number | 0 |
| tenant_id | 02568bd62b414221956f15dbe9527d16 | | tenant_id | 02568bd62b414221956f15dbe9527d16 |
| updated_at | 2017-07-05T02:56:43Z | | updated_at | 2017-07-05T02:56:43Z |
+-----------------+------------------------------------------------+ +-----------------+------------------------------------------------+
.. warning:: .. warning::
In the case of ``--resource`` and ``--target`` are not specified from the In the case of ``--resource`` and ``--target`` are not specified from the
request, these arguments will be assigned to ``ALL`` by default. Hence, request, these arguments will be assigned to ``ALL`` by default. Hence,
there is an enormous range of log events will be created. there is an enormous range of log events will be created.
* Create logging resource with a given resource (sg1 or fwg1) * Create logging resource with a given resource (sg1 or fwg1)
.. code-block:: console .. code-block:: console
$ openstack network log create my-log --resource-type security_group --resource sg1 $ openstack network log create my-log --resource-type security_group --resource sg1
$ openstack network log create my-log --resource-type firewall_group --resource fwg1 $ openstack network log create my-log --resource-type firewall_group --resource fwg1
* Create logging resource with a given target (portA) * Create logging resource with a given target (portA)
.. code-block:: console .. code-block:: console
$ openstack network log create my-log --resource-type security_group --target portA $ openstack network log create my-log --resource-type security_group --target portA
* Create logging resource for only the given target (portB) and the given
  resource (sg1 or fwg1)

  .. code-block:: console

     $ openstack network log create my-log --resource-type security_group --target portB --resource sg1
     $ openstack network log create my-log --resource-type firewall_group --target portB --resource fwg1
.. note::

   - The ``Enabled`` field is set to ``True`` by default. If enabled, logged
     events are written to the destination configured by
     ``local_output_log_base``, or to ``/var/log/syslog`` by default.
   - The ``Event`` field is set to ``ALL`` if ``--event`` is not specified in
     the log creation request.
#. Enable/Disable log

   We can ``enable`` or ``disable`` logging objects at runtime. The change
   applies immediately to all ports registered with the logging object.
   For example:

   .. code-block:: console

      $ openstack network log set --disable Log_Created
      $ openstack network log show Log_Created
      +-----------------+------------------------------------------------+
      | Field           | Value                                          |
      +-----------------+------------------------------------------------+
      | Description     | Collecting all security events                 |
      | Enabled         | False                                          |
      | Event           | ALL                                            |
      | ID              | 8085c3e6-0fa2-4954-b5ce-ff6207931b6d           |
      | Name            | Log_Created                                    |
      | Project         | 02568bd62b414221956f15dbe9527d16               |
      | Resource        | None                                           |
      | Target          | None                                           |
      | Type            | security_group                                 |
      | created_at      | 2017-07-05T02:56:43Z                           |
      | revision_number | 1                                              |
      | tenant_id       | 02568bd62b414221956f15dbe9527d16               |
      | updated_at      | 2017-07-05T03:12:01Z                           |
      +-----------------+------------------------------------------------+
Logged events description
~~~~~~~~~~~~~~~~~~~~~~~~~

Currently, the packet logging framework supports collecting ``ACCEPT``
and/or ``DROP`` events related to registered resources. As mentioned above,
the Neutron packet logging framework offers two loggable resources through
the ``log`` service plug-in: ``security_group`` and ``firewall_group``.

The general characteristics of each event are as follows:
* Log every ``DROP`` event: A ``DROP`` security event is generated
  whenever an incoming or outgoing session is blocked by the security
  groups or firewall groups.

* Log an ``ACCEPT`` event: An ``ACCEPT`` security event is generated only
  for each ``NEW`` incoming or outgoing session that is allowed by security
  groups or firewall groups. More details for the ``ACCEPT`` events are
  shown below:
  * North/South ``ACCEPT``: For a North/South session there would be a single
    ``ACCEPT`` event irrespective of direction.

  * East/West ``ACCEPT``/``ACCEPT``: In an intra-project East/West session
    where the originating port allows the session and the destination port
    allows the session, i.e. the traffic is allowed, there would be two
    ``ACCEPT`` security events generated, one from the perspective of the
    originating port and one from the perspective of the destination port.

  * East/West ``ACCEPT``/``DROP``: In an intra-project East/West session
    initiation where the originating port allows the session and the
    destination port does not allow the session, there would be ``ACCEPT``
    security events generated from the perspective of the originating port
    and ``DROP`` security events generated from the perspective of the
    destination port.
#. The security events that are collected by security group should include:

   * A timestamp of the flow.
   * A status of the flow ``ACCEPT``/``DROP``.
   * An indication of the originator of the flow, e.g. which project or log
     resource generated the events.
   * An identifier of the associated instance interface (neutron port id).
   * Layer 2, 3 and 4 information (mac, address, port, protocol, etc).
   * Security event record format:
     Logged data of an ``ACCEPT`` event would look like:

     .. code-block:: console

        May 5 09:05:07 action=ACCEPT project_id=736672c700cd43e1bd321aeaf940365c
        log_resource_ids=['4522efdf-8d44-4e19-b237-64cafc49469b', '42332d89-df42-4588-a2bb-3ce50829ac51']
        vm_port=e0259ade-86de-482e-a717-f58258f7173f
        ethernet(dst='fa:16:3e:ec:36:32',ethertype=2048,src='fa:16:3e:50:aa:b5'),
        ipv4(csum=62071,dst='10.0.0.4',flags=2,header_length=5,identification=36638,offset=0,
        option=None,proto=6,src='172.24.4.10',tos=0,total_length=60,ttl=63,version=4),
        tcp(ack=0,bits=2,csum=15097,dst_port=80,offset=10,option=[TCPOptionMaximumSegmentSize(kind=2,length=4,max_seg_size=1460),
        TCPOptionSACKPermitted(kind=4,length=2), TCPOptionTimestamps(kind=8,length=10,ts_ecr=0,ts_val=196418896),
        TCPOptionNoOperation(kind=1,length=1), TCPOptionWindowScale(kind=3,length=3,shift_cnt=3)],
        seq=3284890090,src_port=47825,urgent=0,window_size=14600)
     Logged data of a ``DROP`` event:

     .. code-block:: console

        May 5 09:05:07 action=DROP project_id=736672c700cd43e1bd321aeaf940365c
        log_resource_ids=['4522efdf-8d44-4e19-b237-64cafc49469b'] vm_port=e0259ade-86de-482e-a717-f58258f7173f
        ethernet(dst='fa:16:3e:ec:36:32',ethertype=2048,src='fa:16:3e:50:aa:b5'),
        ipv4(csum=62071,dst='10.0.0.4',flags=2,header_length=5,identification=36638,offset=0,
        option=None,proto=6,src='172.24.4.10',tos=0,total_length=60,ttl=63,version=4),
        tcp(ack=0,bits=2,csum=15097,dst_port=80,offset=10,option=[TCPOptionMaximumSegmentSize(kind=2,length=4,max_seg_size=1460),
        TCPOptionSACKPermitted(kind=4,length=2), TCPOptionTimestamps(kind=8,length=10,ts_ecr=0,ts_val=196418896),
        TCPOptionNoOperation(kind=1,length=1), TCPOptionWindowScale(kind=3,length=3,shift_cnt=3)],
        seq=3284890090,src_port=47825,urgent=0,window_size=14600)
#. The events that are collected by firewall group should include:

   * A timestamp of the flow.
   * A status of the flow ``ACCEPT``/``DROP``.
   * The identifier of log objects that are collecting this event.
   * An identifier of the associated instance interface (neutron port id).
   * Layer 2, 3 and 4 information (mac, address, port, protocol, etc).
   * Security event record format:
     Logged data of an ``ACCEPT`` event would look like:

     .. code-block:: console

        Jul 26 14:46:20:
        action=ACCEPT, log_resource_ids=[u'2e030f3a-e93d-4a76-bc60-1d11c0f6561b'], port=9882c485-b808-4a34-a3fb-b537642c66b2
        pkt=ethernet(dst='fa:16:3e:8f:47:c5',ethertype=2048,src='fa:16:3e:1b:3e:67')
        ipv4(csum=47423,dst='10.10.1.16',flags=2,header_length=5,identification=27969,offset=0,option=None,proto=1,src='10.10.0.5',tos=0,total_length=84,ttl=63,version=4)
        icmp(code=0,csum=41376,data=echo(data='\xe5\xf2\xfej\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
        \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
        \x00\x00\x00\x00\x00\x00\x00',id=29185,seq=0),type=8)
     Logged data of a ``DROP`` event:

     .. code-block:: console

        Jul 26 14:51:20:
        action=DROP, log_resource_ids=[u'2e030f3a-e93d-4a76-bc60-1d11c0f6561b'], port=9882c485-b808-4a34-a3fb-b537642c66b2
        pkt=ethernet(dst='fa:16:3e:32:7d:ff',ethertype=2048,src='fa:16:3e:28:83:51')
        ipv4(csum=17518,dst='10.10.0.5',flags=2,header_length=5,identification=57874,offset=0,option=None,proto=1,src='10.10.1.16',tos=0,total_length=84,ttl=63,version=4)
        icmp(code=0,csum=23772,data=echo(data='\x8a\xa0\xac|\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
        \x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00
        \x00\x00\x00\x00\x00\x00\x00',id=25601,seq=5),type=8)
.. note::

   No other extraneous events are generated within the security event logs,
   e.g. no debugging data, etc.
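The record formats above are line-oriented ``key=value`` text, so they are
straightforward to post-process. Below is a minimal sketch (our own helper,
not part of Neutron) that extracts the main fields from a security-group
event record like the samples shown above:

```python
import re

# Sample built from the ACCEPT example above (packet details omitted).
RECORD = (
    "May 5 09:05:07 action=ACCEPT project_id=736672c700cd43e1bd321aeaf940365c "
    "log_resource_ids=['4522efdf-8d44-4e19-b237-64cafc49469b'] "
    "vm_port=e0259ade-86de-482e-a717-f58258f7173f"
)

def parse_event(record: str) -> dict:
    """Extract action, project_id, vm_port and log_resource_ids."""
    fields = dict(re.findall(r"\b(action|project_id|vm_port)=(\S+)", record))
    ids = re.search(r"log_resource_ids=\[(.*?)\]", record)
    fields["log_resource_ids"] = (
        re.findall(r"'([^']+)'", ids.group(1)) if ids else []
    )
    return fields

print(parse_event(RECORD)["action"])  # ACCEPT
```

The same approach works for the firewall-group records, whose fields differ
only slightly (``port`` instead of ``vm_port``).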
@@ -166,7 +166,7 @@ network and has access to the private networks of all nodes.
The PCI bus number of the PF (03:00.0) and VFs (03:00.2 .. 03:00.5)
will be used later.

.. code-block:: bash

   # lspci | grep Ethernet
   03:00.0 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5]
@@ -176,7 +176,6 @@ network and has access to the private networks of all nodes.
   03:00.4 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]
   03:00.5 Ethernet controller: Mellanox Technologies MT27800 Family [ConnectX-5 Virtual Function]

.. code-block:: bash

   # ip link show enp3s0f0
@@ -268,13 +268,12 @@ On the network and compute nodes:
   [agent]
   extensions = fip_qos, gateway_ip_qos

#. Because rate limiting doesn't work on Open vSwitch ``internal`` ports,
   you can optionally, as a workaround, make QoS bandwidth limits work on a
   router's gateway ports by setting ``ovs_use_veth`` to ``True`` in the
   ``[DEFAULT]`` section of ``/etc/neutron/l3_agent.ini``:

   .. code-block:: ini

      [DEFAULT]
      ovs_use_veth = True
@@ -634,7 +634,7 @@ creating multiple networks and/or increasing broadcast domain.
example, the second segment uses the ``provider1`` physical network
with VLAN ID 2020.

.. code-block:: console

   $ openstack network segment create --physical-network provider1 \
     --network-type vlan --segment 2020 --network multisegment1 segment1-2
@@ -16,55 +16,55 @@ For example, in the OVN Northbound database, this is how a VLAN
Provider Network with two segments (VLAN: 100, 200) is related to their
``Logical_Switch`` counterpart:

.. code-block:: bash

   $ ovn-nbctl list logical_switch public
   _uuid               : 983719e5-4f32-4fb0-926d-46291457ca41
   acls                : []
   dns_records         : []
   external_ids        : {"neutron:mtu"="1450", "neutron:network_name"=public, "neutron:revision_number"="3"}
   forwarding_groups   : []
   load_balancer       : []
   name                : neutron-6c8be12a-9ed0-4ac4-8130-cb8fad83cd46
   other_config        : {mcast_flood_unregistered="false", mcast_snoop="true"}
   ports               : [81bce1ab-87f8-4ed5-8163-f16701499dfe, b23d0c2e-773b-4ecb-8306-53d117006a7b]
   qos_rules           : []

   $ ovn-nbctl list logical_switch_port 81bce1ab-87f8-4ed5-8163-f16701499dfe
   _uuid               : 81bce1ab-87f8-4ed5-8163-f16701499dfe
   addresses           : [unknown]
   dhcpv4_options      : []
   dhcpv6_options      : []
   dynamic_addresses   : []
   enabled             : []
   external_ids        : {}
   ha_chassis_group    : []
   name                : provnet-96f663af-19fa-4c7e-a1b8-1dfdc9cd9e82
   options             : {network_name=phys-net-1}
   parent_name         : []
   port_security       : []
   tag                 : 100
   tag_request         : []
   type                : localnet
   up                  : false

   $ ovn-nbctl list logical_switch_port b23d0c2e-773b-4ecb-8306-53d117006a7b
   _uuid               : b23d0c2e-773b-4ecb-8306-53d117006a7b
   addresses           : [unknown]
   dhcpv4_options      : []
   dhcpv6_options      : []
   dynamic_addresses   : []
   enabled             : []
   external_ids        : {}
   ha_chassis_group    : []
   name                : provnet-469cbc3d-8e06-4a8f-be3a-3fcdadfd398a
   options             : {network_name=phys-net-2}
   parent_name         : []
   port_security       : []
   tag                 : 200
   tag_request         : []
   type                : localnet
   up                  : false

As you can see, the two ``localnet`` ports are configured with a
@@ -73,10 +73,10 @@ VLAN tag and are related to a single ``Logical_Switch`` entry. When
node it's running on it will create a patch port to the provider bridge
according to the bridge mappings configuration.

.. code-block:: bash

   compute-1: bridge-mappings = segment-1:br-provider1
   compute-2: bridge-mappings = segment-2:br-provider2

For example, when a port in the multisegment network gets bound to
compute-1, ovn-controller will create a patch-port between br-int and
@@ -54,7 +54,7 @@ the host to the OVN database by creating the corresponding "Chassis" and
when the process is gracefully stopped, it deletes both registers. These
registers are used by Neutron to control the OVN agents.

.. code-block:: console

   $ openstack network agent list -c ID -c "Agent Type" -c Host -c Alive -c State
   +--------------------------------------+------------------------------+--------+-------+-------+
@@ -76,40 +76,36 @@ the other one will be down because the "Chassis_Private.nb_cfg_timestamp"
is not updated. In this case, the administrator should manually delete the
stale registers from the OVN Southbound database. For example:

* List the "Chassis" registers, filtering by hostname and name (OVS
  "system-id"):

  .. code-block:: console

     $ sudo ovn-sbctl list Chassis | grep name
     hostname            : u20ovn
     name                : "a55c8d85-2071-4452-92cb-95d15c29bde7"
     hostname            : u20ovn
     name                : "ce9a1471-79c1-4472-adfc-9e5ce86eba07"

* Delete the stale "Chassis" register:

  .. code-block:: console

     $ sudo ovn-sbctl destroy Chassis ce9a1471-79c1-4472-adfc-9e5ce86eba07

* List the "Chassis_Private" registers, filtering by name:

  .. code-block:: console

     $ sudo ovn-sbctl list Chassis_Private | grep name
     name                : "a55c8d85-2071-4452-92cb-95d15c29bde7"
     name                : "ce9a1471-79c1-4472-adfc-9e5ce86eba07"

* Delete the stale "Chassis_Private" register:

  .. code-block:: console

     $ sudo ovn-sbctl destroy Chassis_Private ce9a1471-79c1-4472-adfc-9e5ce86eba07

If the host name is also updated during the system upgrade, the Neutron
agent list could present entries from different host names, but the older
@@ -42,18 +42,18 @@ also called as legacy) have the following format; bear in mind that if labels
are shared, then the counters are for all routers of all projects where the
labels were applied.

.. code-block:: json

   {
       "pkts": "<the number of packets that matched the rules of the labels>",
       "bytes": "<the number of bytes that matched the rules of the labels>",
       "time": "<seconds between the first data collection and the last one>",
       "first_update": "timeutils.utcnow_ts() of the first collection",
       "last_update": "timeutils.utcnow_ts() of the last collection",
       "host": "<neutron metering agent host name>",
       "label_id": "<the label id>",
       "tenant_id": "<the tenant id>"
   }

The ``first_update`` and ``last_update`` timestamps represent the moment
when the first and last data collection happened within the report interval.
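Since ``time`` is the number of seconds between those two collections,
average rates over the report interval can be derived directly from the
counters. A small illustrative sketch follows; the sample numbers are
invented, and the helper is ours, not part of the metering agent:

```python
# Invented sample in the legacy message format described above.
sample = {
    "pkts": 1200,
    "bytes": 1536000,
    "time": 60,
    "first_update": 1591058790,
    "last_update": 1591058850,
}

def average_rates(msg: dict) -> tuple:
    """Return (bytes/s, packets/s) for the report interval."""
    interval = msg["time"] or 1  # guard against a zero-length interval
    return msg["bytes"] / interval, msg["pkts"] / interval

bps, pps = average_rates(sample)
print(bps, pps)  # 25600.0 20.0
```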
@@ -129,21 +129,21 @@ legacy mode such as ``bytes``, ``pkts``, ``time``, ``first_update``,
``last_update``, and ``host``. The following is an example of a JSON message
with all of the possible attributes.

.. code-block:: json

   {
       "resource_id": "router-f0f745d9a59c47fdbbdd187d718f9e41-label-00c714f1-49c8-462c-8f5d-f05f21e035c7",
       "project_id": "f0f745d9a59c47fdbbdd187d718f9e41",
       "first_update": 1591058790,
       "bytes": 0,
       "label_id": "00c714f1-49c8-462c-8f5d-f05f21e035c7",
       "label_name": "test1",
       "last_update": 1591059037,
       "host": "<hostname>",
       "time": 247,
       "pkts": 0,
       "label_shared": true
   }
The ``resource_id`` is a unique identifier for the "resource" being
monitored. Here we consider a resource to be any of the granularities that
@@ -156,4 +156,4 @@ As follows we present all of the possible configuration one can use in the
metering agent init file.

.. show-options::
   :config-file: etc/oslo-config-generator/metering_agent.ini
@@ -10,10 +10,10 @@ manage affected security group rules. Thus, there is no need for an agent.
It is good to keep in mind that Openstack Security Groups (SG) and their rules
(SGR) map 1:1 into OVN's Port Groups (PG) and Access Control Lists (ACL):

.. code-block:: none

   Openstack Security Group      <=>  OVN Port Group
   Openstack Security Group Rule <=>  OVN ACL

Just like SGs have a list of SGRs, PGs have a list of ACLs. PGs also have
a list of logical ports, but that is not really relevant in this context.
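The 1:1 mapping is visible in OVN object names: the Port Group name is
derived from the security group UUID with a ``pg_`` prefix and dashes
replaced by underscores, as can be seen in the ACL ``match`` field shown
later in this document. A quick sketch of that naming convention (the
helper function is ours, for illustration only):

```python
def sg_to_port_group_name(sg_id: str) -> str:
    """Derive the OVN Port Group name mirroring a Neutron security group."""
    return "pg_" + sg_id.replace("-", "_")

# The SG id from the ACL example elsewhere in this document:
print(sg_to_port_group_name("c604e984-0789-4c9a-a297-3e7f62fa73fd"))
# pg_c604e984_0789_4c9a_a297_3e7f62fa73fd
```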
@@ -50,22 +50,22 @@ https://github.com/ovn-org/ovn/commit/880dca99eaf73db7e783999c29386d03c82093bf
Below is an example of a meter configuration in OVN. You can locate the fair,
unit, burst_size, and rate attributes:

.. code-block:: bash

   $ ovn-nbctl list meter
   _uuid               : 70c76ba9-f303-471b-9d49-25dee299827f
   bands               : [f114c205-a170-4425-8ca6-4e71099d1955]
   external_ids        : {"neutron:device_owner"=logging-plugin}
   fair                : true
   name                : acl_log_meter
   unit                : pktps

   $ ovn-nbctl list meter-band
   _uuid               : f114c205-a170-4425-8ca6-4e71099d1955
   action              : drop
   burst_size          : 25
   external_ids        : {}
   rate                : 100
The burst_size and rate attributes are configurable through
``neutron.conf.services.logging.log_driver_opts``. That is not new.
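For reference, a sketch of how those options might be set in the Neutron
configuration, using the values from the meter example above; the section
and option names here are assumptions on our part, so check the generated
configuration reference for your release:

```ini
[network_log]
# Packets-per-second rate for logged traffic (assumed to map to the
# meter "rate" attribute shown above).
rate_limit = 100
# Burst size on top of the rate limit (assumed to map to "burst_size").
burst_limit = 25
```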
@@ -78,39 +78,39 @@ Moreover, there are a few attributes in each ACL that makes it able to
provide the networking logging feature. Let's use the example below
to point out the relevant fields:

.. code-block:: none

   $ openstack network log create --resource-type security_group \
     --resource ${SG} --event ACCEPT logme -f value -c ID
   2e456c7f-154e-40a8-bb10-f88ba51b90b5

   $ openstack security group show ${SG} -f json -c rules | jq '.rules | .[2]' | grep -v 'null'
   {
     "id": "de4ea1e4-c946-40ed-b5b6-53c59418dc0b",
     "tenant_id": "2600067ea3a446dba332d20a30ed44fa",
     "security_group_id": "c604e984-0789-4c9a-a297-3e7f62fa73fd",
     "ethertype": "IPv4",
     "direction": "egress",
     "standard_attr_id": 48,
     "tags": [],
     "created_at": "2021-02-06T22:17:44Z",
     "updated_at": "2021-02-06T22:17:44Z",
     "revision_number": 0,
     "project_id": "2600067ea3a446dba332d20a30ed44fa"
   }

   $ ovn-nbctl find acl \
     "external_ids:\"neutron:security_group_rule_id\""="de4ea1e4-c946-40ed-b5b6-53c59418dc0b"
   _uuid               : 791679e9-237d-4732-a31e-aa634496e02b
   action              : allow-related
   direction           : from-lport
   external_ids        : {"neutron:security_group_rule_id"="de4ea1e4-c946-40ed-b5b6-53c59418dc0b"}
   log                 : true
   match               : "inport == @pg_c604e984_0789_4c9a_a297_3e7f62fa73fd && ip4"
   meter               : acl_log_meter
   name                : neutron-2e456c7f-154e-40a8-bb10-f88ba51b90b5
   priority            : 1002
   severity            : info
The first command creates a networking-log for a given SG. The second shows an SGR from that SG. The first command creates a networking-log for a given SG. The second shows an SGR from that SG.
The third shell command is where we can see how the ACL with the meter information gets populated. The third shell command is where we can see how the ACL with the meter information gets populated.
@@ -128,14 +128,14 @@ These are the attributes pertinent to network logging:
If we poke the SGR with packets that match its criteria, the ovn-controller local to where the
ACL is enforced will log something that looks like this:

.. code-block:: none

   2021-02-16T11:59:00.640Z|00045|acl_log(ovn_pinctrl0)|INFO|
   name="neutron-2e456c7f-154e-40a8-bb10-f88ba51b90b5",
   verdict=allow, severity=info: icmp,vlan_tci=0x0000,dl_src=fa:16:3e:24:dc:88,
   dl_dst=fa:16:3e:15:6d:e0,
   nw_src=10.0.0.12,nw_dst=10.0.0.11,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,
   icmp_code=0
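The layout of such an entry can be illustrated with a small parser. This is only a sketch for reading the sample above; ``parse_acl_log`` is a hypothetical helper, not part of Neutron or OVN:

```python
# Illustrative sketch: split an ovn-controller acl_log entry into its
# pipe-delimited prefix fields (timestamp, sequence, module, severity)
# and the comma-separated key=value flow fields that follow.
# parse_acl_log is a hypothetical helper, not part of Neutron or OVN.

def parse_acl_log(line):
    timestamp, seq, module, severity, rest = line.split("|", 4)
    fields = {}
    for part in rest.split(","):
        if "=" in part:
            key, _, value = part.partition("=")
            fields[key.strip()] = value.strip().strip('"')
    return timestamp, severity, fields

# The sample entry from above, joined back into a single log line.
sample = ('2021-02-16T11:59:00.640Z|00045|acl_log(ovn_pinctrl0)|INFO|'
          'name="neutron-2e456c7f-154e-40a8-bb10-f88ba51b90b5",'
          'verdict=allow, severity=info: icmp,vlan_tci=0x0000,'
          'dl_src=fa:16:3e:24:dc:88,dl_dst=fa:16:3e:15:6d:e0,'
          'nw_src=10.0.0.12,nw_dst=10.0.0.11')

when, level, fields = parse_acl_log(sample)
```

Splitting on ``|`` isolates the timestamp, sequence number, module, and severity; the remainder carries the ACL name, verdict, and the flow fields of the matched packet.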
It is beyond the scope of this document to talk about what happens after the logs are generated
by ovn-controllers. The harvesting of files across compute nodes is something a project like
@@ -14,58 +14,58 @@ load_balancer table for all mappings for a given FIP+protocol. All PFs
for the same FIP+protocol are kept as Virtual IP (VIP) mappings inside a
LB entry. See the diagram below for an example of how that looks:

.. code-block:: none

   VIP:PORT = MEMBER1:MPORT1, MEMBER2:MPORT2

   The same is extended for port forwarding as:

   FIP:PORT = PRIVATE_IP:PRIV_PORT

         Neutron DB                             OVN Northbound DB

   +---------------------+             +---------------------------------+
   | Floating IP AA      |             | Load Balancer AA UDP            |
   |                     |             |                                 |
   | +-----------------+ |             |                                 |
   | | Port Forwarding | |   +----------->AA:portA => internal IP1:portX |
   | |                 | |   |         |                                 |
   | | External PortA  +-----+   +------->AA:portB => internal IP2:portX |
   | | Fixed IP1 PortX | |       |     |                                 |
   | | Protocol: UDP   | |       |     +---------------------------------+
   | +-----------------+ |       |
   |                     |       |     +---------------------------------+
   | +-----------------+ |       |     | Load Balancer AA TCP            |
   | | Port Forwarding | |       |     |                                 |
   | |                 | |       |     |                                 |
   | | External PortB  +---------+   +--->AA:portC => internal IP3:portX |
   | | Fixed IP2 PortX | |           | |                                 |
   | | Protocol: UDP   | |           | +---------------------------------+
   | +-----------------+ |           |
   |                     |           |
   | +-----------------+ |           |
   | | Port Forwarding | |           |
   | |                 | |           | +---------------------------------+
   | | External PortC  | |           | | Load Balancer BB TCP            |
   | | Fixed IP3 PortX +-------------+ |                                 |
   | | Protocol: TCP   | |             |                                 |
   | +-----------------+ |    +---------->BB:portD => internal IP4:portX |
   |                     |    |        |                                 |
   +---------------------+    |        +---------------------------------+
                              |
                              |            +-------------------+
                              |            | Logical Router X1 |
   +---------------------+    |            |                   |
   | Floating IP BB      |    |            | Load Balancers:   |
   |                     |    |            | AA UDP, AA TCP    |
   | +-----------------+ |    |            +-------------------+
   | | Port Forwarding | |    |
   | |                 | |    |            +-------------------+
   | | External PortD  | |    |            | Logical Router Z1 |
   | | Fixed IP4 PortX +------+            |                   |
   | | Protocol: TCP   | |                 | Load Balancers:   |
   | +-----------------+ |                 | BB TCP            |
   +---------------------+                 +-------------------+
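The mapping the diagram illustrates — one load balancer per (FIP, protocol) pair, with one VIP entry per port forwarding — can be sketched as a dictionary transformation. The function and field names below are illustrative only, not Neutron code:

```python
# Illustrative sketch: group port forwardings into per-(FIP, protocol)
# "load balancer" dicts, mirroring the diagram above. Not Neutron code.

def group_port_forwardings(port_forwardings):
    lbs = {}
    for pf in port_forwardings:
        key = (pf["fip"], pf["protocol"])
        vips = lbs.setdefault(key, {})
        # One VIP mapping per PF: FIP:external_port => internal_ip:internal_port
        vips["%s:%s" % (pf["fip"], pf["external_port"])] = (
            "%s:%s" % (pf["internal_ip"], pf["internal_port"]))
    return lbs

# The four port forwardings shown in the diagram.
pfs = [
    {"fip": "AA", "protocol": "udp", "external_port": "portA",
     "internal_ip": "IP1", "internal_port": "portX"},
    {"fip": "AA", "protocol": "udp", "external_port": "portB",
     "internal_ip": "IP2", "internal_port": "portX"},
    {"fip": "AA", "protocol": "tcp", "external_port": "portC",
     "internal_ip": "IP3", "internal_port": "portX"},
    {"fip": "BB", "protocol": "tcp", "external_port": "portD",
     "internal_ip": "IP4", "internal_port": "portX"},
]

lbs = group_port_forwardings(pfs)
```

As in the diagram, the UDP entry for FIP AA ends up holding two VIP mappings, while each TCP entry holds one.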
The OVN LB entries have names that include the id of the FIP and a protocol
suffix. That protocol portion is needed because a single FIP can have multiple
@@ -73,7 +73,7 @@ UDP and TCP port forwarding entries while a given LB entry can either be one
or the other protocol (not both). Based on that, the format used to specify an
LB entry is:

.. code-block:: ini

   pf-floatingip-<NEUTRON_FIP_ID>-<PROTOCOL>
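As an illustration, building a name in that format might look like the following; ``pf_lb_name`` is a hypothetical helper, not Neutron's actual implementation:

```python
# Illustrative sketch of the naming scheme above:
# pf-floatingip-<NEUTRON_FIP_ID>-<PROTOCOL>
# pf_lb_name is a hypothetical helper, not Neutron code.

def pf_lb_name(fip_id, protocol):
    return "pf-floatingip-%s-%s" % (fip_id, protocol)

# A FIP with both protocols yields two distinct LB entry names.
udp_name = pf_lb_name("<NEUTRON_FIP_ID>", "udp")
tcp_name = pf_lb_name("<NEUTRON_FIP_ID>", "tcp")
```

The protocol suffix is what keeps the UDP and TCP entries of the same FIP from colliding.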
@@ -85,7 +85,7 @@ In order to differentiate a load balancer entry that was created by port
forwarding vs load balancer entries maintained by ovn-octavia-provider, the
external_ids field also has an owner value:

.. code-block:: python

   external_ids = {
       ovn_const.OVN_DEVICE_OWNER_EXT_ID_KEY: PORT_FORWARDING_PLUGIN,
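A consumer scanning the load_balancer table could filter on that owner key. In this sketch the key and value literals are assumptions standing in for ``ovn_const.OVN_DEVICE_OWNER_EXT_ID_KEY`` and ``PORT_FORWARDING_PLUGIN``; check the real constants before relying on them:

```python
# Illustrative filter over OVN NB load_balancer rows. The two literals
# below are assumptions, not verified Neutron constants.
OWNER_KEY = "neutron:device_owner"   # assumed OVN_DEVICE_OWNER_EXT_ID_KEY
PF_OWNER = "port_forwarding_plugin"  # assumed PORT_FORWARDING_PLUGIN

def is_port_forwarding_lb(lb_row):
    # Only rows tagged with the port forwarding owner belong to this feature.
    return lb_row.get("external_ids", {}).get(OWNER_KEY) == PF_OWNER

rows = [
    {"name": "pf-floatingip-1234-udp",
     "external_ids": {OWNER_KEY: PF_OWNER}},
    {"name": "some-octavia-lb", "external_ids": {}},
]
pf_rows = [r for r in rows if is_port_forwarding_lb(r)]
```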
@@ -97,7 +97,7 @@ external_ids field also has an owner value:
The following registry (API) neutron events trigger the OVN backend to map port
forwarding into LB:

.. code-block:: python

   @registry.receives(PORT_FORWARDING_PLUGIN, [events.AFTER_INIT])
   def register(self, resource, event, trigger, payload=None):
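The decorator-based subscription can be mimicked with a toy registry to show the event flow. This is a simplified stand-in for illustration, not neutron-lib's actual callbacks API:

```python
# Toy stand-in for a callbacks registry: subscribers register for
# (resource, event) pairs and are invoked on publish. Simplified; not
# neutron-lib's actual API.

_SUBSCRIBERS = {}

def receives(resource, events):
    """Decorator mimicking registry.receives: subscribe func to events."""
    def wrap(func):
        for event in events:
            _SUBSCRIBERS.setdefault((resource, event), []).append(func)
        return func
    return wrap

def publish(resource, event, payload=None):
    """Invoke every subscriber registered for (resource, event)."""
    for func in _SUBSCRIBERS.get((resource, event), []):
        func(resource, event, payload)

seen = []

@receives("PORT_FORWARDING_PLUGIN", ["AFTER_INIT"])
def register(resource, event, payload=None):
    # A real handler would wire up the PF-to-LB mapping here.
    seen.append((resource, event))

publish("PORT_FORWARDING_PLUGIN", "AFTER_INIT")
```

Publishing the event invokes every registered handler, which is how the OVN backend gets a chance to translate API-level port forwarding changes into LB updates.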