Added support for filtering the QoS rule type list command.
Two new filter flags are added:
- all_supported: if True, the listing call returns all QoS rule
types supported by at least one loaded mechanism driver.
- all_rules: if True, the listing call returns all QoS rule types
supported by the Neutron server.
The two filter flags are mutually exclusive and optional.
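As a hedged illustration of how the filters could map onto the list
request (the endpoint path follows the existing QoS rule type API; the
parameter serialization and helper name are assumptions, not this
change's actual client code):

```python
# Illustrative sketch: building a QoS rule type list URL with the new
# filter flags. Only one flag may be set at a time.
from urllib.parse import urlencode

def rule_type_list_url(base, all_supported=None, all_rules=None):
    if all_supported and all_rules:
        raise ValueError("all_supported and all_rules are mutually exclusive")
    params = {}
    if all_supported is not None:
        params["all_supported"] = all_supported
    if all_rules is not None:
        params["all_rules"] = all_rules
    query = urlencode(params)
    return base + "/v2.0/qos/rule-types" + ("?" + query if query else "")
```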
Depends-On: https://review.opendev.org/c/openstack/neutron-lib/+/827533
Closes-Bug: #1959749
Change-Id: I41eaab177e121316c3daec34b309c266e2f81979
Table 59 will be used for pps limitation. The pipeline change is:
all original flows with ``goto table 60`` are changed to
``goto table 59``, while table 59 has a default rule that goes to
table 60. We can then add pps flows to table 59 for all ports.
The basic limit pipeline is:
Ingress: packets enter br-int at table 0; before being sent to
table 60, table 59 checks the destination MAC and local_vlan ID.
If the destination resides on this host, the meter pps action is
applied and the packet is sent to table 60.
Egress: match the src MAC and in_port; before being sent to table 60,
table 59 applies the meter pps action and sends the packet to
table 60.
Why table 59? In the ovs-agent flow structure, all packets are
sent to table 60 for subsequent actions such as security groups.
Between table 0 and table 60 there are tables for ARP poison/spoofing
prevention rules and MAC spoof filtering. We want those security
checks to take effect first, so they can drop packets before filling
our limit queues (pps limitation is based on the data forwarding
queue). We also do not want packets to go through the long march of
security group flows, to avoid a performance side effect when a large
number of packets is sent, so we limit them before they reach the
security group flows.
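The pipeline above can be sketched with illustrative flows (the match
fields, priorities and meter id here are assumptions for illustration,
not the agent's exact flows):

```shell
# Default: table 59 falls through to table 60.
ovs-ofctl -O OpenFlow13 add-flow br-int \
    "table=59,priority=0,actions=resubmit(,60)"
# Ingress: destination MAC + local VLAN resident on this host.
ovs-ofctl -O OpenFlow13 add-flow br-int \
    "table=59,priority=100,dl_vlan=1,dl_dst=fa:16:3e:11:22:33,actions=meter:1,resubmit(,60)"
# Egress: source MAC + in_port.
ovs-ofctl -O OpenFlow13 add-flow br-int \
    "table=59,priority=100,in_port=10,dl_src=fa:16:3e:11:22:33,actions=meter:1,resubmit(,60)"
```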
Partially-Implements: bp/packet-rate-limit
Related-Bug: #1938966
Related-Bug: #1912460
Change-Id: I943f610c3b6bcf05e2e752ca3b57981f523f88a8
Dashboard URL syntax changed after the latest Gerrit upgrades. This
change fixes the URLs in the documentation to the correct, latest
syntax.
Change-Id: I8883eac81c6db4d6bcd96d08c072d8378b07e6e6
These are leftovers from the OpenDev migration and
without this fix the links result in "Not Found".
Mainly we need to use "src" in place of "tree" to
get these links working with opendev.org.
Also used git tags where line references are used,
as branch references do not persist. And for line
references, use #L in place of #n as that's where
they get redirected.
Also update references for Zuul and the use of
devstack-vm-gate-wrap.sh in Neutron functional jobs.
Change-Id: I92d11c99a17dab80d4b91da49f341f9ba202bcfe
Without this, port binding fails with the error below:
Network <nw> is type of vxlan but agent <host> or mechanism
driver only support ['gre', 'local', 'flat', 'vlan'].
Also fix the permissions of /opt/stack/devstack in the ML2 OVS testing
documentation and add these files to irrelevant-files to skip running
functional jobs, as these files are not used in those jobs.
Related-Bug: #1934466
Change-Id: I3ca2ea19bf5e316e580669caab4c607447034a11
After discussing this topic again during the PTG I spent some time
checking our scenario jobs which run in the check and gate queues.
After that analysis, this patch proposes to:
* remove the neutron-ovs-tempest-slow job from both the check and gate
queues, as slow tests are already run in the
neutron-ovs-tempest-multinode-full job,
* remove the neutron-ovn-tempest-slow job from both the check and gate
queues, as slow tests are already run in the
neutron-ovn-tempest-ipv6-only job - of course this job uses IPv6
instead of IPv4 but I don't think that's a big issue in this case.
The neutron-ovn-tempest-slow job was a multinode job; unfortunately
neutron-ovn-tempest-ipv6-only is a single node job and for now it
isn't possible to make an ipv6-only job multinode, so we will keep it
as a single node job and hopefully make it multinode once Zuul
provides the required data in the job's inventory,
* move the neutron-ovn-tempest-ovs-release and
neutron-ovn-tempest-ovs-release-ipv6-only jobs to the periodic queue -
I think that running those tests once per day should be enough.
Additionally this patch removes the definitions of the
neutron-ovs-tempest-slow and neutron-ovn-tempest-slow jobs, as those
jobs aren't used anywhere now.
Change-Id: I657881c319d425470277885545240d6a8b66a1f6
Add the following jobs to the experimental queue to test with
neutron-lib master:
- neutron-ovs-tempest-with-neutron-lib-master
- neutron-fullstack-with-uwsgi-with-neutron-lib-master
- neutron-functional-with-uwsgi-with-neutron-lib-master
Change-Id: I12c2381eef365f1249a3779685112cb682d752ee
Add item to prerelease checklist to check API extension list in devstack
and link to QA checklist.
Change-Id: I5ff1c6e873b325f081e2380b4a2bd088ef427c29
This patch implements support for CRUD operations for QoS minimum
packet rate, for example:
DELETE /qos/policies/$POLICY_ID/minimum_packet_rate_rules/$RULE_ID
Neither Placement nor dataplane enforcement is implemented yet.
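For context, the full CRUD surface sketched from the DELETE example
above (the other verbs and paths are an assumption following the usual
Neutron QoS rule URL pattern; request bodies are omitted):

```
POST   /qos/policies/$POLICY_ID/minimum_packet_rate_rules
GET    /qos/policies/$POLICY_ID/minimum_packet_rate_rules
GET    /qos/policies/$POLICY_ID/minimum_packet_rate_rules/$RULE_ID
PUT    /qos/policies/$POLICY_ID/minimum_packet_rate_rules/$RULE_ID
DELETE /qos/policies/$POLICY_ID/minimum_packet_rate_rules/$RULE_ID
```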
Partial-Bug: #1922237
See-Also: https://review.opendev.org/785236
Change-Id: Ie994bdab62bab33737f25287e568519c782dea9a
As I will not be maintaining the ovn-octavia-provider, I
am removing my name from the list. Also, since I have
not been as active in L3 recently, update that as well.
Change-Id: Ie883044f3bedc09ff19c58ce90ab9fdc09b92e29
The quota driver ``ConfDriver`` was deprecated in the Liberty release.
``NullQuotaDriver`` is created for testing, although it could be used
in production if no quota enforcement is needed. However, because
the Quota engine is not pluggable (it is an extension that is always
loaded), it could be interesting to make it pluggable like any other
plugin.
This patch also creates a Quota engine driver API class that should be
used by any Quota engine driver. Currently it is used by the three
in-tree drivers implemented: ``NullQuotaDriver``, ``DbQuotaDriver``
and ``DbQuotaNoLockDriver``.
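As a hedged sketch of what such a driver API class pins down (method
names and signatures here are illustrative, not the real
``neutron.quota`` interface):

```python
# Illustrative quota engine driver API class; every driver implements
# the same surface, and the null driver simply enforces nothing.
import abc

class QuotaDriverAPI(abc.ABC):
    @abc.abstractmethod
    def get_tenant_quotas(self, context, resources, project_id):
        """Return the quota limits for one project."""

    @abc.abstractmethod
    def limit_check(self, context, project_id, resources, values):
        """Raise if the requested values exceed the project's quotas."""

    @abc.abstractmethod
    def make_reservation(self, context, project_id, resources, deltas):
        """Reserve ``deltas`` units of each resource for the project."""

class NullQuotaDriver(QuotaDriverAPI):
    """Driver that enforces nothing; useful for testing."""
    def get_tenant_quotas(self, context, resources, project_id):
        return {}
    def limit_check(self, context, project_id, resources, values):
        pass
    def make_reservation(self, context, project_id, resources, deltas):
        return None
```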
Change-Id: Ib4af80e18fac52b9f68f26c84a215415e63c2822
Closes-Bug: #1928211
This job is almost the same as tempest-slow-py3 since we switched
OVN to be the default backend in Neutron. The tempest-slow-py3 job
is used by many projects, so to avoid potentially breaking the gate
for other projects (like we did recently, see the related bug for
details) let's make this job voting and gating.
As it is already used in many different projects as a voting and
gating job, I don't think there is any issue with doing the same in
the Neutron gate.
Related-bug: #1936983
Related-bug: #1930402
Change-Id: I85d3830e9cc65162db846e4858871e1db547a04b
Since devstack set OVN as the default backend for Neutron, the
minimal local.conf [1] for ML2 OVS no longer works at all, so it
is not the right deployment for users who want to test ML2 OVS
related cases locally.
This patch adds a sample local.conf for ml2 ovs to install a small
all in one environment for Neutron testing.
Sample tested OS:
1. CentOS Stream 8
2. CentOS Linux 8
[1] https://docs.openstack.org/devstack/latest/#create-a-local-conf
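A hedged sketch of what such a minimal ML2/OVS local.conf can look
like (the variable values below are assumptions for illustration; the
committed sample file is authoritative):

```ini
[[local|localrc]]
# Switch Neutron back from the OVN default to ML2/OVS.
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_PLUGIN_TYPE_DRIVERS=local,flat,vlan,gre,vxlan
Q_ML2_TENANT_NETWORK_TYPE=vxlan
ENABLED_SERVICES+=,q-agt,q-dhcp,q-l3,q-meta
```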
Closes-Bug: #1934466
Change-Id: Ie7bac1d2819c332a94a0ff308a300638c17f1b1f
The URL contained double-encoded characters, using e.g. %2528 instead
of just %28 for a "("; fix this.
Change-Id: I5d0fa7da847b72015aa82e5ca3f75206f0f45b2b
Prior to this patch, the metadata agent was writing into the SB
database when a network had been provisioned with metadata
on a particular chassis.
Then, neutron-server would wait for that event to happen with
a 15s timeout before sending the vif-plugged event to Nova.
By removing this mechanism:
1) We'll save writes to OVN SB database which, in highly loaded
systems and at scale reduces significantly the load on ovsdb-server.
2) Ignoring healthchecks (which still require writes to the SB DB),
we can make the OVN metadata agent connect to slave instances when
using active-backup OVN databases, since writes are not needed.
3) There's a chance that the VM boots very fast and requests
metadata before the service is ready, but since the timeout was
15 seconds, we can safely rely on the cloud-init retries.
Signed-off-by: Daniel Alvarez Sanchez <dalvarez@redhat.com>
Change-Id: Ia6cd7a9a3b9662a9a8ce106e01a93c357c255956
This new quota driver, ``DbQuotaNoLockDriver``, does not create a lock
per (resource, project_id) but retrieves the instant (resource,
project_id) usage and the current (resource, project_id) reservations.
If the requested number of resources fit the available quota, a new
``Reservation`` register is created with the amount of units requested.
All those operations are done inside a DB transaction context. That
means the amount of resources and reservations is guaranteed inside
this transaction (depending on the DB backend isolation level defined)
and the new reservation created will not clash with other DB
transactions. That guarantees the number of resources and instant
reservations never exceeds the quota limits defined for this
(resource, project_id).
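The check-then-reserve pattern described above can be sketched in
simplified form, using sqlite3 in place of Neutron's DB layer (table
names, columns and the function are illustrative, not the real driver
API):

```python
# Hedged sketch of the no-lock reservation pattern: read instant usage
# and pending reservations, then insert the new reservation, all inside
# one transaction. How strong the final guarantee is depends on the
# backend's isolation level.
import sqlite3

class OverQuota(Exception):
    pass

def make_reservation(conn, project_id, resource, requested, limit):
    with conn:  # single transaction
        used = conn.execute(
            "SELECT COUNT(*) FROM resources "
            "WHERE project_id = ? AND resource = ?",
            (project_id, resource)).fetchone()[0]
        reserved = conn.execute(
            "SELECT COALESCE(SUM(amount), 0) FROM reservations "
            "WHERE project_id = ? AND resource = ?",
            (project_id, resource)).fetchone()[0]
        if used + reserved + requested > limit:
            raise OverQuota(
                f"{resource}: {used}+{reserved}+{requested} > {limit}")
        conn.execute(
            "INSERT INTO reservations (project_id, resource, amount) "
            "VALUES (?, ?, ?)",
            (project_id, resource, requested))
```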
NOTES:
- This change tries to be as unobtrusive as possible. The new driver
uses the same ``DbQuotaDriver`` database tables (except for
``QuotaUsage``) and the same Quota engine API, located in
``neutron.quota``. However, the Quota engine resources implement some
particular API actions, like "dirty", that are not used in the new
driver.
- The Pecan Quota enforcement hooks,
``neutron.pecan_wsgi.hooks.quota_enforcement``, execute actions like
"resync", "mark_resources_dirty" or "set_resources_dirty", that have
no meaning in the new driver.
- The isolation between the Quota engine and the Pecan hook, and the
driver itself is not clearly defined. A refactor of the Quota engine,
Quota service, Quota drivers and a common API between the driver and
the engine is needed.
- If ``DbQuotaDriver`` is deprecated, ``CountableResource`` and
``TrackedResource`` will be joined in a single class. This resource
class will have a count method (countable) or a hard dependency on a
database table (tracked resource). The only difference will be the
"count" method implementation.
Closes-Bug: #1926787
Change-Id: I4f98c6fcd781459fd7150aff426d19c7fdfa98c1
This patch updates the list of the Neutron stadium projects'
lieutenants and the list of the bugs' contact persons.
In detail, this patch:
- sets Rodolfo Alonso Hernandez as contact person for db and qos
related issues,
- adds Oleg Bondarev as contact person for "loadimpact" bugs,
- removes Matt Riedemann as contact person for "logging" bugs,
- sets the PTL/Drivers team as contact for "troubleshooting" related bugs.
It also sets Lajos Katona as "Testing" lieutenant.
Finally it removes networking-ovn and neutron-fwaas from the list of
stadium projects and removes the tag "fwaas" from the list of bug tags.
Neutron-fwaas was deprecated and has not been part of the stadium for a
long time, and networking-ovn is now one of the Neutron in-tree drivers.
Change-Id: Id4b928e077ed684c67d4b5054f12653d63f70788
Networking-midonet was deprecated in the Wallaby cycle due to lack of
maintainers. This patch removes it as a stadium project from the
official Neutron docs.
Change-Id: I5cd3da80d78d98ec4b2a49574efc0ec075e75959
These dashboards are checked in CI meeting and should always point to
latest and previous stable releases.
Change-Id: Ied2a0d9adbc33b9a41820667d961c1ce9fe72656
neutron-tempest-plugin single node scenario jobs were switched
to use L3HA routers by default in [1] and this patch reflects that
change in our docs.
[1] https://review.opendev.org/721805
Depends-On: https://review.opendev.org/721805
Change-Id: Ib3c6be059dc4f3e62a9c5d25588d44ed4a3df971
This is patchset 1 of 2 for OVN driver handling of security-group-logging.
It includes the design documentation for this feature.
Also changed a few lines in doc/source/admin/ovn/features.rst so that
the extensions are sorted in alphabetical order.
Related-Bug: 1914757
Partially-implements: https://review.opendev.org/c/openstack/neutron-specs/+/203509
Change-Id: I95d57613cef3b6892d3a0dd5705e2e8f3386a3a2