We don't need to run e.g. the functional, fullstack and scenario jobs
on patches which change only documentation, release notes or other
similar files.
This patch also removes the test-requirements.txt and requirements.txt
files from that list of unrelated files, as we want to run our CI jobs
when requirements are changed.
Change-Id: I7950de04c497b14d9225abe6584b7bb7d056f79c
To consume fewer infra resources per Neutron patch, let's move
co-gating jobs, like the Ironic, Openstacksdk and Tripleo based jobs,
to the periodic queue.
Those jobs will still run daily, so we should have pretty good
coverage if (and when) some Neutron change breaks those projects,
while using fewer infra resources for every patch proposed to Neutron.
Change-Id: I91c55c9151a11401bd3a7fbe94e378a027bc97df
In case an LRP port binding event fails, report the LRP ID to help
the debugging process.
Change-Id: Ib53d39b317aae36bbb746d260442b7a132355425
Related-Bug: #1912369
Fixes broken functional tests where NamespaceFixture is used and
the TestTimer raises TestTimerTimeout even if the namespace was cleaned
up in time.
The fix makes sure that the alarm is cancelled in __exit__ if there
was no alarm before TestTimer's __enter__ (i.e. if self._old_timer
is 0).
It also makes sure to reset the signal handler if the old one was
Handler.SIG_DFL (which is treated as false, so we need to check for
"is not None" instead).
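The restore logic can be sketched roughly like this (a minimal model of
the fix, not the actual TestTimer code; the class layout here is
illustrative):

```python
import signal


class TestTimer:
    """Minimal sketch of an alarm-based timer context manager.

    Illustrates the fix: cancel the alarm in __exit__ when there was
    no pre-existing alarm, and restore the old handler even when it
    is SIG_DFL (which is falsy, being the Handlers enum value 0).
    """

    def __init__(self, timeout):
        self._timeout = timeout
        self._old_handler = None
        self._old_timer = None

    def _handler(self, signum, frame):
        raise TimeoutError('TestTimer expired')

    def __enter__(self):
        # signal.signal() returns the previous handler;
        # signal.alarm() returns the seconds left on any previous
        # alarm, or 0 if none was pending.
        self._old_handler = signal.signal(signal.SIGALRM, self._handler)
        self._old_timer = signal.alarm(self._timeout)
        return self

    def __exit__(self, exc_type, exc, tb):
        if self._old_timer == 0:
            # No alarm was pending before __enter__: cancel ours
            # instead of re-arming a stale timeout.
            signal.alarm(0)
        else:
            signal.alarm(self._old_timer)
        # signal.SIG_DFL == 0, so "if self._old_handler:" would skip
        # the restore; compare against None instead.
        if self._old_handler is not None:
            signal.signal(signal.SIGALRM, self._old_handler)
        return False
```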
Closes-Bug: #1912320
Change-Id: I9efad8eb5fe6e794235280f8a9a026800513d969
To give some relief to Zuul resources, let's move non-voting
jobs which are broken and failing all the time to the experimental
queue for now.
We can bring them back to the check queue when we fix them.
Change-Id: I4c4dd7a17ea7cc483bb4b3ed7cff7ee91f917ed9
The neutron-tempest-dvr-ha-multinode-full job's computes seem to lack
resource_provider_bandwidths values; let's see if setting them solves
the issue with test_migrate_with_qos_min_bw_allocation and
test_resize_with_qos_min_bw_allocation.
Change-Id: Ie02f9d40716af1b3b4efc3e9340acab7f9af113e
The test_subnet_delete_race_condition fullstack test deleted the port
as a test step, but now the fixture destroy can do that.
Change-Id: If54d7349f629585106151794f52c16c3d5b3c26f
Related-bug: #1909234
With the new engine facade in place, a context session can execute
only a single transaction at a time. Nested transactions are no
longer supported.
This patch also changes the DB related examples in the "retries"
document so that they use the engine facade.
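As an illustrative model of the new semantics (plain Python, not the
actual oslo.db API), the writer context now effectively refuses to
nest:

```python
import contextlib


class FakeSession:
    """Toy session modelling the new engine facade semantics:
    one transaction at a time, nested transactions are rejected."""

    def __init__(self):
        self._in_txn = False
        self.committed = []

    @contextlib.contextmanager
    def writer(self):
        if self._in_txn:
            # Under the new engine facade, opening a transaction
            # inside another one is no longer supported.
            raise RuntimeError('nested transaction is not supported')
        self._in_txn = True
        pending = []
        try:
            yield pending
            # Commit only when the with-block exits cleanly.
            self.committed.extend(pending)
        finally:
            self._in_txn = False
```

All DB work for a request has to happen inside a single
`with session.writer() as txn:` block rather than opening a second
transaction midway.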
Partially-Implements blueprint: enginefacade-switch
Change-Id: I978a24c3b30c2bb0a3ea6aa25ebf91ca5c58b8c9
Co-authored-by: Slawek Kaplonski <skaplons@redhat.com>
Only minimum bandwidth rules are required, hence there is no need to
fetch all rules of the QoS policy.
There is also no need to get the QoS policy from the DB.
Partial-Bug: #1905726
Change-Id: Iad29cb34825adaa8c766d01b192a6bbe9992148b
"_TestWalkMigrations.test_walk_versions" is inherited in other classes
to check a specific DB revision upgrade. The goal is to create a
testing condition at the previous DB revision and then upgrade to the
related DB revision. If the upgrade is correct, the test should pass.
This patch reduces the number of Neutron DB upgrade operations to
just three:
- Upgrade the DB to the previous DB revision to check.
- Upgrade the DB to the related DB revision, after setting the testing
conditions.
- A final upgrade up to the latest DB revision.
In a testing deployment, this reduces the execution time from 60 seconds
to 35 seconds, without reducing the testing scope.
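The reduced walk can be sketched as follows (an illustrative helper,
not the actual test code; `upgrade` stands in for an alembic upgrade
call):

```python
def reduced_walk(revisions, target, upgrade):
    """Upgrade in just three steps instead of walking every revision.

    revisions: ordered list of DB revisions, oldest first.
    target: the revision whose upgrade is under test.
    upgrade: callable performing the equivalent of
             "alembic upgrade <revision>".
    """
    idx = revisions.index(target)
    previous = revisions[idx - 1]
    upgrade(previous)       # 1. go to the revision just before the target
    # ... create the testing conditions here ...
    upgrade(target)         # 2. upgrade to the revision under test
    upgrade(revisions[-1])  # 3. jump straight to the latest revision


# Record which revisions would be applied for a walk testing 'c'.
steps = []
reduced_walk(['a', 'b', 'c', 'd', 'e'], 'c', steps.append)
```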
Change-Id: Iffdc0373ed72aea1320155ea8bec93dce797f27c
Related-Bug: #1911153
Change ID Id5d8ac09a38c656619f88a6f87b8f384fe4c55a8 introduced a
call to get_subnets, using a filter to select all subnets on a
network. The filter syntax in get_subnets is to provide a
dictionary where the values are lists. The change
doesn't cause an exception because the OVO layer code handles
scalar values. This commit changes the filter syntax to be
consistent.
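The convention can be illustrated with a toy matcher (the real
filtering happens in the OVO/DB layer; names here are made up):

```python
def matches(obj, filters):
    """Toy version of the dict-of-lists filter convention:
    every filter key maps to a LIST of acceptable values."""
    return all(obj.get(key) in values for key, values in filters.items())


subnets = [
    {'id': 'sub-1', 'network_id': 'net-1'},
    {'id': 'sub-2', 'network_id': 'net-2'},
]
# Consistent syntax: values are lists, even for a single network.
selected = [s for s in subnets if matches(s, {'network_id': ['net-1']})]
```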
Change-Id: Ifb9df94128b7069e78e193bc289be17e15968167
When the process "neutron-keepalived-state-change" is started, wait for
the initialization message "Initial status of router", printed in the
logs, before starting any other operation.
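The waiting step can be sketched as a simple poll over the log file
(illustrative only; the marker string comes from this commit message,
the helper name is made up):

```python
import time


def wait_for_marker(log_path, marker, timeout=10, interval=0.1):
    """Poll a log file until `marker` appears, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with open(log_path) as log_file:
                if marker in log_file.read():
                    return
        except FileNotFoundError:
            pass  # the process may not have created the log yet
        time.sleep(interval)
    raise TimeoutError('marker %r not found in %s' % (marker, log_path))
```

With such a helper, the caller would block on
`wait_for_marker(log, 'Initial status of router')` before issuing any
other operation against the process.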
Change-Id: Ifff470c00eae0be1f4a5a882d3b03fd5ccae9d8e
Closes-Bug: #1911925
In OVO PortForwarding, the synthetic fields 'floating_ip_address'
and 'router_id' are retrieved from the floating IP related to this
port forwarding.
PortForwarding contains, in the db_obj, the floating IP DB object too.
Instead of retrieving the OVO FloatingIP for each field, the db_obj
is read directly.
In a testing environment with 300 port forwarding records per
floating IP, the retrieval time for a list query goes from 35 seconds
to less than one second.
$ openstack floating ip port forwarding list $fip
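The optimization pattern can be sketched with toy classes (the real
code lives in the PortForwarding OVO; all names here are
illustrative):

```python
import types


class FakeFloatingIPDB:
    """Stand-in for the floating IP DB row already loaded in db_obj."""

    def __init__(self, floating_ip_address, router_id):
        self.floating_ip_address = floating_ip_address
        self.router_id = router_id


class FakePortForwarding:
    def __init__(self, db_obj):
        self.db_obj = db_obj  # carries the related floating IP row

    @property
    def floating_ip_address(self):
        # Read from the already-loaded DB row instead of issuing a
        # separate OVO FloatingIP query per synthetic field.
        return self.db_obj.floating_ip.floating_ip_address

    @property
    def router_id(self):
        return self.db_obj.floating_ip.router_id


fip_row = FakeFloatingIPDB('10.0.0.5', 'router-1')
pf = FakePortForwarding(types.SimpleNamespace(floating_ip=fip_row))
```

Each property access is now a plain attribute read instead of a DB
round trip, which is what turns the O(n) list query cost around.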
Change-Id: Ib2361fe4353ca571363e9a363e08537a3402513f
Closes-Bug: #1911462
When a VM fixture is destroyed, the port can now be deleted instead
of unbound. That will update the ML2 plugin cache. When the port
is actually deleted from the system, the ML2 agent should detect it
and trigger the deletion process.
Change-Id: I0ecbaf6f6e0b5b6b538956f2b47e7f11ce21341b
Closes-Bug: #1909234
In the Neutron CI queues we were running the tempest-slow-py3 and
tempest-ipv6-only jobs, which are defined in the tempest repository
and run all tests, e.g. those related to Swift or Cinder.
This patch defines new jobs, "neutron-tempest-slow-py3" and
"neutron-tempest-ipv6-only", which inherit from the tempest jobs but
disable the Cinder and Swift services.
Additionally, the "neutron-tempest-ipv6-only" job now runs only the
"integrated-networking" tox_envlist.
Change-Id: Icd376c144e1993ca84890c76743fda4196662d9b
The goal of this patch is to avoid the connection disruption during
the live-migration using OVS. Since [1], when a port is migrated,
both the source and the destination hosts are added to the profile
binding information. Initially, the source host binding is activated
and the destination is deactivated.
When the port was created in the destination host (by Nova), it was
not configured because the binding was not activated. The binding
(that means, all the OpenFlow rules) was done when Nova sent the
port activation. That happened when the VM was already running in
the destination host. If the OVS agent was under load, the port was
bound seconds after the port activation.
Instead, this patch enables the OpenFlow rule creation in the
destination host when the port is created.
Another problem are the "neutron-vif-plugged" events sent by Neutron
to Nova to inform about the port binding. Nova is expecting one single
event informing about the destination port binding. At this moment,
Nova considers the port is bound and ready to transmit data.
Several triggers were unexpectedly firing this event:
- When the port binding was updated, the port was set to down and
then up again, forcing this event.
- When the port binding was updated, the binding was first deleted
and then updated with the new information. That triggered the source
host to set the port down and then up again, sending the event.
This patch removes those events, sending the "neutron-vif-plugged"
event only when the port is bound to the destination host (and as
commented before, this is happening now regardless of the binding
activation status).
This feature depends on [2]. If this Nova patch is not in place, Nova
will never plug the port in the destination host and Neutron won't be
able to send the vif-plugged event to Nova to finish the
live-migration process.
Because Neutron cannot query Nova to know if this patch is in
place, a new temporary configuration option has been created to enable
this feature. The default value is "False", which means Neutron
will behave as before.
[1] https://bugs.launchpad.net/neutron/+bug/1580880
[2] https://review.opendev.org/c/openstack/nova/+/767368
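Deployments that do carry the Nova patch [2] can then opt in via
neutron.conf; the option name and section shown here are illustrative
only, check the release notes for the exact spelling:

```ini
[nova]
# Hypothetical option name for illustration. Enable the new
# live-migration port wiring only when the Nova patch [2] is
# deployed; the default is False (previous behaviour).
live_migration_events = True
```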
Closes-Bug: #1901707
Change-Id: Iee323943ac66e566e5a5e92de1861832e86fc7fc
Do not report the OVS agent state when OVS is dead, and let
neutron-server mark the service as down, so the cluster admin can
determine that there is a problem with the given OVS agent.
Change-Id: Ib4b06c7877a7343f4204d4f4f5863931717ff507
Closes-Bug: #1910946
As we are in the middle of the migration to the new secure RBAC
policies and we have a lot of deprecated default rules, our logs in
e.g. functional tests contain a lot of messages about deprecated
rules.
So let's suppress those deprecation warnings in the tests to make our
test outputs smaller.
Related blueprint: secure-rbac-roles
Change-Id: Iab3966bad81b469eccf1050f0e0e48b9e2573750
In case a QoS rule is not applied correctly, the list of OVS QoS and
queue records will be printed in the log.
Change-Id: Ie77f70652af54d1d6bccb3c93a70f39e37f840a1
Related-Bug: #1909234
This adds support for deleting OVN controller/metadata agents.
Behavior is undefined if the agents are still actually up as per
the Agent API docs.
As part of this, it is necessary to be able to tell all workers
that the agent is gone. This can't be done by deleting the
Chassis, because ovn-controller deletes the Chassis if it is
stopped gracefully and we need to still display those agents as
down until ovn-controller is restarted. This also means we can't
write a value to the Chassis marking the agent as 'deleted'
because the Chassis may not be there. And of course you can't
use the cache because then other workers won't see that the
agent is deleted.
Due to the hash ring implementation, we also cannot naively just
send some pre-defined event that all workers can listen for to
update their status of the agent. Only one worker would process
the event. So we need some kind of GLOBAL event type that is
processed by all workers.
When the hash ring implementation was done, the agent API
implementation was redesigned to work around moving from having
a single OVN Worker to having distributed events. That
implementation relied on marking the agents 'alive' in the
OVSDB. With large numbers of Chassis entries, this induces
significant load, with 2 DB writes per Chassis per
cfg.CONF.agent_down_time / 2 seconds (37 by default).
This patch reverts that change and goes back to using events
to store agent information in the cache, but adds support for
"GLOBAL" events that are run on each worker that uses a particular
connection.
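The dispatch difference can be modelled with a toy hash ring
(illustrative only; the real implementation lives in the OVN driver's
event and hash-ring code):

```python
import hashlib


class ToyDispatcher:
    """Model of hash-ring vs GLOBAL event dispatch across workers."""

    def __init__(self, workers):
        self.workers = workers  # worker name -> list of handled events

    def dispatch(self, event, is_global=False):
        if is_global:
            # GLOBAL events run on every worker sharing the
            # connection, so each one can update its local agent cache.
            targets = sorted(self.workers)
        else:
            # Normal events are hashed to exactly one worker.
            digest = int(hashlib.sha1(event.encode()).hexdigest(), 16)
            names = sorted(self.workers)
            targets = [names[digest % len(names)]]
        for name in targets:
            self.workers[name].append(event)


dispatcher = ToyDispatcher({'w1': [], 'w2': [], 'w3': []})
dispatcher.dispatch('chassis-updated')                # one worker only
dispatcher.dispatch('agent-deleted', is_global=True)  # every worker
```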
Change-Id: I4581848ad3e176fa576f80a752f2f062c974c2d1