The current rootwrap.conf file is outdated and doesn't include some
parameters. This change updates the content to make it consistent with
the latest example file in oslo.rootwrap.
Change-Id: I0b40b0bea4bbcbc78490dbfa3877cdd3a26ac298
UPPER_CONSTRAINTS_FILE is the old name and is deprecated.
This allows using the upper-constraints file in a more readable
way instead of UPPER_CONSTRAINTS_FILE=<lower-constraints file>.
Change-Id: Iae53ccf077796eb0d2518b41d0e262d564e7af10
This patch only adds one extra debug log to record the current and new
port statuses in the method which updates that status in the DB.
This may be useful e.g. to understand why notifications about port
changes aren't sent to Nova in some cases (like unshelving an
instance, see the related bug).
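A minimal sketch of the kind of debug log described; the function and
field names here are illustrative, not Neutron's actual code.

```python
# Hypothetical sketch of the added debug log; names are illustrative.
import logging

LOG = logging.getLogger(__name__)

def update_port_status(port, new_status):
    # Record both statuses before updating, which helps trace why a
    # notification to Nova was (or was not) sent afterwards.
    LOG.debug("Port %s status update: current=%s, new=%s",
              port["id"], port["status"], new_status)
    port["status"] = new_status
    return port
```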
Related-Bug: #1953478
Change-Id: I4c6fd5b0b33bf764c0b182f169173453ea7a4efc
This change also includes the missing OvS DPDK nodes as part of the
ovn-controllers group in the hosts_for_migration file.
Change-Id: Ic0727ffdbd1f60574b6d5397177a58172cbd60f0
In the ML2 plugin's create_port_bulk method, we iterate over the
list of ports to be created and do everything for all ports in a
single DB transaction (which makes total sense as this is a bulk
request).
But one of the things done during that huge transaction was the
allocation of IP addresses for all ports. That action is prone to
race conditions and can fail often, especially when there are not
many IP addresses available in the subnet(s) for the ports.
In case of an error while allocating an IP address for even one port
from the whole bulk request, the whole create_port_bulk method was
retried, so the allocations (and everything else) for all ports were
reverted and started from scratch. That takes a lot of time, so some
requests may be processed for a very long time, e.g. 2-3 minutes in
my tests.
To reproduce that issue I wrote a simple script which created a
network with a /24 subnet and then sent 24 requests to create 10
ports in bulk in each request. That was in total 240 ports created
in that subnet.
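The reproduction described above can be sketched as follows; the
payload shape matches Neutron's bulk port-create API (POST
/v2.0/ports), while "NET_ID" is a placeholder and the actual HTTP
call is omitted.

```python
# Sketch of the reproduction: 24 bulk requests of 10 ports each
# against one /24 subnet, 240 ports in total.

def bulk_port_body(network_id, count):
    # Neutron accepts {"ports": [...]} for bulk creation.
    return {"ports": [{"network_id": network_id, "admin_state_up": True}
                      for _ in range(count)]}

bodies = [bulk_port_body("NET_ID", 10) for _ in range(24)]
total = sum(len(body["ports"]) for body in bodies)
print(total)  # prints 240
```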
I measured the time of creation of all those ports on the current
master branch (without this patch) and with the patch. The results
are as below:
+-----+---------------+------------+---------------------------+
| Run | Master branch | This patch | Simulate bulk by creation |
| | [mm:ss] | [mm:ss] | of 10 ports one by one |
+-----+---------------+------------+---------------------------+
| 1 | 01:37 | 01:02 | 00:57 |
| 2 | 02:06 | 00:40 | 01:03 |
| 3 | 02:08 | 00:41 | 00:59 |
| 4 | 02:14 | 00:45 | 00:55 |
| 5 | 01:58 | 00:45 | 00:57 |
| 6 | 02:37 | 00:53 | 01:05 |
| 7 | 01:59 | 00:42 | 00:58 |
| 8 | 02:01 | 00:41 | 00:57 |
| 9 | 02:39 | 00:42 | 00:55 |
| 10 | 01:59 | 00:41 | 00:56 |
+-----+---------------+------------+---------------------------+
| AVG | 02:07         | 00:45      | 00:58                     |
+-----+---------------+------------+---------------------------+
Closes-Bug: #1954763
Change-Id: I8877c658446fed155130add6f1c69f2772113c27
Accept OVS system-id strings that are not UUID formatted. The OVN
metadata agent will generate a unique UUID from the OVS system-id.
If this string is a UUID, this value will be used. If not, the OVN
metadata agent will generate a UUID based on the provided string.
This patch amends [1].
[1]https://review.opendev.org/c/openstack/neutron/+/819634
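A minimal sketch of the described behaviour, not the agent's actual
code: reuse the system-id when it already is a UUID, otherwise derive
a stable UUID from the string. The uuid5 namespace below is an
assumption.

```python
import uuid

def chassis_uuid(system_id):
    try:
        # Already a UUID: use it as-is (normalized).
        return str(uuid.UUID(system_id))
    except ValueError:
        # Not a UUID: deterministically derive one from the string.
        # NAMESPACE_DNS is an assumption for this sketch.
        return str(uuid.uuid5(uuid.NAMESPACE_DNS, system_id))
```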
Closes-Bug: #1952550
Change-Id: I42a8a767a6ef9454419b26f80339394759644faf
PTR DNS requests support is available since
ovn-21.06.0 [1].
This patch adds/removes the required entries for each IP
address of the port in the NB DNS table. For example, for IP
addresses "10.0.0.4 fd5a:cdd8:f382:0:f816:3eff:fe5b:bb6"
and FQDN "vm1.ovn.test." the following entries are added:
- 4.0.0.10.in-addr.arpa="vm1.ovn.test"
- 6.b.b.0.b.5.e.f.f.f.e.3.6.1.8.f.0.0.0.0.2.8.3.f.8.d.d.c.a.5.d.f.ip6.arpa="vm1.ovn.test"
[1] https://github.com/ovn-org/ovn/commit/82a4e44
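The PTR record names above can be derived with the stdlib "ipaddress"
module; this sketch only shows the name construction, not the OVN NB
DNS table update, and the helper name is illustrative.

```python
import ipaddress

def ptr_records(fqdn, addresses):
    # reverse_pointer yields e.g. "4.0.0.10.in-addr.arpa" for IPv4
    # and the nibble-reversed "...ip6.arpa" form for IPv6.
    return {ipaddress.ip_address(addr).reverse_pointer: fqdn.rstrip(".")
            for addr in addresses}

recs = ptr_records("vm1.ovn.test.",
                   ["10.0.0.4", "fd5a:cdd8:f382:0:f816:3eff:fe5b:bb6"])
```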
Closes-Bug: #1951872
Change-Id: If03a2ad2475cdb390c4388d6869cd0b2a0555eb7
In the subnet update API call Neutron checks if gateway_ip was sent
to be updated and if so, it checks if the old gateway_ip isn't
already allocated to some router port. If it's already used, Neutron
returns a 409 response.
This is valid behaviour but sometimes some automation tools may send
a subnet update request and pass the same gateway IP as is already
used by the subnet. In such a case, as gateway_ip is actually not
changed, Neutron should not raise an exception in that validation.
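An illustrative sketch of the intended check; the function and field
names are hypothetical, not Neutron's actual code. The 409 path
should only be reachable when gateway_ip really changes.

```python
def gateway_ip_changed(subnet, updated_fields):
    # gateway_ip not part of the update request: nothing to validate.
    if "gateway_ip" not in updated_fields:
        return False
    # Same value as currently set: treat as a no-op, skip validation.
    return updated_fields["gateway_ip"] != subnet["gateway_ip"]
```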
Closes-Bug: #1955121
Change-Id: Iba90b44331fdc63273fd3d19c583a24b5295c0ac
We were using devstack-tobiko-faults-centos in our periodic queue
but this job is broken and it's better to use the
'devstack-tobiko-neutron' job, which is dedicated to testing neutron
code.
It is also a single node job, so it will be faster and easier on the
infra resources :)
Change-Id: Ic3009bd98e3cf1e7a88d2cef02c178044e5478d1
Currently, in order to bring a Neutron port to active, Neutron waits
for the Logical Switch Port to become up in OVN. That means
ovn-controller changes the up status in the SB DB and northd
propagates it up to the NB DB.
We do not need to wait and can save some time if we use the newly
added SB DB up column instead, when possible:
4d3cb42b07
Change-Id: Ib071889271f4e4d6acd83b219bf908a9ae80ce5c
This test was intended to verify that the metadata was correct for a
port created with one subnet and updated to include a second. It
should check that the fixed-ips from both subnets exist in the
metadata, but because the addresses were hardcoded, the generated
address could periodically be different and the match would fail
spuriously. This patch fixes that by dynamically detecting which
addresses should exist in the metadata as CIDRs.
Change-Id: I58776aca3bce57f9b811877da8ae4ee199ee7c59
In the HA router's keepalived state change monitor tests, it was
expected that the enqueue_state_change method would be called 3 or 4
times. But after some changes made in the keepalived_state_change
monitor some time ago, it may now be called just 2 or 3 times:
- 2 when the initial status is "primary" and there is just a
  transition to "backup",
- 3 when the initial status is "backup", then it transitions to
  "primary" and finally to "backup" again.
To reflect those 2 possibilities, the test was changed to expect
2 or 3 calls and to check only that the last 2 are always the
transition to "primary" and then to "backup".
Additionally this patch adds some extra logging in that test so it
will be easier to check what is going on in it.
Closes-Bug: #1954751
Change-Id: Ib5de7e65839f52c35c43801969e3f0c16dead5bb
It was added temporarily to keep compatibility with 3rd party code
which uses the Neutron interface driver, but it was said that since
the "W" release that old, deprecated way of calling the "plug_new"
method would be removed. Now we are far past the "W" release so it's
time to do some cleaning there.
Related-Bug: #1879307
Change-Id: I03214079f752c7efe6611f2e928f32652fe681bc
The DHCP server should not announce any DNS resolver at all on the
subnet if "0.0.0.0" (IPv4) or "::" (IPv6) are configured as DNS
name servers in any subnet.
https://docs.openstack.org/neutron/latest/admin/config-dns-res.html
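A sketch of the described behaviour; the helper name is illustrative,
not the DHCP agent's actual code.

```python
def effective_dns_servers(dns_nameservers):
    # If "0.0.0.0" (IPv4) or "::" (IPv6) appears among the subnet's
    # DNS name servers, announce no resolver at all.
    if "0.0.0.0" in dns_nameservers or "::" in dns_nameservers:
        return []
    return list(dns_nameservers)
```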
Closes-Bug: #1950686
Change-Id: I78dd012764c7bd7a29aeb8d97c00b627d7723aeb
When there is an attempt to delete a network with ports,
a general error message is displayed that one or more
ports are in use on the network. This patch proposes
to also return the ports which are in use as part of
the message.
Also modify the test_delete_network_if_port_exists unit
test to check for the port ID and network ID in the error
message.
Also bump the required version of neutron-lib to 2.18.0
as that's needed for the custom message in the NetworkInUse
exception.
Depends-On: https://review.opendev.org/c/openstack/neutron-lib/+/821806
Closes-Bug: #1953716
Change-Id: Ib0b40402746c6a487a226b238907142384608d3c
Since [1], included in Wallaby, when a physical network is specified
in "network_vlan_ranges" without a VLAN network range, the
configuration parser assigns to this physical network the whole
valid VLAN ID set [1, 4094]. This is needed for the VLAN type driver
to create the VLAN ID allocation registers.
[1]https://review.opendev.org/c/openstack/neutron-lib/+/779515
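For illustration, a hypothetical ml2_conf.ini fragment showing both
forms (the section and option names are real; the physnet names are
placeholders):

```ini
[ml2_type_vlan]
# "physnet1" has no explicit range, so since Wallaby the parser
# assigns it the full valid VLAN ID set [1, 4094]; "physnet2" keeps
# its explicit range 100-200.
network_vlan_ranges = physnet1,physnet2:100:200
```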
Related-Bug: #1954384
Change-Id: I38b7f34c001fa7a3481574f2961edf4f09ca9a81
In "DbQuotaNoLockDriver", when a new reservation is being made,
first the expired reservations are removed. That guarantees the
freshness of the existing reservations.
In systems with high concurrency of operations, the
"DbQuotaNoLockDriver.make_reservation" method will be called in
parallel. The expired reservations removal implies a deletion
on the "reservation" table that could be executed by several
workers at the same time (in the same controller or not). That
could lead to a "DBDeadlock" exception if multiple workers want
to delete the same registers.
In case an API worker receives this exception, it should continue,
as the expired reservations have been deleted by another worker. It
should not retry this operation.
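A minimal sketch of that handling. In Neutron the exception is
oslo_db.exception.DBDeadlock; a stand-in class is used here to keep
the example self-contained, and the function name is illustrative.

```python
class DBDeadlock(Exception):
    """Stand-in for oslo_db.exception.DBDeadlock."""

def remove_expired_reservations(delete_expired):
    try:
        return delete_expired()
    except DBDeadlock:
        # Another worker already deleted the expired registers;
        # continue without retrying the cleanup.
        return 0
```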
If the reservations are not deleted, the quota engine will filter
out those expired reservations when counting the current number of
reservations [1][2][3]. That means that even if in a particular
request the expired reservations are not deleted, they won't count
towards the resource quota calculation.
The default reservation expiration timeout is set to 120 seconds
(as it should have been initially set), which has been the default
expiration delta for a reservation since 2015.
[1]e99d9a9d06/neutron/quota/resource.py (L340)
[2]e99d9a9d06/neutron/db/quota/api.py (L226)
[3]e99d9a9d06/neutron/objects/quota.py (L100-L101)
Closes-Bug: #1954662
Change-Id: I8af6565d2537db7f0df2e8e567ea046a0a6e003a
Added a wait event in the "TestAgentMonitor" setUp method, in order
to wait for the Chassis register creation event. This patch also adds
an active wait to check the number of "AgentCache" agents present in
the local cache before starting the tests.
Closes-Bug: #1952508
Change-Id: I75c759ce94e027790b18411617e08ae5bb46bcef
During the OVS to OVN migration, the port bindings "vif_details"
dictionary should keep the "connectivity" key ("l2" in both mech
drivers).
Closes-Bug: #1947366
Change-Id: Ia9b02847db5c2e0e3da9386cc6cd68dcca55e439