The neutron.common.rpc module has been in neutron-lib for a while now,
and neutron is already shimmed to use the neutron-lib version.
This patch removes neutron.common.rpc and switches the code over to use
neutron-lib's implementation where needed.
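For illustration, the switch at a call site is just an import change
(the n_rpc alias is a common convention here, not mandated):

    # Before (removed by this patch):
    from neutron.common import rpc as n_rpc

    # After: neutron-lib's implementation
    from neutron_lib import rpc as n_rpc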
NeutronLibImpact
Change-Id: I733f07a8c4a2af071b3467bd710290eee11a4f4c
During an l3 agent sync, it is important to know when a router is
processing an update, in order to identify when it applies changes
that can cause failovers.
Change-Id: Ie9ba2a8ffebfcc3bfb35f7a48f73a25352309b4e
Today the neutron common exceptions already live in neutron-lib and are
shimmed from neutron. This patch removes the neutron.common.exceptions
module and changes neutron's imports over to use their respective
neutron-lib exception module instead.
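For illustration, a typical import switch looks like this (the modules
shown are examples; each call site uses its respective module):

    # Before (removed by this patch):
    from neutron.common import exceptions as n_exc

    # After: the respective neutron-lib exception modules, e.g.:
    from neutron_lib import exceptions as lib_exc
    from neutron_lib.exceptions import l3 as l3_exc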
NeutronLibImpact
Change-Id: I9704f20eb21da85d2cf024d83338b3d94593671e
This test class now also inherits from
neutron.tests.functional.base.BaseLoggingTestCase, so logs from
those tests will be written to the log file.
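A minimal sketch of the change; TestSomeFeature and its other parent
class stand in for the actual test class touched here:

    from neutron.tests.functional import base

    class TestSomeFeature(base.BaseSudoTestCase,
                          base.BaseLoggingTestCase):
        """Logs emitted during these tests now end up in the log file."""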
Change-Id: I69516e7417a655e320c3482ae7e59a3ba8891290
The Dnsmasq driver used by the dhcp agent has a restart() method which
calls disable() and then enable() on the dnsmasq process.
As can be observed in functional tests, from time to time the new
dnsmasq process is started before the old process is really down.
That leads to an error where the IP address to which dnsmasq wants to
bind is already in use, so it fails to start.
This patch adds the possibility to call the disable() method with a
block flag set to True. In that case the driver will ensure in
disable() that the process is really no longer active.
This blocking disable() is now used in the restart() method.
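A sketch of the blocking behaviour, not the exact driver code (the
teardown helper is assumed; wait_until_true follows
neutron.common.utils):

    from neutron.common import utils as common_utils

    def disable(self, retain_port=False, block=False):
        self._stop_dnsmasq(retain_port)  # assumed existing teardown logic
        if block:
            # Do not return until the old process is really gone, so a
            # following enable() cannot race with it for the bind address.
            common_utils.wait_until_true(lambda: not self.active,
                                         timeout=60)

    def restart(self):
        self.disable(retain_port=True, block=True)
        self.enable()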
Change-Id: I419a451633badbc3d32edcee1945fca3e3d9f6be
Closes-Bug: #1811126
This patch switches over to the payload style callbacks for
BEFORE_DELETE events of SECURITY_GROUP resources.
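Illustrative publisher side, using the standard neutron-lib callback
API (context and sg here are placeholders):

    from neutron_lib.callbacks import events
    from neutron_lib.callbacks import registry
    from neutron_lib.callbacks import resources

    registry.publish(resources.SECURITY_GROUP, events.BEFORE_DELETE,
                     self,
                     payload=events.DBEventPayload(
                         context, states=(sg,), resource_id=sg['id']))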
NeutronLibImpact
Change-Id: I44ab8bfd92ece7501793979f8f45eae65f1e7a2c
Added the VLAN parent device name and index, and the VXLAN link
device name and index.
Change-Id: Ib44a63c0648a7b5b07b1021b10e8994002031ce8
Related-Bug: #1804274
This patch switches BEFORE_DELETE callback events for PORT resources
over to the payload style, with the args carried by a DBEventPayload
object.
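On the receiving side, a subscriber now reads its arguments from the
payload instead of loose kwargs; a sketch with placeholder names:

    from neutron_lib.callbacks import events
    from neutron_lib.callbacks import registry
    from neutron_lib.callbacks import resources

    @registry.receives(resources.PORT, [events.BEFORE_DELETE])
    def _validate_port_delete(resource, event, trigger, payload=None):
        context = payload.context
        port_id = payload.resource_id
        # ... pre-delete checks using context/port_id ...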
NeutronLibImpact
Change-Id: I8d8ff8f9ed7e2a1a6a66e3c3e6fc8e38cd9e29ca
The following methods are no longer used in CommonDbMixin:
- register_dict_extend_funcs
- _apply_filters_to_query
- _filter_non_model_columns
This patch removes them from neutron.
NeutronLibImpact
Change-Id: Ic7042cdcb29e95cc3a13292819d77abc3971fe8a
This patch switches tempest-slow to the new tempest-slow-py3 job.
Depends-On: https://review.openstack.org/633983
Change-Id: Ic4162f56294a4b6c9f3773964598e1fa163aad95
This patch switches tempest-multinode-full to the new
tempest-multinode-full-py3 job.
Depends-On: https://review.openstack.org/633982
Change-Id: Iaca3b2a58c44af4c7684caeaf241ba05ef67a70a
We switched from swapping the tenant_id in the context to explicitly
checking the db column. Switch back, and add a test that checks that
this rather odd behavior is not broken. At least, until we decide
to fix it as a bug.
Change-Id: I6af4d414b1972e14692a8356ef95db7323e3a09a
Currently the metadata proxy binds to default 0.0.0.0, which does not
add any advantage (metadata requests are not sent to random IP
addresses), and may allow access to cloud information from
third parties.
This changes the generated configuration to bind to METADATA_DEFAULT_IP
address instead.
This is not enabled in other metadata proxy configuration (in the L3
agent), as this would require net.ipv4.ip_nonlocal_bind everywhere
(currently only enabled for DVR) or transparent mode in haproxy (which
requires net.ipv4.ip_nonlocal_bind anyway).
Changed set_ip_nonlocal_bind_for_namespace() to support setting the
value in both the given and root namespace correctly, since it was
only used from inside the neutron codebase according to codesearch.
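A sketch of the adjusted helper following the description above (the
body is an assumption; set_ip_nonlocal_bind mirrors the existing
ip_lib helper):

    def set_ip_nonlocal_bind_for_namespace(namespace, value,
                                           root_namespace=False):
        """Set ip_nonlocal_bind in a namespace, falling back to root."""
        failed = set_ip_nonlocal_bind(value, namespace,
                                      log_fail_as_error=False)
        if failed and root_namespace:
            # Kernels that don't expose the knob per-namespace only
            # accept it in the root namespace, so set it there instead.
            set_ip_nonlocal_bind(value, namespace=None)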
Change-Id: I388391cf697dade1a163d15ab568b33134f7b2d9
Co-Authored-By: Andrey Arapov <andrey.arapov@nixaid.com>
Closes-Bug: #1745618
QoS for port forwarding floating IPs should be limited under the
binding QoS policy. So this patch extends the floating IP list of the
l3-agent fip-qos agent extension with the port forwarding related IPs.
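A hypothetical sketch of the extension point (get_port_forwarding_fips
is an assumed helper, not neutron's API):

    def _get_fips_to_rate_limit(router_info):
        # Floating IPs directly bound to the router, plus the ones used
        # by port forwarding rules, all subject to the binding QoS policy.
        fips = list(router_info.get_floating_ips())
        fips += get_port_forwarding_fips(router_info)  # assumed helper
        return fips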
Change-Id: Iddabfabafc0803edd1e4ac0893dc188f1907234a
Closes-Bug: #1796925
As a part of the python 3 community goal, this converts the functional
tests to run with python3 by default, and in Zuul.
As discussed at the Stein PTG in Denver, unit and functional tests will
still run on both versions, so this adds a python2 job for functional
tests.
This patch also suppresses the logging level of some external
libraries to avoid issues with subunit.parser and python 3. For
details see the bug reported for Cinder [1].
[1] https://bugs.launchpad.net/cinder/+bug/1728640
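A sketch of that suppression (the module list and level chosen here
are assumptions):

    import logging

    for noisy in ('amqp', 'amqplib', 'kombu', 'oslo.messaging'):
        logging.getLogger(noisy).setLevel(logging.WARNING)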
Co-Authored-By: Slawek Kaplonski <skaplons@redhat.com>
Change-Id: I8958d0b5b9147ffd1ef2d1cef5dcbf79c8be5cd4
IP allocation was initially deferred due to lack of binding
information. On a port update with both `mac_address` and
`binding_host_id` in the request, `fixed_ips: []` was appended to the
new_port data. This caused the check for fixed_ips_requested to
return True, which in turn caused deferred_ip_allocation to evaluate
to False.
Only set the new_port default fixed_ips to original_ips if
the original port had fixed_ips.
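A sketch of the guard as described (names follow the commit text; the
surrounding update-port flow is elided):

    original_ips = original_port.get('fixed_ips')
    if 'fixed_ips' not in new_port and original_ips:
        # Only default to the original IPs when the original port
        # actually had some; otherwise leave the field unset so the
        # request still looks like "no fixed_ips requested" and
        # allocation stays deferred.
        new_port['fixed_ips'] = original_ips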
Closes-Bug: #1811905
Change-Id: If98a82f8432b09a29f9d0cc6627e9649b43bc4a1
Currently, the dhcp provisioning of ports is the crucial bottleneck
when booting multiple VMs concurrently.
The root cause is that ports belonging to the same network are
processed one by one by a dhcp agent, and a 'Provisioning complete'
port still blocks the processing of other ports in other dhcp agents.
This patch aims to optimize the strategy for dispatching port casts
to agents in order to improve the provisioning process.
On the server side, messages are classified into multiple levels. In
particular, the port_update_end and port_create_end messages are split
into two levels: the high-level message is cast to one agent only,
while the low-level message is cast to all agents. On the agent side
these messages are put into the `resource_processing_queue`; with that
queue, `_net_lock` can be removed and the messages are processed in
order of priority.
Additionally, `resource_processing_queue` was adapted for this:
`_queue` in `ExclusiveResourceProcessor` was changed from a list to a
PriorityQueue, so that all messages cached in
`ExclusiveResourceProcessor` can be sorted by priority.
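A minimal sketch (not neutron's exact code) of the queue change:

    from queue import PriorityQueue

    class ExclusiveResourceProcessor(object):
        def __init__(self):
            # Was a plain list; PriorityQueue keeps cached updates ordered.
            self._queue = PriorityQueue()

        def queue_update(self, update):
            # `update` is assumed to sort by (priority, timestamp), e.g.
            # via a __lt__ definition on the update class.
            self._queue.put(update)

        def updates(self):
            # Drain the cached updates in priority order.
            while not self._queue.empty():
                yield self._queue.get()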
Related-Bug: #1760047
Change-Id: I255caa0571c42fb012fe882259ef181070beccef