It would be helpful if "Timed out waiting for <service>" log messages at least
specified which `reply_q` was being waited on.
Example without the reply_q:
```
12228 2020-09-14 14:56:37.187 7 WARNING nova.conductor.api
[req-1e081db6-808b-4af1-afc1-b87db7839394 - - - - -] Timed out waiting for
nova-conductor. Is it running? Or did this service start before
nova-conductor? Reattempting establishment of nova-conductor connection...:
oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply to
message ID 1640e7ef6f314451ba9a75d9ff6136ad
```
Example after adding the reply_q:
```
12228 2020-09-14 14:56:37.187 7 WARNING nova.conductor.api
[req-1e081db6-808b-4af1-afc1-b87db7839394 - - - - -] Timed out waiting for
nova-conductor. Is it running? Or did this service start before
nova-conductor? Reattempting establishment of nova-conductor connection...:
oslo_messaging.exceptions.MessagingTimeout: Timed out waiting for a reply
(reply_2882766a63b540dabaf7d019cf0c0cda)
to message ID 1640e7ef6f314451ba9a75d9ff6136ad
```
This would make it easier to debug and observe whether something went
wrong with a reply queue.
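A minimal sketch of the idea, not the actual oslo.messaging code: the reply
queue name is surfaced in the MessagingTimeout message, assuming it is
available as `reply_q` at the point the timeout is raised (the helper name and
parameters are made up for illustration):
```python
import oslo_messaging


def raise_reply_timeout(reply_q, msg_id):
    # Name the reply queue in the timeout so operators can correlate the
    # failure with a concrete queue on the broker.
    raise oslo_messaging.MessagingTimeout(
        'Timed out waiting for a reply (%s) to message ID %s'
        % (reply_q, msg_id))
```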
Change-Id: Ied2c881c71930dc631919113adc00112648f9d72
Closes-Bug: #1896925
The previous attempt did not update the version in the pre-commit config,
so the old version is still used by the pep8 target.
Change-Id: Idf8c7d99f7c6aeb0244d58e85524ba1f039195d8
We should have removed these when we removed the tox targets[1].
[1] 6e7a5725fa90e66c060a06a8ffe5e5454fd7a7b6
Change-Id: I4bdf987ea52479ef7790fc506158a57d8d060dc5
This function is no longer used since we removed the amqp1 tests when we
deprecated the amqp1 driver[1].
[1] 0f63c227f5425995ae8c61f1d40ec85e7728528a
Change-Id: I47fe04d6a39ed2b5f33b02fa6736d588d0383f5a
We now expect context objects to support returning a redacted copy of
themselves.
As a related cleanup, we entirely removed the practice of using
dictionaries to represent contexts in unit tests and the logging driver.
As part of developing this change, I discovered code in Glance (and
potentially other services) that explicitly passes {} in lieu of a
context when notifying, so we now properly handle dictionaries as
contexts.
To ensure the required method is available, require oslo.context 5.3.0 or
newer.
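A hedged sketch of the resulting behavior, assuming the method that motivates
the oslo.context 5.3.0 requirement is `redacted_copy()`; the helper name is
illustrative, not the exact source:
```python
def context_to_dict(ctxt):
    # Plain dicts (as Glance passes) are already serializable mappings.
    if isinstance(ctxt, dict):
        return ctxt
    # Otherwise pare the oslo.context RequestContext down to a redacted
    # copy before serializing (assumed oslo.context >= 5.3.0 API).
    return ctxt.redacted_copy().to_dict()
```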
Change-Id: I894f38cc83c98d3e8d48b59864c0c7c2d27e7dcd
This introduces "stream" queues for fanout so all components relying on
fanout can use the same stream, lowering the number of queues
needed and leveraging RabbitMQ's new "stream" queue type.
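For illustration, this is roughly what declaring a RabbitMQ stream queue looks
like with kombu; the queue/exchange names and broker URL are made up, and the
`x-queue-type` argument is the essential part:
```python
from kombu import Connection, Exchange, Queue

fanout = Exchange('demo_fanout', type='fanout')
stream_queue = Queue(
    'demo_fanout_stream',
    exchange=fanout,
    queue_arguments={'x-queue-type': 'stream'},  # RabbitMQ stream queue
)

with Connection('amqp://guest:guest@localhost:5672//') as conn:
    # Binding the queue to a channel and declaring it creates the stream.
    stream_queue(conn.default_channel).declare()
```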
Closes-Bug: #2031497
Change-Id: I5056a19aada9143bcd80aaf064ced8cad441e6eb
Signed-off-by: Arnaud Morin <arnaud.morin@ovhcloud.com>
The latest release of qdrouterd on focal has changed where the
internal Python modules are installed. This patch updates the Python
path configuration for the tox tests.
Change-Id: Icb53ee17af01580d899f388f69be9560e23675e0
As per the current release's tested runtimes, we test Python
versions 3.8 through 3.11, so update the Python classifiers
in setup.cfg accordingly.
Change-Id: I303912894d12be87355f83a1a53be071db94cf84
These translation sections are not needed anymore, Babel can generate
translation files without them.
Change-Id: Ib60671941371aa22fbdeeb9d42fc619f60aa15e5
The current fake driver does not properly clean up the fake RPC exchange
between tests.
This means that if a test invokes code that makes an RPC request using
the fake driver without consuming the RPC message, another test may
receive that request and fail.
This issue was found while working on a Cinder patch and was
worked around there with Change-Id
I52ee4b345b0a4b262e330a9a89552cd216eafdbe.
This patch fixes the source of the problem by clearing the exchange
class dictionary in the FakeExchangeManager during the FakeDriver
cleanup.
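A hedged sketch of the shape of the fix; the class and attribute names are
assumptions modeled on the description above, not the exact oslo.messaging
source:
```python
class FakeExchangeManager:
    # The exchange table is shared state; if it is not emptied between
    # tests, an unconsumed RPC request leaks into the next test.
    _exchanges = {}


class FakeDriver:
    def __init__(self, exchange_manager):
        self._exchange_manager = exchange_manager

    def cleanup(self):
        # Empty the shared exchange table during driver cleanup.
        self._exchange_manager._exchanges.clear()
```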
Change-Id: If82c2175cf7242b80509d180cdf92323c0f4c43b
The purpose of this change is to introduce an optional mechanism to keep
queue names consistent across service restarts.
Oslo.messaging already re-uses the queues while running, but the
queues are created with a random name at startup.
This change proposes an option named use_queue_manager (defaulting to False,
so the behavior is unchanged) that can be set to True to switch to
consistent naming based on the hostname and process name.
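Illustrative only (not the actual implementation): a name built from the
hostname and process name stays the same across restarts, unlike a random
suffix:
```python
import os
import socket
import sys


def stable_queue_name(prefix: str) -> str:
    # e.g. "reply_q.compute-01.nova-compute" -- the same value on every
    # restart of the same process on the same host.
    hostname = socket.gethostname()
    processname = os.path.basename(sys.argv[0])
    return f'{prefix}.{hostname}.{processname}'
```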
Related-bug: #2031497
Signed-off-by: Arnaud Morin <arnaud.morin@ovhcloud.com>
Change-Id: I2acdef4e03164fdabcb50fb98a4ac14b1aefda00
Add a new flag rabbit_transient_quorum_queue to enable the use of quorum
for transient queues (reply_ and _fanout_).
This greatly helps OpenStack services avoid failing (and helps them
recover) when a RabbitMQ node has an issue.
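At the AMQP level the flag amounts to declaring those queues with the quorum
queue type, roughly as below; the queue name is taken from the log example
earlier, and the kombu usage is illustrative:
```python
from kombu import Queue

# Quorum queues must be durable and are requested via x-queue-type.
reply_queue = Queue(
    'reply_2882766a63b540dabaf7d019cf0c0cda',
    durable=True,
    queue_arguments={'x-queue-type': 'quorum'},
)
```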
Related-bug: #2031497
Signed-off-by: Arnaud Morin <arnaud.morin@ovhcloud.com>
Change-Id: Icee5ee6938ca7c9651f281fb835708fc88b8464f
When RabbitMQ is failing for a specific quorum queue, the only thing to
do is to delete the queue (as per the RabbitMQ documentation, see [1]).
So, to avoid the RPC service staying broken until an operator eventually
fixes it manually, catch any INTERNAL ERROR (code 541) and trigger
the deletion of the failed queues under those conditions.
On the next queue declare (triggered by various retries), the queue
will be created again and the service will recover by itself.
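A hedged sketch of that recovery path, assuming py-amqp surfaces the 541 reply
code as amqp.exceptions.InternalError; the helper and its argument are made
up:
```python
import amqp.exceptions


def declare_with_recovery(bound_queue):
    try:
        bound_queue.declare()
    except amqp.exceptions.InternalError:
        # The quorum queue is broken on the broker side; delete it so
        # the next declare attempt (from the usual retries) recreates
        # it and the service recovers on its own.
        bound_queue.delete()
        raise
```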
Closes-Bug: #2028384
Related-bug: #2031497
[1] https://www.rabbitmq.com/quorum-queues.html#availability
Signed-off-by: Arnaud Morin <arnaud.morin@ovhcloud.com>
Change-Id: Ib8dba833542973091a4e0bf23bb593aca89c5905
When an operator relies on RabbitMQ policies, there is no point in setting
the queue TTL in the config.
Moreover, using policies is much simpler, as you don't need to
delete and recreate the queues to apply a new parameter (see [1]).
So, allowing the transient queue TTL to be set to 0 will
allow the creation of the queue without the x-expires argument, and only
the policy will apply.
[1] https://www.rabbitmq.com/parameters.html#policies
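A sketch of the intended behavior, reusing the existing
rabbit_transient_queues_ttl option value; the helper name is made up:
```python
def transient_queue_arguments(rabbit_transient_queues_ttl: int) -> dict:
    args = {}
    if rabbit_transient_queues_ttl > 0:
        # x-expires is given in milliseconds; a TTL of 0 omits it
        # entirely so a server-side policy can take over.
        args['x-expires'] = rabbit_transient_queues_ttl * 1000
    return args
```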
Related-bug: #2031497
Signed-off-by: Arnaud Morin <arnaud.morin@ovhcloud.com>
Change-Id: I34bad0f6d8ace475c48839adc68a023dd0c380de
We encountered bug 2037312 in unit tests when attempting to get this
change rolled out. Heat apparently attempts to set is_admin using
policy logic if it is not passed in for a new context; this breaks because
the reconstructed context doesn't have all the information needed to
exercise the policy logic.
is_admin is just a bool and is not sensitive, so the easiest route forward
is to add it to the safe list.
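A hedged sketch of what adding is_admin to the safe list amounts to; the key
names listed here are illustrative, not the exact list in the source:
```python
_SAFE_CONTEXT_KEYS = (
    'request_id',
    'global_request_id',
    'is_admin',  # plain bool, not sensitive; lets Heat skip policy logic
)


def safe_context_fields(ctxt_dict: dict) -> dict:
    # Keep only the keys known to be safe to publish.
    return {key: value for key, value in ctxt_dict.items()
            if key in _SAFE_CONTEXT_KEYS}
```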
Closes-bug: 2037312
Change-Id: I78b08edfcb8115cddd7de9c6c788c0a57c8218a8
Add file to the reno documentation build to show release notes for
stable/2023.2.
Use pbr instruction to increment the minor version number
automatically so that master versions are higher than the versions on
stable/2023.2.
Sem-Ver: feature
Change-Id: I8e9c35ebe41e0283309d64db97a4d9ffebcf9626
Publishing a fully hydrated context object in a notification would give
someone with access to that notification the ability to impersonate the
original actor through inclusion of sensitive fields.
Now, instead, we pare down the context object to the bare minimum before
passing it for serialization in notification workflows.
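A hedged usage sketch, assuming the `redacted_copy()` API required from
oslo.context 5.3.0 (see above); the exact fields removed depend on
oslo.context:
```python
from oslo_context import context

ctxt = context.RequestContext(user_id='u1', project_id='p1',
                              auth_token='secret-token')

# Only the pared-down copy is handed to notification serialization, so
# credentials such as the auth token never reach the payload.
safe = ctxt.redacted_copy().to_dict()
```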
Related-bug: 2030976
Change-Id: Ic94323658c89df1c1ff32f511ca23502317d0f00
Kombu recommends running heartbeat_check every second, but we hold a lock
around the kombu connection, so to avoid holding that lock most of the
time doing nothing except waiting for the events to drain, we run
heartbeat_check and retrieve the server heartbeat packet only twice as
often as the minimum required for the heartbeat to work:
heartbeat_timeout / heartbeat_rate / 2.0
Because of this, we are not sending the heartbeat frames at the correct
intervals. E.g. if heartbeat_timeout=60 and heartbeat_rate=2, the AMQP
protocol expects a frame to be sent every 30 seconds.
With the current heartbeat_check implementation, heartbeat_check will be
called every:
heartbeat_timeout / heartbeat_rate / 2.0 = 60 / 2 / 2.0 = 15
which results in the following frame flow:
T+0 --> do nothing (60/2 > 0)
T+15 --> do nothing (60/2 > 15)
T+30 --> do nothing (60/2 > 30)
T+45 --> send a frame (60/2 < 45)
...
With heartbeat_rate=3, the heartbeat_check will be executed more often:
heartbeat_timeout / heartbeat_rate / 2.0 = 60 / 3 / 2.0 = 10
Frame flow:
T+0 --> do nothing (60/3 > 0)
T+10 --> do nothing (60/3 > 10)
T+20 --> do nothing (60/3 > 20)
T+30 --> send a frame (60/3 < 30)
...
Now we send the heartbeat frames at the correct intervals.
Closes-bug: #2008734
Signed-off-by: Arnaud Morin <arnaud.morin@ovhcloud.com>
Change-Id: Ie646d254faf5e45ba46948212f4c9baf1ba7a1a8
Previously the two values were the same; this caused us
to always exceed the timeout limit ACK_REQUEUE_EVERY_SECONDS_MAX,
which resulted in various code paths never being traversed
due to premature timeout exceptions.
Also apply min/max values to kombu_reconnect_delay so it doesn't
exceed ACK_REQUEUE_EVERY_SECONDS_MAX and break things again.
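For illustration, oslo.config lets an option carry min/max bounds directly;
the bound values below are placeholders, not necessarily the ones chosen by
the fix:
```python
from oslo_config import cfg

kombu_reconnect_delay = cfg.FloatOpt(
    'kombu_reconnect_delay',
    default=1.0,
    min=0.0,
    max=4.5,  # placeholder: must stay below ACK_REQUEUE_EVERY_SECONDS_MAX
    help='How long to wait before reconnecting in response to an '
         'AMQP consumer cancel notification.')
```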
Closes-Bug: #1993149
Change-Id: I103d2aa79b4bd2c331810583aeca53e22ee27a49