The rabbit_durable_queues option has been deprecated since the amqp code
was moved from oslo-incubator to this project, so it's high time it was
removed.
Change-Id: If2450696a43c05c32d35bff26d3dc38423f4330e
This adds an optional call_monitor_timeout parameter to the RPC client
which, if specified, enables heartbeating of long-running calls by the
server. This lets the user increase the regular timeout to a much larger
value, allowing calls to take a very long time, with heartbeats
indicating that they are still running on the server side. If the server
stops heartbeating, the call_monitor_timeout takes over and we fail with
the usual MessagingTimeout instead of waiting for the longer overall
timeout to expire.
Change-Id: I60334aaf019f177a984583528b71d00859d31f84
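A minimal sketch of how a caller might use this, assuming the standard
RPCClient constructor; the topic and method names are illustrative:

    import oslo_messaging as messaging
    from oslo_config import cfg

    transport = messaging.get_rpc_transport(cfg.CONF)
    target = messaging.Target(topic='compute')  # illustrative topic

    # Allow the call to run for up to an hour overall, but fail fast
    # with MessagingTimeout if the server stops heartbeating for 60s.
    client = messaging.RPCClient(transport, target,
                                 timeout=3600,
                                 call_monitor_timeout=60)
    result = client.call({}, 'long_running_method')  # illustrative method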
1. As mentioned in [1], we should avoid using six.iteritems to obtain
iterators. We can use dict.items instead, as it returns iterators in PY3
as well, and dict.items/keys is more readable.
2. In PY2, the performance impact of building the intermediate list is
negligible; see the link [2].
[1] https://wiki.openstack.org/wiki/Python3
[2] http://lists.openstack.org/pipermail/openstack-dev/2015-June/066391.html
Change-Id: Ia235afc3532f62f265f91ca46d2306c72fc2a2a2
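A small before/after sketch of the pattern being replaced:

    import six

    d = {'a': 1, 'b': 2}

    # before: explicit iterator helper needed for PY2 compatibility
    for key, value in six.iteritems(d):
        pass

    # after: dict.items() returns an iterable view on PY3; on PY2 the
    # cost of the intermediate list is negligible
    for key, value in d.items():
        pass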
Base classes should define the interface between the driver and the
upper oslo.messaging layer, and shouldn't have fields and methods used
for the driver's internal purposes.
1) base Listener: +prefetch_size attribute; -driver attribute
2) base IncomingMessage: -reply method; -listener attribute
3) base RpcIncomingMessage added - it is IncomingMessage plus a reply
   method (a sketch follows below)
Change-Id: Ic2c0fce830763eb4e00f2cca789e9c1c6b5420ed
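A rough sketch of the resulting base-class split, with simplified
signatures (the real classes carry more arguments):

    import abc

    class IncomingMessage(object):
        """A received message; carries no reply machinery of its own."""
        def __init__(self, ctxt, message):
            self.ctxt = ctxt
            self.message = message

    class RpcIncomingMessage(IncomingMessage, metaclass=abc.ABCMeta):
        """An incoming message the server is expected to reply to."""
        @abc.abstractmethod
        def reply(self, reply=None, failure=None):
            """Send the RPC result (or failure) back to the caller."""

    class Listener(object):
        def __init__(self, prefetch_size=-1):
            # how many messages the driver may buffer from the broker
            self.prefetch_size = prefetch_size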
ConnectionPool and ConnectionContext can be used by other
drivers (like Kafka) and hence should be outside of amqp.py.
* Moving ConnectionPool to pool.py
* Moving ConnectionContext to common.py
* Moving a couple of global variables to common.py
No other logic changes, just refactoring
Change-Id: I85154509a361690426772ef116590d38a965ca8d
Back in Liberty we marked this driver as deprecated. This patch removes
it from the tree. The patch also removes tests, options and other
references in the documentation. Note that one script is being kept
because it's required by the amqp driver.
Depends-On: If4b1773334e424d1f4a4e112bd1f10aca62682a9
Change-Id: I4a9cba314c4a2f24307504fa7b5427424268b114
When connection.reset() fails, the current code simply discards the
broken connection without returning a new one to the pool or adjusting
the pool counter. This results in a connection leak and eventually
blocks the thread.
It is fixed by returning a new connection to the pool.
Change-Id: I2b2c23def718d8f2409f9fc415441ac88d40f5b9
Closes-Bug: #1474698
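A hedged sketch of the pattern; pool.create(), pool.put() and
conn.reset()/close() are stand-ins for the driver's internals:

    def put_back(pool, conn):
        """Return a usable connection, keeping the pool counter
        balanced even when resetting the old connection fails."""
        try:
            conn.reset()  # clear channel state before reuse
        except Exception:
            conn.close()          # drop the broken connection...
            conn = pool.create()  # ...and replace it with a fresh one
        pool.put(conn)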
In case of a broker restart/failover, a reply queue can be unreachable
for a short period; IncomingMessage.send_reply will block for 60 seconds
in this case, or until rabbit recovers.
But when the reply queue is unreachable because the rpc client is really
gone, we can have a ton of replies to send, each waiting 60 seconds.
This leads to a starvation of connections in the pool: the rpc server
takes too much time to send replies, and other rpc clients raise
TimeoutError because they don't receive their replies in time.
This change introduces an object cache that stores already-known gone
clients, so we don't wait 60 seconds and hold a connection from the
pool. Keeping the 200 most recent gone rpc clients for 1 minute is
enough and doesn't hold too much memory.
This also no longer raises a frightening exception when we can't send a
reply to the rpc client, but just logs an info message about the missing
exchange and a warning about the unsent reply.
Closes-bug: #1460652
Change-Id: I928b30c9b5f9ee007532ff703e136640b0e8aaf4
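A minimal sketch of such a bounded TTL cache for gone clients, using
cachetools for brevity; the publish helper and QueueGoneError are
hypothetical stand-ins for the driver's own machinery:

    import logging
    import cachetools

    LOG = logging.getLogger(__name__)

    class QueueGoneError(Exception):
        """Hypothetical stand-in for 'reply exchange is missing'."""

    # remember the ~200 most recent gone clients, each for 60 seconds
    _gone_clients = cachetools.TTLCache(maxsize=200, ttl=60)

    def send_reply(conn, reply_q, reply):
        if reply_q in _gone_clients:
            return  # known gone: don't wait 60s holding a pool connection
        try:
            conn.publish(reply_q, reply)  # hypothetical publish helper
        except QueueGoneError:
            LOG.info("reply exchange %s is missing", reply_q)
            LOG.warning("reply to %s was not sent", reply_q)
            _gone_clients[reply_q] = True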
Added a new configuration option `send_single_reply` which allows
sending a single AMQP reply instead of two. This reduces the number of
messages per RPC call and increases transport throughput.
The new behaviour is not compatible with the old logic, so it isn't
backward compatible and is disabled by default.
DocImpact
A new configuration option added.
Blueprint: remove-double-reply
Change-Id: Idab118b22163e734aca010f325cddfaec26bfa0f
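Enabling it would look something like this in a service configuration,
assuming the option is registered under the rabbit driver's group:

    [oslo_messaging_rabbit]
    send_single_reply = true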
To avoid creating a new ZMQ connection for every message sent
to a remote broker, implement pooling and re-use of ZmqClient
objects and associated ZMQ context.
A pool is created for each remote endpoint (keyed by address);
the size of each pool is configured using rpc_conn_pool_size.
All outbound message client connections are pooled.
Closes-Bug: 1384113
Change-Id: Ia55d5c310a56e51df5e2f5d39e561a4da3fe4d83
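A simplified sketch of per-endpoint pooling; make_client stands in for
the driver's ZmqClient factory and thread-safety is omitted:

    import collections

    class ZmqClientPools(object):
        """Keep a bounded pool of clients per remote address so a new
        ZMQ connection isn't created for every outbound message."""

        def __init__(self, make_client, pool_size):
            self._make_client = make_client
            self._pool_size = pool_size  # mirrors rpc_conn_pool_size
            self._free = collections.defaultdict(collections.deque)

        def get(self, address):
            free = self._free[address]
            # reuse an idle client for this endpoint when one exists
            return free.popleft() if free else self._make_client(address)

        def put(self, address, client):
            free = self._free[address]
            if len(free) < self._pool_size:
                free.append(client)  # keep for reuse
            else:
                client.close()       # pool full: discard the extra client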
Different OpenStack processes log that line when idle, but it doesn't
offer actionable information to developers or users. Ideally process
logs should be silent when idle, even in debug mode.
Here's a sample:
http://paste.openstack.org/show/201371/
Change-Id: Ib4f63d590a6f5ed295fae12dac12897007b12879
Different OpenStack processes log that line when idle, but it doesn't
offer actionable information to developers or users. Ideally process
logs should be silent when idle, even in debug mode.
Closes-Bug: #1434727
Change-Id: I6f9f2977358d86ada7178c09b04ff6b290a6a8ad
Currently, the option amqp_durable_queues is deprecated with both a name
and a group change at once, which causes the option
[DEFAULT]amqp_durable_queues not to work.
This patch uses multiple deprecated options to make it work.
Change-Id: Ied28bcf415362a976928bac75225018030304ac7
Closes-Bug: #1433956
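A hedged sketch of the multi-deprecation pattern with oslo.config; the
default and help text here are illustrative:

    from oslo_config import cfg

    amqp_durable_queues = cfg.BoolOpt(
        'amqp_durable_queues',
        default=False,
        help='Use durable queues in AMQP.',
        # accept both the old option name and the old [DEFAULT] location
        deprecated_opts=[
            cfg.DeprecatedOpt('amqp_durable_queues', group='DEFAULT'),
            cfg.DeprecatedOpt('rabbit_durable_queues', group='DEFAULT'),
        ])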
This change ensures that connections that fail to return to the pool are
cleanly closed and that exceptions raised are not propagated to the
caller.
For rabbit, we also try to reconnect in case of connection failure
before dropping the connection.
Closes-bug: #1433458
Change-Id: Ic714db7b8be9df8b6935a903732c60aaea0bc404
AMQP offers a heartbeat feature to ensure that the application layer
promptly finds out about disrupted connections (and also completely
unresponsive peers). If the client requests heartbeats on connection,
the rabbit server will regularly send messages to each connection with
the expectation of a response.
To achieve this, each driver connection object spawns a thread that
sends/retrieves the heartbeat packets exchanged between the server and
the client.
To protect concurrent access to the kombu connection between the driver
and this thread, we use a lock that always prioritizes the heartbeat
thread. So when the heartbeat thread wakes up it will acquire the lock
quickly, ensuring there is no heartbeat starvation when the driver sends
a lot of messages.
Also, when we are polling the broker, the lock can be held for a long
time by the 'consume' method, so that method does the heartbeat work
itself (a sketch of such a priority lock follows below).
DocImpact: 2 new configuration options for Rabbit driver
Co-Authored-By: Oleksii Zamiatin <ozamiatin@mirantis.com>
Co-Authored-By: Ilya Pekelny <ipekelny@mirantis.com>
Related-Bug: #1371723
Closes-Bug: #856764
Change-Id: I1d3a635f3853bc13ffc14034468f1ac6262c11a3
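A simplified sketch of a lock that lets the heartbeat thread jump the
queue; the driver's real lock also cooperates with the consume loop:

    import threading

    class HeartbeatPriorityLock(object):
        """Ordinary acquirers back off while a heartbeat acquisition
        is pending, so the heartbeat thread never starves."""

        def __init__(self):
            self._cond = threading.Condition()
            self._held = False
            self._hb_waiting = 0

        def acquire(self, heartbeat=False):
            with self._cond:
                if heartbeat:
                    self._hb_waiting += 1
                    while self._held:
                        self._cond.wait()
                    self._hb_waiting -= 1
                else:
                    # ordinary callers also yield to waiting heartbeats
                    while self._held or self._hb_waiting:
                        self._cond.wait()
                self._held = True

        def release(self):
            with self._cond:
                self._held = False
                self._cond.notify_all()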
All driver options are currently stored in the DEFAULT group.
This change makes the configuration clearer by putting driver options
into a group named oslo_messaging_<driver>.
Closes-bug: #1417040
Change-Id: I96a9682afe7eb0caf1fbf47bbb0291833aec245b
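For example, rabbit options move from [DEFAULT] to the driver's own
section (the host value here is illustrative):

    # before
    [DEFAULT]
    rabbit_host = broker.example.org

    # after
    [oslo_messaging_rabbit]
    rabbit_host = broker.example.org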
Move the public API out of oslo.messaging to oslo_messaging. Retain
the ability to import from the old namespace package for backwards
compatibility for this release cycle.
bp/drop-namespace-packages
Co-authored-by: Mehdi Abaakouk <mehdi.abaakouk@enovance.com>
Change-Id: Ia562010c152a214f1c0fed767c82022c7c2c52e7
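In practice this means both import styles work for this cycle, with the
new one preferred:

    # new, preferred
    import oslo_messaging

    # old namespace import, kept for backwards compatibility this cycle
    from oslo import messaging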