With this new parameter it is possible to pass additional options
from the client down to the drivers,
so the driver behavior can be tuned.
For example, it can be used to send the mandatory flag in RabbitMQ.
Note:
- The transport_options parameter is not actually used (yet).
- This is the first part of the transport-options blueprint.
Implements: blueprint transport-options
The blueprint link is
https://blueprints.launchpad.net/oslo.messaging/+spec/transport-options
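A sketch of what client-side usage could eventually look like; since the parameter is not used by any driver yet, the TransportOptions container and its at_least_once flag below are purely illustrative assumptions, not the final API:

```python
class TransportOptions(object):
    """Hypothetical options container a client could hand to the driver.

    The at_least_once flag is an illustrative example of a per-client
    knob that a driver such as RabbitMQ could map to its mandatory flag.
    """

    def __init__(self, at_least_once=False):
        self.at_least_once = at_least_once


options = TransportOptions(at_least_once=True)
# A client would then pass it along, e.g.:
# client = RPCClient(transport, target, transport_options=options)
```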
Change-Id: Iff23a9344c2c87259cf36b0d14c0a9fc075a2a72
This adds an optional call_monitor_timeout parameter to the RPC client
which, if specified, will enable heartbeating of long-running calls by
the server. This enables the user to increase the regular timeout to
a much larger value, allowing calls to take a very long time, but
with heartbeating to indicate that they are still running on the server
side. If the server stops heartbeating, then the call_monitor_timeout
takes over and we fail with the usual MessagingTimeout instead of waiting
for the longer overall timeout to expire.
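The watchdog logic described above can be sketched as a simplified stand-alone model (class and method names are assumptions, not the driver code): each server heartbeat pushes the failure deadline forward by call_monitor_timeout seconds.

```python
import time


class CallMonitor(object):
    """Sketch of the client-side watchdog for long-running calls."""

    def __init__(self, call_monitor_timeout):
        self.call_monitor_timeout = call_monitor_timeout
        self.deadline = time.monotonic() + call_monitor_timeout

    def on_heartbeat(self):
        # The server signalled the call is still running: reset the window.
        self.deadline = time.monotonic() + self.call_monitor_timeout

    def expired(self):
        # True once the server has gone quiet for too long; the client
        # would then raise MessagingTimeout instead of waiting for the
        # much larger overall call timeout.
        return time.monotonic() > self.deadline


monitor = CallMonitor(call_monitor_timeout=30)
monitor.on_heartbeat()  # a heartbeat arrived; the call stays alive
```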
Change-Id: I60334aaf019f177a984583528b71d00859d31f84
This adds a heartbeat() method to RpcIncomingMessage to be used by a
subsequent patch implementation of active-call heartbeating. This is
unimplemented in all drivers for the moment.
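A minimal sketch of the new method's place in the interface, assuming a heavily simplified version of the class (the real class has more members); the default is a safe no-op until drivers override it:

```python
import abc


class RpcIncomingMessage(abc.ABC):
    """Simplified sketch of the RPC message interface."""

    @abc.abstractmethod
    def reply(self, reply=None, failure=None):
        """Send the RPC reply back to the caller."""

    def heartbeat(self):
        """Heartbeat an in-progress call.

        A no-op by default: no driver implements it yet, so servers
        may call it safely on any driver's messages.
        """


class EchoMessage(RpcIncomingMessage):
    """Toy concrete message used to show the default heartbeat()."""

    def reply(self, reply=None, failure=None):
        return reply


msg = EchoMessage()
msg.heartbeat()  # safe no-op until a driver provides an implementation
```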
Change-Id: If8ab0dc16e3bef69d5a826c31c0fe35e403ac6a1
For some reason there are two timeouts. In the batch scenario,
all the time spent waiting on the initial 'get' is never accounted
for, so the effective batch timeout is always longer than declared.
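A deadline-based sketch of the intended accounting: compute one overall deadline up front, so the wait for the first message counts against the batch timeout too (function and parameter names are illustrative):

```python
import time


def poll_batch(get_message, batch_size, batch_timeout):
    """Collect up to batch_size messages within one overall deadline."""
    deadline = time.monotonic() + batch_timeout
    batch = []
    while len(batch) < batch_size:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break  # budget exhausted, including the initial wait
        msg = get_message(timeout=remaining)
        if msg is None:
            break  # underlying poll timed out
        batch.append(msg)
    return batch
```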
Change-Id: I6132c770cccdf0ffad9f178f7463288cf954d672
Add detailed documentation to the driver API to help driver developers
create drivers that behave consistently. Specifically prescribes a
set of operational characteristics that a driver must conform to in
order to provide consistent behavior across different implementations.
Change-Id: Icb251ee724f9a0ac4fede702a367910de4ba95e3
We can reduce the load on RabbitMQ by implementing an expiration
mechanism for idle connections in the pool, with the following
properties:
conn_pool_ttl (default 20 min)
conn_pool_min_size: the pool size limit for expire() (default 2)
The problem is idle connections sitting in the pool indefinitely, which can be
created by a single large burst of RPCServer workload. Each SEND connection costs
a heartbeat thread plus some network activity every n seconds, so expiring idle
connections reduces that overhead.
There are two ways to implement expiration:
[1] Create a separate thread that checks the expiry dates of connections
[2] Call expire() on pool.get() or pool.put()
[1] has some threading overhead, but it is probably insignificant,
because the thread can sleep 99% of the time and wake up every 20 minutes (by default).
The current implementation uses [2].
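Option [2] can be sketched as a simplified stand-alone pool (the real implementation differs; only the conn_pool_ttl and conn_pool_min_size names come from the text above):

```python
import time


class ExpiringPool(object):
    """Sketch: idle connections carry a timestamp, and expire() runs
    from get()/put() instead of a dedicated reaper thread."""

    def __init__(self, create, conn_pool_ttl=1200, conn_pool_min_size=2):
        self._create = create
        self.conn_pool_ttl = conn_pool_ttl
        self.conn_pool_min_size = conn_pool_min_size
        self._items = []  # list of (connection, idle_since), oldest first

    def expire(self):
        now = time.monotonic()
        # Drop the oldest idle connections, but keep a minimum around.
        while (len(self._items) > self.conn_pool_min_size and
               now - self._items[0][1] > self.conn_pool_ttl):
            self._items.pop(0)

    def get(self):
        self.expire()
        if self._items:
            return self._items.pop()[0]
        return self._create()

    def put(self, conn):
        self._items.append((conn, time.monotonic()))
        self.expire()
```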
Change-Id: Ie8781d10549a044656824ceb78b2fe2e4f7f8b43
This patch removes log_failure argument from the function
serialize_remote_exception and from driver implementations
using it (because it is never used and always defaults to True)
and prevents error logging in this function (because these errors
are already logged by servers while processing incoming messages).
Change-Id: Ic01bb11d6c4f018a17f3219cdbd07ef4d30fa434
Closes-Bug: 1580352
1) Add a MessageHandler base interface as a replacement for on_incoming_callback
2) Move the message_handler parameter from Listener's __init__() to start()
3) Remove the wait() method from the listener
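The three changes above can be sketched together as follows (heavily simplified; the real classes carry far more context, and the helper names are assumptions):

```python
import abc


class MessageHandler(abc.ABC):
    """Sketch of the interface replacing the bare on_incoming_callback."""

    @abc.abstractmethod
    def handle(self, incoming):
        """Process one incoming message."""


class Listener(object):
    """Sketch: the handler now arrives in start(), not __init__(),
    and there is no wait() method any more."""

    def __init__(self):
        self._handler = None

    def start(self, message_handler):
        self._handler = message_handler

    def _dispatch(self, incoming):
        self._handler.handle(incoming)


class CollectingHandler(MessageHandler):
    """Toy handler used to exercise the sketch."""

    def __init__(self):
        self.seen = []

    def handle(self, incoming):
        self.seen.append(incoming)


listener = Listener()
handler = CollectingHandler()
listener.start(handler)
listener._dispatch("message-1")
```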
Change-Id: Id414446817e3d2ff67b815074d042a9ce637ec24
The current Listener interface has a poll() method which returns messages.
To use it we need a poller thread, which lives in MessageHandlingServer.
But investigation of the existing driver code shows that some implementations
have their own internal thread for processing the connection event loop. This
event loop receives messages and stores them in a queue object, and our poller
thread then reads from this queue.
This can be improved: we can remove the poller thread and the queue object
and just call the server's on_message callback from the connection event-loop thread.
This patch makes that possible for one of the drivers and leaves the other drivers as-is.
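The improved shape can be sketched as follows (illustrative names only): the driver's event-loop thread dispatches straight to the server callback, so the intermediate queue and the server-side poller thread disappear.

```python
class DirectListener(object):
    """Sketch: the connection event loop calls on_message directly."""

    def __init__(self, on_message):
        self._on_message = on_message

    def event_loop_received(self, message):
        # Invoked from the driver's own event-loop thread; there is
        # no queue and no poller thread in between any more.
        self._on_message(message)


received = []
listener = DirectListener(received.append)
listener.event_loop_received({"method": "ping"})
```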
Change-Id: I3e3d4369d8fdadcecf079d10af58b1e4f5616047
Add a sleep() to allow other threads (like the one collecting
the stats) to run.
Closes-Bug: #1555632
Change-Id: I6fcb63c10acd76f2815e23fbd303f08974feb993
Base classes should define the interface between the driver and the
upper oslo.messaging layer, and shouldn't have fields and
methods used for the driver's internal purposes.
1) base Listener: + prefetch_size attribute; - driver attribute
2) base IncomingMessage: - reply method; - listener attribute
3) base RpcIncomingMessage added - it is IncomingMessage + a reply method
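The resulting split can be sketched as below (simplified; the acknowledge() method stands in for the rest of the base interface, and the concrete class is a toy for illustration):

```python
import abc


class IncomingMessage(abc.ABC):
    """Sketch: the base message keeps only what the upper layer needs,
    with no listener attribute and no reply() method."""

    def __init__(self, ctxt, message):
        self.ctxt = ctxt
        self.message = message

    @abc.abstractmethod
    def acknowledge(self):
        """Ack the message to the messaging backend."""


class RpcIncomingMessage(IncomingMessage):
    """Sketch: IncomingMessage plus reply(), which only RPC needs."""

    @abc.abstractmethod
    def reply(self, reply=None, failure=None):
        """Send the RPC reply back to the caller."""


class FakeRpcMessage(RpcIncomingMessage):
    """Minimal concrete message for illustration."""

    def acknowledge(self):
        return True

    def reply(self, reply=None, failure=None):
        return reply


msg = FakeRpcMessage({"user": "demo"}, {"method": "ping"})
```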
Change-Id: Ic2c0fce830763eb4e00f2cca789e9c1c6b5420ed
Gnocchi performs better if measurements are written in batches, but
when Ceilometer is used with Gnocchi, this is not possible.
This change introduces a new notification listener that allows it.
On the driver side, a default batch implementation is provided;
it just calls the legacy poll method multiple times.
A driver can override it to provide a better implementation.
For example, kafka handles batches natively and takes advantage of this.
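The default batch implementation described above amounts to a loop over poll(); a stand-alone sketch (the real method signature may differ):

```python
class Listener(object):
    """Sketch: poll() is the legacy single-message call; poll_batch()
    just loops over it. Drivers with native batching (e.g. kafka)
    would override poll_batch()."""

    def __init__(self, messages):
        self._messages = list(messages)

    def poll(self, timeout=None):
        # Stand-in for the legacy single-message poll.
        return self._messages.pop(0) if self._messages else None

    def poll_batch(self, batch_size, timeout=None):
        batch = []
        for _ in range(batch_size):
            msg = self.poll(timeout=timeout)
            if msg is None:
                break  # nothing more to drain right now
            batch.append(msg)
        return batch
```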
Change-Id: I16184da24b8661aff7f4fba6196ecf33165f1a77
Notifier implementation for zmq driver (ROUTER/DEALER variant).
Publishers/consumers refactoring in order to make them pluggable.
Change-Id: I2dd42cc805aa72b929a4dfa17498cd8b9c0ed7af
- Fixed universal proxy to not get stuck with multiple backends
- Fixed threading pollers/executors (proxy side)
- Driver option to switch green/no-green impl.
- Switched to no-green in real-world proxy (green left for unit tests)
- Minor name fixes in serializer
Change-Id: Id6508101521d8914228c639ed58ecd29db0ef456
There was still a reference to the oslo.config namespace package
in _drivers/base.py. This converts it to the new package name.
Change-Id: I9c3878094bcf8015c30d87f693f51e0d48b31a33
When rpc_conn_pool_size was moved from amqp.py to base.py in:
87137e7af05f12a99bd04566036fbf71824f45cf
we lost the deprecated_group; this change reintroduces it.
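With oslo.config, the fix amounts to re-attaching deprecated_group to the option declaration so values set under the old group are still honoured; a sketch of the declaration (default and help text are abbreviated assumptions):

```python
from oslo_config import cfg

base_opts = [
    cfg.IntOpt('rpc_conn_pool_size',
               default=30,
               deprecated_group='DEFAULT',  # honour the pre-move location
               help='Size of RPC connection pool.'),
]
```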
Change-Id: I8cdea7f042afebcc162bafef881ebe61a1cac989
To avoid creating a new ZMQ connection for every message sent
to a remote broker, implement pooling and re-use of ZmqClient
objects and associated ZMQ context.
A pool is created for each remote endpoint (keyed by address);
the size of each pool is configured using rpc_conn_pool_size.
All outbound message client connections are pooled.
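The pooling scheme can be sketched as a simplified stand-in (class and method names are assumptions; only rpc_conn_pool_size comes from the text above):

```python
class ClientPoolManager(object):
    """Sketch: one pool of reusable clients per remote endpoint
    address, each capped at rpc_conn_pool_size."""

    def __init__(self, make_client, rpc_conn_pool_size=30):
        self._make_client = make_client
        self._max = rpc_conn_pool_size
        self._pools = {}  # address -> list of idle clients

    def acquire(self, address):
        # Re-use an idle client for this endpoint, or create one.
        pool = self._pools.setdefault(address, [])
        return pool.pop() if pool else self._make_client(address)

    def release(self, address, client):
        # Return the client for re-use, unless the pool is full.
        pool = self._pools.setdefault(address, [])
        if len(pool) < self._max:
            pool.append(client)
```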
Closes-Bug: 1384113
Change-Id: Ia55d5c310a56e51df5e2f5d39e561a4da3fe4d83
Move the public API out of oslo.messaging to oslo_messaging. Retain
the ability to import from the old namespace package for backwards
compatibility for this release cycle.
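The aliasing idea behind this kind of backwards compatibility can be illustrated in a self-contained way (this is only a model of the technique; the real change ships an actual oslo.messaging wrapper package, and the Target stand-in below is an assumption):

```python
import sys
import types

# Build a fake "new" package and alias the old dotted namespace to it.
new_pkg = types.ModuleType("oslo_messaging")
new_pkg.Target = type("Target", (), {})  # stand-in for the public API

legacy_ns = types.ModuleType("oslo")
legacy_pkg = types.ModuleType("oslo.messaging")
legacy_pkg.Target = new_pkg.Target
legacy_ns.messaging = legacy_pkg

sys.modules["oslo_messaging"] = new_pkg
sys.modules["oslo"] = legacy_ns
sys.modules["oslo.messaging"] = legacy_pkg

# Both spellings now resolve to the same objects:
import oslo_messaging
from oslo import messaging
```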
bp/drop-namespace-packages
Co-authored-by: Mehdi Abaakouk <mehdi.abaakouk@enovance.com>
Change-Id: Ia562010c152a214f1c0fed767c82022c7c2c52e7