impl_rabbit sets the timeout in the message's headers with {'ttl': (timeout * 1000)},
which does not actually work: messages still stay in the queue after the TTL expires.
As the RabbitMQ documentation says (http://www.rabbitmq.com/ttl.html#per-message-ttl),
we should pass "expiration" in the message's properties rather than its headers to
make it work.
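For illustration, the difference at the AMQP level looks roughly like this (a sketch using the pika client instead of the driver's kombu code; queue name and values are made up):

    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    channel = connection.channel()
    channel.queue_declare(queue='demo')

    timeout = 30  # seconds
    channel.basic_publish(
        exchange='',
        routing_key='demo',
        body=b'hello',
        # "expiration" is a message *property*, in milliseconds as a string;
        # a 'ttl' entry in headers is silently ignored by RabbitMQ.
        properties=pika.BasicProperties(expiration=str(timeout * 1000)),
    )
    connection.close()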
Change-Id: I5d6ae72e69f856c56fb83fb939ed35246716e04d
Closes-bug: #1444854
Each time we send a message with the rabbit driver we create a useless
object that is nothing more than the kombu exchange plus a wrapper method
around the kombu producer.
So this change creates just the exchange, instead of our useless custom
Publisher, and moves the wrapped methods into the Connection object.
Change-Id: Id221f4363d897cd904f7aeccbc90cbd288db2db1
There are some cases where the underlying connection can be stuck
until the system socket timeout is reached, but in oslo.messaging
we very often know that there is no need to wait forever because
the upper layer (usually the application) expects to return after
a certain period.
So this change sets the timeout on the underlying socket whenever we can
determine that there is no need to wait longer.
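Roughly the idea, as a sketch on a raw socket (the helper name and deadline handling are illustrative, not the driver code):

    import socket
    import time

    def recv_with_deadline(sock, deadline, bufsize=4096):
        # Cap the blocking operation by the caller's remaining deadline
        # instead of relying on the system-wide socket timeout.
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            raise socket.timeout('caller deadline already expired')
        sock.settimeout(remaining)
        try:
            return sock.recv(bufsize)
        finally:
            sock.settimeout(None)  # back to blocking for later callers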
Closes-bug: #1436788
Change-Id: Ie71ab8147c56eaf672585da107bec8b22af9da6c
Add a check that MessagingTimeout is raised on long-running queries
when the client sends other queries at the same time.
Added a long_running_task() to TestServerEndpoint and allowed passing a
message executor into the RpcServerFixture.
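The added assertion is roughly of this shape (the fake transport, empty context and parameters below are stand-ins, not the actual test code):

    import oslo_messaging
    from oslo_config import cfg

    transport = oslo_messaging.get_transport(cfg.CONF, url='fake://')
    target = oslo_messaging.Target(topic='testtopic', server='testserver')
    client = oslo_messaging.RPCClient(transport, target)

    try:
        # In the real test, the server endpoint's long_running_task() sleeps
        # longer than this client-side timeout while other calls are in flight.
        client.prepare(timeout=1).call({}, 'long_running_task', seconds=5)
    except oslo_messaging.MessagingTimeout:
        print('MessagingTimeout raised as expected')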
Related bug: #1338732
Co-Authored-By: Roman Podoliaka <rpodolyaka@mirantis.com>
Change-Id: Icafb6838e2d9fb76b6d1c202465c09c174a3bed9
The list_opts entrypoint test failed unnecessarily when the
dependencies in the packages were inconsistent. This test doesn't
need to verify that the dependencies are consistent, only that the
entrypoint is available and provides the expected function.
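Something along these lines is enough (the entrypoint namespace and filtering here are assumptions, not the exact test code):

    import pkg_resources

    for ep in pkg_resources.iter_entry_points('oslo.config.opts'):
        if ep.module_name.startswith('oslo_messaging'):
            list_opts = ep.load()       # only needs to be importable...
            assert callable(list_opts)  # ...and to provide the expected function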
Change-Id: I0bb0f2b591c402202104af8daf07d56b514cbb2f
All plugins are supposed to be importable without their dependencies so
we can discover options and documentation. Restructure the parts of the
AMQP1 driver that depend on having proton and pyngus installed so the
driver can load without them.
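The pattern looks roughly like this (a sketch, not the actual module layout):

    try:
        import proton   # optional dependency
        import pyngus   # optional dependency
    except ImportError:
        proton = pyngus = None

    class ProtonDriver(object):
        """Importable for option/doc discovery even without proton/pyngus."""

        def __init__(self, *args, **kwargs):
            if proton is None or pyngus is None:
                raise ImportError("the amqp1 driver requires 'proton' and 'pyngus'")
            # real initialisation would go here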
Change-Id: Id0c8c2a6ae44d13f061e651c33efc9e38750a049
oslo.messaging has an outdated release note in the code tree,
but release notes are now published on the mailing list.
This change removes it.
Change-Id: I0a3401b7c9bc8230169e75727e45a99e6c3c780f
The NotifyPublisher was redeclaring the same exchange and queue
again and again, each time a notification was sent.
This change fixes that by caching the already-declared exchanges
and queues for each channel.
Also, to make the tests pass, 'Connection.ensure' has been updated
to have the same behavior for the amqp and memory drivers regarding
kombu recoverable_errors, and the hostname and port of the memory
driver are set so we do not fail when printing a log message.
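The caching idea, as a rough sketch with plain kombu (the function and the module-level cache are illustrative, not the driver's code):

    import kombu

    _declared = set()

    def publish_notification(conn, exchange_name, queue_name, payload):
        channel = conn.default_channel
        exchange = kombu.Exchange(exchange_name, type='topic', durable=False)
        queue = kombu.Queue(queue_name, exchange, routing_key=queue_name)
        key = (id(channel), exchange_name, queue_name)
        if key not in _declared:
            queue(channel).declare()  # declares exchange, queue and binding once
            _declared.add(key)
        kombu.Producer(channel, exchange=exchange).publish(
            payload, routing_key=queue_name)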
Closes bug: #1437902
Change-Id: I20d133ac67b8a8a4c51d51b6a1b2369aa44ffe2f
In case the acknowledgement or requeue of a message fails,
the kombu transport can be disconnected.
In this case, we must redeclare our consumers.
This change fixes that.
There are no tests because the kombu memory transport we use in our tests
cannot be put into a disconnected state.
Closes-bug: #1448650
Change-Id: I5991a4cf827411bc27c857561d97461212a17f40
We need at least these versions of amqp and kombu to have
working heartbeat support.
Related-bug: #1436788
Closes-bug: #1436769
Closes-bug: #1408830
Change-Id: I61440c5ccf2b540fe9a1e868bdcae9f5d2cf8422
This removes a TODO that could not be addressed before the consumer code
refactoring.
Now we can just catch the right exception instead of catching
everything and praying.
Change-Id: Id6203d79d4b2f027e5c6cd952c99fcd0967ecb3c
When a consumer is declared after we have started consuming
from amqp, its queue is never consumed.
This fixes that.
Closes bug: #1450342
Change-Id: I9f2e7d83283504dfe762ac88384efde0f7b52d47
The consumer code is over-engineered: it allows overriding
everything, but the override is always done with functools.partial.
None of the child classes have the same signature; sometimes
a constructor uses the same parameter name as the parent class but for
a different purpose, which makes the code hard to read.
It was never clear which options are passed to the queue and the
exchange, and in the end to kombu.
This change removes all of that and only uses the kombu
terminology for consumer parameters.
Also, we no longer hardcode the tag and the channel in the consumer
class, so they can be changed without recreating a consumer object
in the future.
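For reference, the plain kombu vocabulary the parameters now follow (a standalone sketch using the in-memory transport; the names are made up):

    import kombu

    connection = kombu.Connection('memory://')
    channel = connection.channel()
    exchange = kombu.Exchange('demo_exchange', type='topic')
    queue = kombu.Queue('demo_queue', exchange, routing_key='demo')

    def on_message(body, message):
        print('received: %r' % (body,))
        message.ack()

    # The channel (and, implicitly, the consumer tag) is supplied when consuming,
    # not baked into the object, so it can change without rebuilding the consumer.
    consumer = kombu.Consumer(channel, queues=[queue], callbacks=[on_message])
    consumer.consume()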
Change-Id: Ie341f0c973adbda9a342cb836867345aa42652d1
The publisher code is over-engineered: it allows overriding
everything, but this is never used.
None of the child classes have the same signature; sometimes
a constructor uses the same parameter name as the parent class but for
a different purpose, which makes the code hard to read.
It was never clear which options are passed to the queue and the
exchange, and in the end to kombu.
This change removes all of that and only uses the kombu
terminology for publisher parameters.
Change-Id: I3cebf3ed1647a3121dcf33e2160cf315486f5204
This serializer is available (with some differences) in ceilometer,
cinder, designate, heat, ironic, magnum, manila, neutron, nova and trove,
so we can move it to the common code and re-use it (or inherit from it) in
OpenStack projects.
Change-Id: I0d68b1d98c2214a5d45b65146ac2d19e5f6f5953
When the default port is used, kombu.Connection.port is None,
so we replace the usage of kombu.Connection.port with
kombu.Connection.info() to get the default value.
Also, some transport drivers have None as the default hostname or port,
so replace the usage of '%d' with '%s' to ensure the logging never fails,
even if the output of the log is less sexy.
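For example (behaviour as described above; the URL is just an example):

    import kombu

    conn = kombu.Connection('amqp://localhost//')
    print(conn.port)            # None when the port was not given explicitly
    print(conn.info()['port'])  # the transport default, e.g. 5672 for amqp
    # '%s' stays safe even when hostname or port is None:
    print('connected to %s:%s' % (conn.hostname, conn.info()['port']))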
Change-Id: I89ca1982246146717015253bd4cc26f992381584
Closes-bug: #1452189
The consumer loop is over-engineered: it returns values that are never used,
iterconsume creates an iterator directly consumed by 'consume' without
any special handling, and in some cases the kombu error callbacks are called
when the iterator is stopped and log useless errors.
And in reality the consumer is always called with limit=1.
This change simplifies that by removing the loop and all of the
return-value handling.
Closes bug: #1450336
Change-Id: Ia2cb52c8577b29e74d4d2b0ed0b535102f2d55c7
To avoid creating a new ZMQ connection for every message sent
to a remote broker, implement pooling and re-use of ZmqClient
objects and associated ZMQ context.
A pool is created for each remote endpoint (keyed by address);
the size of each pool is configured using rpc_conn_pool_size.
All outbound message client connections are pooled.
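A rough sketch of the pooling scheme (the names, the REQ socket type and the queue-based pool are illustrative, not the driver's actual classes):

    import collections
    import queue

    import zmq

    _context = zmq.Context.instance()   # shared ZMQ context
    _pools = collections.defaultdict(
        lambda: queue.Queue(maxsize=30))  # per-endpoint pool, ~rpc_conn_pool_size

    def get_socket(address):
        pool = _pools[address]           # keyed by remote endpoint address
        try:
            return pool.get_nowait()     # re-use an existing connection
        except queue.Empty:
            sock = _context.socket(zmq.REQ)
            sock.connect(address)
            return sock

    def put_socket(address, sock):
        try:
            _pools[address].put_nowait(sock)  # keep it for later re-use
        except queue.Full:
            sock.close()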
Closes-Bug: 1384113
Change-Id: Ia55d5c310a56e51df5e2f5d39e561a4da3fe4d83
Bump eventlet to 0.17.3, the first release fully supporting Python 3
with monkey-patching.
Add aioeventlet and trollius dependencies for the aioeventlet
executor on Python 3.
This change enables tests of eventlet and aioeventlet executors on
Python 3.
Add futures to the Python 3 dependencies even though it's not needed there;
it's required to work around a bug in tox.
Depends-on: I73e3056b5e8b9ce710c9c2d59fc5be8e03e28d2a
Change-Id: I0efae1c91c5d830156b867d7d21b5c0065094665
JsonPayloadSerializer exists in several OpenStack projects such as
cinder, ironic, magnum, nova and trove, so it makes sense to keep it in
oslo.messaging to avoid code duplication.
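The in-tree copies typically boil down to something like this (a sketch; the exact shared implementation may differ slightly):

    from oslo_serialization import jsonutils
    import oslo_messaging

    class JsonPayloadSerializer(oslo_messaging.NoOpSerializer):
        @staticmethod
        def serialize_entity(context, entity):
            return jsonutils.to_primitive(entity, convert_instances=True)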
Change-Id: I77a6e5e3e717b0afcf17b6200d5b8ff5db6e3262
redis.smembers(str) returns a list of byte strings.
I missed test failures when I submitted my patch to enable redis on
Python 3. I didn't notice that redis tests are skipped when no local
redis server is running.
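A small illustration of the failure mode (requires a local redis server, like the skipped tests):

    import redis

    r = redis.StrictRedis(host='localhost', port=6379)
    r.sadd('hosts', 'server-1')
    members = r.smembers('hosts')  # {b'server-1'} on Python 3
    assert 'server-1' in {m.decode('utf-8') for m in members}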
Change-Id: Ib9ec4e05eb9abd51613f32c93118a1c76649798a
Due to some issues discovered since heartbeat was enabled by default,
especially #1436788, which also needs a fix in the underlying library,
and according to the discussion here:
https://bugs.launchpad.net/oslo.messaging/+bug/1436769/comments/10
we decided to mark the implementation as experimental and disable it by default.
Related-bug: #1436788
Related-bug: #1436769
Change-Id: Ib7c55977f976bdbbc8df4ad5915e0433cbf84a17