Makes use of the existing 'rpc_zmq_bind_address' option in
order to make the binding address configurable.
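For illustration, a minimal sketch of exercising such an option through
oslo.config; the bind address and the explicit option registration here are
assumptions for the example, not the driver's actual code:

    from oslo_config import cfg

    opts = [cfg.StrOpt('rpc_zmq_bind_address', default='*',
                       help='ZeroMQ bind address.')]
    conf = cfg.ConfigOpts()
    conf.register_opts(opts)
    # Bind only on a management interface instead of every interface.
    conf.set_override('rpc_zmq_bind_address', '192.0.2.10')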
Change-Id: Ia46fa03e54b0e92d3504d9a0ebd65171a283e073
Closes-Bug: #1515267
The new driver introduced some new options and changed its architecture.
This is the first update of the deployment guide since the driver was
reimplemented. Subsequent driver updates should be reflected in the
guide as well.
Change-Id: Id8629907560e335dfcff688082fe943b3657568c
Closes-Bug: #1497278
Add a new configuration option, notification_transport_url, for
setting up an alternate transport URL to be used for
notifications. This allows operators to separate the transport
mechanisms used for RPC and notifications.
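For example, the split might be used roughly as in the following sketch; the
broker URLs are placeholders and the helper names assume the public
oslo.messaging API rather than this change's exact implementation:

    from oslo_config import cfg
    import oslo_messaging

    conf = cfg.CONF
    # RPC traffic on one broker ...
    rpc_transport = oslo_messaging.get_transport(
        conf, url='rabbit://rpc-broker:5672/')
    # ... and notifications on a separate one.
    notification_transport = oslo_messaging.get_notification_transport(
        conf, url='rabbit://notify-broker:5672/')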
DocImpact
Closes-Bug: #1504622
Change-Id: Ief6f95ea906bfd95b3218a930c9db5d8a764beb9
This reverts commit d700c382791b6352bb80a0dc455589085881669f.
The reverted commit causes a timeout/lock wait condition when using the
in-memory RPC bus. This showed up in the Nova unit/functional tests, which
use the in-memory bus extensively.
Change-Id: I9610a5533383955f926dbbb78ab679f45cd7bcdb
Closes-Bug: #1514876
Up until now it has only been available in the OpenStack spec, but it is
a living document and I believe we can maintain it in oslo.messaging's
tree.
Change-Id: I7bb9e5f02004f857d8f75909fcc0d05f2882a77d
This will allow us to find potential security issues, such as those fixed by
52e624891fc500c8ab9f3f10ef45258ce740916a and
c4a7ac0b653543e8a3ba10060cabdb114fb6672b.
Change-Id: I21aa0ca79232784069e55da46920eb43250d8939
This statement is useless since both 'username' and 'password' are set to None
in the for loop, and they are not used outside of the loop.
Removing this line also helps us get rid of a false positive thrown by
bandit.
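The shape of the problem, with purely hypothetical names (not the actual
driver code):

    hosts = [('user1', 'secret1'), ('user2', 'secret2')]
    username, password = None, None   # the useless statement (also the line
                                      # bandit flagged)
    for username, password in hosts:
        pass  # credentials are only ever used inside the loop
    # Nothing reads username or password after the loop, so the pre-loop
    # assignment can simply be dropped.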
Change-Id: I2aa1a16f30928b77aa40c5a900e35b7bf752658a
This change formalises locking in MessageHandlingServer. It allows the
user to make calls in any order and it will ensure, with locking, that
these will be reordered appropriately. It also adds locking for
internal state when using the blocking executor, which closes a number
of races.
It fixes a regression introduced in change
I3cfbe1bf02d451e379b1dcc23dacb0139c03be76. If multiple threads called
wait() simultaneously, only 1 of them would wait and the others would
return immediately, despite message handling not having completed.
With this change only 1 will call the underlying wait, but all will
wait on its completion.
We add a common logging mechanism when waiting too long. Specifically,
we now log a single message when waiting on any lock for longer than
30 seconds.
We remove DummyCondition as it no longer has any users.
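A minimal sketch of the 'log after waiting too long' idea; the helper name
and exact behaviour are assumptions, not the real oslo.messaging code:

    import logging
    import threading

    LOG = logging.getLogger(__name__)
    LOG_AFTER = 30  # seconds, as described above

    def acquire_with_warning(lock, name='lock'):
        # Log a single message if acquisition takes too long, then keep
        # waiting for the lock.
        if not lock.acquire(timeout=LOG_AFTER):
            LOG.warning('Waited more than %d seconds to acquire %s',
                        LOG_AFTER, name)
            lock.acquire()

    acquire_with_warning(threading.Lock(), 'demo lock')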
Change-Id: I9d516b208446963dcd80b75e2d5a2cecb1187efa
Introduce a mechanism for generating real-life messages in the tool
using the information gathered during Rally testing. This change
allows generating messages of a specific length following the
distribution observed in a real environment.
The messages_length.txt file contains the lengths of the string JSON
objects that were sent through the MQ layer during the deployment and
deletion of 50 VMs.
simulator.py was modified to use this data as a baseline to generate
random string messages of the required length with the needed
probability.
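A rough sketch of the approach; the file format and function names below are
assumptions rather than simulator.py's actual code:

    import random
    import string

    def load_lengths(path='messages_length.txt'):
        # One recorded message length per whitespace-separated token.
        with open(path) as f:
            return [int(token) for token in f.read().split()]

    def random_message(lengths):
        # Pick a length from the observed distribution and build a random
        # string of that size.
        size = random.choice(lengths)
        return ''.join(random.choice(string.ascii_letters)
                       for _ in range(size))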
Change-Id: Iae21f90b5ca202bf0e83f1149baef8b42c64eb55
Some Tempest tests were failing because of NoSuchMethod,
UnsupportedVersion and other missed-endpoint errors.
This fix provides a new listener per target and
more straightforward matchmaker target resolution logic.
Change-Id: I4bfb42048630a0eab075e462ad1e22ebe9a45820
Closes-Bug: #1501682
We currently use yaml.load to read a user-written config file. This can
lead to malicious code execution, so we should use yaml.safe_load
instead.
Found using bandit.
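The change in a nutshell (the file name is illustrative):

    import yaml

    with open('ring.yaml') as f:
        # yaml.load() with the default Loader can construct arbitrary Python
        # objects from crafted input; safe_load() only builds plain types.
        data = yaml.safe_load(f)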
Change-Id: I27792f0435bc3cb9b9d31846d07a8d47a1e7679d
ListenerSetupMixin.ThreadTracker was reading self._received_msgs
unlocked and sleep/looping until the desired value was reached.
Replaced this pattern with a threading.Condition.
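Roughly the pattern that replaces the sleep/loop (illustrative only, not the
real ThreadTracker):

    import threading

    class ReceivedTracker(object):
        def __init__(self):
            self._cond = threading.Condition()
            self._received = 0

        def record(self):
            with self._cond:
                self._received += 1
                self._cond.notify_all()

        def wait_for(self, expected, timeout=None):
            with self._cond:
                return self._cond.wait_for(
                    lambda: self._received >= expected, timeout)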
Change-Id: Id4731caee2104bdb231e78e7b460905a0aaf84bf
This fixes a race due to the quirkiness of the blocking executor. The
blocking executor does not create a separate thread, but is instead
explicitly executed in the calling thread. Other threads will,
however, continue to interact with it.
In the non-blocking case, the executor will have done certain
initialisation in start() before starting a worker thread and
returning control to the caller. That is, the caller can be sure that
this initialisation has occurred when control is returned. However, in
the blocking case, control is never returned. We currently work around
this by setting self._running to True before executing executor.start,
and by not doing any locking whatsoever in MessageHandlingServer.
However, this currently means there is a race whereby executor.stop()
can run before executor.start(). This is fragile and extremely
difficult to reason about robustly, if not currently broken.
The solution is to split the initialisation from the execution in the
blocking case. executor.start() is no longer a blocking operation for
the blocking executor. As for the non-blocking case, executor.start()
returns as soon as initialisation is complete, indicating that it is
safe to subsequently call stop(). Actual execution is done explicitly
via the new execute() method, which blocks.
In doing this, we also make FakeBlockingThread a more complete
implementation of threading.Thread. This fixes a related issue in
that, previously, calling server.wait() on a blocking executor from
another thread would not wait for the completion of the executor. This
has a knock-on effect in test_server's ServerSetupMixin. This mixin
created an endpoint with a stop method which called server.stop().
However, as this is executed by the executor, and also joins the
executor thread, which is now blocking, this results in a deadlock. I
am satisfied that, in general, this is not a sane thing to do.
However, it is useful for these tests. We fix the tests by making the
stop method non-blocking, and do the actual stop and wait calls from
the main thread.
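In outline, the split looks like the following sketch (illustrative only, not
the actual executor code):

    import threading

    class BlockingExecutorSketch(object):
        def __init__(self):
            self._stopped = threading.Event()

        def start(self):
            # Initialisation only; once this returns it is safe for another
            # thread to call stop().
            pass

        def execute(self):
            # The blocking message-handling loop, run in the caller's thread.
            while not self._stopped.is_set():
                self._stopped.wait(0.1)

        def stop(self):
            self._stopped.set()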
Change-Id: I0d332f74c06c22b44179319432153e15b69f2f45
test_server_wait_method was calling server.wait without having
previously called server.start and server.stop. This happened to work
because it also injected server._executor_obj. This is problematic,
though, as it assumes internal details of the server and does not
represent the calling contract of server.wait, which is that it must
follow server.stop (which must itself also follow server.start).
This change makes the necessary changes to call server.wait in the
correct sequence.
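The required sequence, sketched with a placeholder server object:

    def run_and_shutdown(server):
        server.start()  # begin handling messages
        server.stop()   # request shutdown
        server.wait()   # valid only after stop(), which must follow start()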
Change-Id: I205683ac6e0f2d64606bb06d08d3d1419f7645f4
MessageHandlingServer has both MessageHandlingServer.executor, which
is the name of an executor type, and MessageHandlingServer._executor,
which is an instance of that type. Ideally we would rename
MessageHandlingServer.executor, but as this is referenced from outside
the class we change _executor instead to _executor_obj.
Change-Id: Id69ba7a0729cc66d266327dac2fd4eab50f2814c
Instead of having to spin in the wait method, just use
a condition and block until stopping has actually happened;
when stop happens, it will use the notify_all method to release
any blocked waiters.
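The idea in a self-contained sketch (not the actual server code):

    import threading

    class Stoppable(object):
        def __init__(self):
            self._cond = threading.Condition()
            self._stopped = False

        def stop(self):
            with self._cond:
                self._stopped = True
                self._cond.notify_all()  # release anyone blocked in wait()

        def wait(self):
            with self._cond:
                while not self._stopped:
                    self._cond.wait()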
Closes-Bug: #1505730
Change-Id: I3cfbe1bf02d451e379b1dcc23dacb0139c03be76