It seems there's no gain in trying to be smarter and different from the
base Python Exception, so let's remove our custom code to be more
compatible and friendly with all Python versions.
Change-Id: I259783ef1f77c6661ea7dc2325605c8d6290b898
This removes a few imports and global variables that are not used
anywhere in the code. That cleans things up a little.
Change-Id: I7b30bb11e8ad3c2df01ca2107eff2444feed3fe2
The notifier itself doesn't use the configuration, so let's not store
it; that lightens the dependency on this configuration object a bit.
Blueprint: messaging-decouple-cfg
Change-Id: Ic4b5ddd93ea0382bd8292f9e31b7dacba9b489d3
The standard Python logging system uses 'warning' and not 'warn'. To
ease compatibility with it, let's add 'warning' too.
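As a rough sketch, the alias could simply point at the existing method
(the method body here is illustrative; only the alias is the point):

    class Notifier(object):
        def _notify(self, ctxt, event_type, payload, priority):
            # ... hand the notification off to the loaded drivers ...
            pass

        def warn(self, ctxt, event_type, payload):
            self._notify(ctxt, event_type, payload, 'WARN')

        # Alias matching the standard library logging name.
        warning = warn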
Change-Id: I7778d7960ca7a72be007cb083e5434ede6e3fe6e
The uuidutils module will be deprecated in Icehouse, so we need to
replace it. This patch uses str(uuid.uuid4()) instead of the
generate_uuid() method.
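For illustration, the replacement is roughly:

    import uuid

    # Before: msg_id = uuidutils.generate_uuid()
    # After, using only the standard library:
    msg_id = str(uuid.uuid4())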
Closes-Bug: #1253497
Change-Id: I35815544429c489096b4db3fa79a649f4cd9459f
There has been a bug open for a while pointing out that the way we
create direct exchanges with qpid results in leaking exchanges since
qpid doesn't support auto-deleting exchanges. This was somewhat
mitigated by change to use a single reply queue. This meant we created
far fewer direct exchanges, but the problem persists anyway.
A Qpid expert, William Henry, originally proposed a change to address
this issue. Unfortunately, it wasn't backwards compatible with existing
installations. This patch takes the same approach, but makes it
optional and off by default. This will allow a migration period.
As a really nice side effect, the Qpid experts have told us that this
change will also allow us to use Qpid broker federation to provide HA.
DocImpact
Closes-bug: #1178375
Co-authored-by: William Henry <whenry@redhat.com>
Change-Id: I09b8317c0d8a298237beeb3105f2b90cb13933d8
Oslo.messaging.Transport is used as an abstraction to redirect function
calls to the configured driver. _send_notification called
self._driver.send instead of self._driver.send_notification
Since the test
oslo.messaging.tests.TestTransportMethodArgs.test_send_notification
also assumed that send was the correct method, it is updated
accordingly.
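A hedged sketch of the corrected call; the argument list is assumed
from the driver interface rather than copied from the source:

    class Transport(object):
        def __init__(self, driver):
            self._driver = driver

        def _send_notification(self, target, ctxt, message, version):
            # Was self._driver.send(...); notifications must go through
            # the notification-specific driver entry point.
            self._driver.send_notification(target, ctxt, message, version)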
Change-Id: I9406d74f3dc13c44d1aaad5379aafbf1a8580137
On the client side, in the rabbit and qpid drivers, we use a connection
pool to avoid opening a connection for each message we send. However,
there is only currently one connection pool per process:
    def get_connection_pool(conf, connection_cls):
        with _pool_create_sem:
            # Make sure only one thread tries to create the connection pool.
            if not connection_cls.pool:
                connection_cls.pool = ConnectionPool(conf, connection_cls)
        return connection_cls.pool
This is a nasty artifact of the original RPC having no concept of a
transport context - everything was a global. We'll fix this soon enough.
In the meantime, we need to make sure we only use this connection pool
where we're using the default transport configuration from the config
file - i.e. not where we supply a transport URL.
The use case here is cells - we send messages to a remote cell by
connecting to it using a transport URL. In our devstack testing, the
two cells are on the same Rabbit broker but under different virtual
hosts. Because we were always using the connection pool on the client
side, we were seeing both cells always send messages to the '/' virtual
host.
Note - avoiding the connection pool in the case of cells is the same
behaviour as the current RPC code:
    def cast_to_server(conf, context, server_params, topic, msg, connection_pool):
        ...
        with ConnectionContext(conf, connection_pool, pooled=False,
                               server_params=server_params) as conn:
Change-Id: I2f35b45ef237bb85ab8faf58a408c03fcb1de9d7
Currently, if we are supplied with a transport URL with only the virtual
host specified, we completely ignore it. Instead, the behaviour should
be that we use that virtual host with the host, port and credentials
from the config file.
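For example (the rabbit option names mentioned in the comment are
assumptions):

    try:
        import urllib.parse as urlparse   # Python 3
    except ImportError:
        import urlparse                   # Python 2

    url = 'rabbit:///child_cell'
    parsed = urlparse.urlparse(url)
    virtual_host = parsed.path.lstrip('/')   # 'child_cell'
    # parsed.hostname and parsed.port are None here, so the host, port
    # and credentials should still come from the config file values
    # (e.g. rabbit_host, rabbit_port, rabbit_userid, rabbit_password).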
Change-Id: Ic97aa511ddf9bce69b1a5069d9f6468f4bd6dd4c
'rabbit' is the canonical name for the driver and 'kombu' is just an
alias, so let's set the default to the canonical.
Change-Id: If163ece6793d2a7d6e99d0c8df1745bbcf9a36e6
__metaclass__ cannot be used in python 3; six should be used in general
for python 3 compatibility.
Porting Change-Id I9fc7a59df3af29b4cc1287c40fa4e883d994a961
from oslo-incubator
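For illustration, the pattern is the six.add_metaclass decorator (the
class here is made up):

    import abc

    import six


    # Python 2 only:
    #
    #     class Base(object):
    #         __metaclass__ = abc.ABCMeta
    #
    # Works on both Python 2 and 3:
    @six.add_metaclass(abc.ABCMeta)
    class Base(object):
        @abc.abstractmethod
        def poll(self):
            "Return the next incoming message."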
Change-Id: Icdacdcf5556b6d3b8450d1350c6f62b4f5a9690b
This makes the RPC version support three elements, adding a "revision"
in addition to the major and minor version. The revision would always
be zero for the master branch of a service, but could be incremented
for stable versions. This provides us some room to backport fixes
that affect RPC versions in such a way that would avoid breaking
the version lineage for systems running stable versions that may
some day be involved in a rolling upgrade to a version from master.
I didn't find any tests for version_is_compatible(), so I added
some for existing version scenarios, as well as new ones with
revisions. They also serve to validate that this doesn't break
anything for code using two-element versions (the expectation is that
two-element versions will still be used everywhere until a third
is needed).
Porting changes from Change-Id I239c17a3e305f572493498c4b96ee3c7514c5881
to oslo-incubator
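A hedged sketch of what a three-element-aware check could look like;
version_is_compatible is the helper named above, but this body is an
approximation, not the merged code:

    def version_is_compatible(imp_version, version):
        """Check whether an implemented version can satisfy a requested one."""
        version_parts = version.split('.')
        imp_version_parts = imp_version.split('.')

        # A missing third element is treated as revision 0.
        rev = int(version_parts[2]) if len(version_parts) > 2 else 0
        imp_rev = int(imp_version_parts[2]) if len(imp_version_parts) > 2 else 0

        if int(version_parts[0]) != int(imp_version_parts[0]):      # Major
            return False
        if int(version_parts[1]) > int(imp_version_parts[1]):       # Minor
            return False
        if (int(version_parts[1]) == int(imp_version_parts[1]) and
                rev > imp_rev):                                      # Revision
            return False
        return True

    # e.g. version_is_compatible('1.3', '1.2')   -> True
    #      version_is_compatible('1.2', '1.3')   -> False
    #      version_is_compatible('1.3.1', '1.3') -> True
    #      version_is_compatible('1.3', '1.3.1') -> False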
Change-Id: I4fa7b0be14a7afba36136a746b76036355f119b2
- packages is recursive, so there is no need to list subpackages
- versions are set by tags, so the version in the file is not needed
- non-d2to1-based pbr does not need the hook specification
Change-Id: Id5e6c19dfe81c630862e9b87b7f9e5f67a965945
This implements the server side of the driver without modifying the
existing code by allowing the driver to spawn off multiple greenthreads
as before, but queueing any dispatched messages so that the executor
can still do listener.poll() to dispatch messages itself.
This is a hack, but it's a starting point.
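A hedged sketch of the queueing idea; everything except the poll() name
is made up:

    try:
        import queue            # Python 3
    except ImportError:
        import Queue as queue   # Python 2


    class QueueingListener(object):
        """Incoming messages are queued instead of dispatched directly,
        so the executor still drives dispatch through poll()."""

        def __init__(self):
            self._incoming = queue.Queue()

        def _enqueue(self, message):
            # Called from the driver's own greenthreads, as before.
            self._incoming.put(message)

        def poll(self):
            # The executor calls this and dispatches whatever it gets back.
            return self._incoming.get()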
Change-Id: Ie299c2695d81d0473cea81d40114326b89de0011
This is the ZeroMQ server which acts as a proxy for all messages
destined to a particular host. Again, there are a bunch of FIXMEs
here. This still needs work.
Change-Id: I9384f486e44b0b0cbca028e219ad66f1990d5181
Get sending working with an initial version of the driver. There's a
bunch of FIXMEs inline reflecting that even the client side needs a
tonne of work yet.
Change-Id: I6d69ebc9ae3b3999832209e0c4100ffe26e35919
Modifications are:
- use stdlib logging; no huge need for oslo logging here
- stub out the _() function; we don't have any l10n infrastructure in
the project and may never have
- change imports to oslo.messaging.openstack.common and
oslo.messaging._drivers as appropriate
Change-Id: I87b85b79a33dec65e51ed95fff90cc56042240c5
Concurrency. Sigh.
A sequence of events like this is possible:
- We send a request from thread A
- Thread B, which is waiting for a response, gets scheduled
- Thread B receives our response and queues it up
- Thread B receives its own response and drops the connection lock
- Thread A grabs the connection lock and waits for a response to arrive
The obvious solution is that when we grab the connection lock, we should
check whether a previous lock-holding thread had already received our
response and queued it up.
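A hedged sketch of that check; the names and structure are illustrative
rather than the driver's actual code:

    import threading


    class ReplyWaiter(object):
        def __init__(self):
            self._conn_lock = threading.Lock()
            self._queued = {}   # msg_id -> reply queued by another waiter

        def queue_reply(self, msg_id, reply):
            # Called by whichever thread holds the connection when it sees
            # a reply destined for someone else.
            self._queued[msg_id] = reply

        def wait_for_reply(self, msg_id):
            with self._conn_lock:
                # Another thread may already have received our reply and
                # queued it while it held the connection.
                if msg_id in self._queued:
                    return self._queued.pop(msg_id)
                return self._consume_until(msg_id)

        def _consume_until(self, msg_id):
            # Placeholder: drain the connection, queueing replies meant for
            # other waiters, until the reply for msg_id arrives.
            raise NotImplementedError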
Change-Id: I88b0d55d5a40814a84d82ed4f42d5ba85d2ef7e0
There are a number of situations in which we log a message if an
exception occurs during the handling of a message:
1) Something goes wrong pulling the message from the queue and
de-serializing it - here we print "Failed to process message"
2) An RPC endpoint method raises an expected exception - here we
print an 'Expected exception during message handling' debug
message
3) An RPC endpoint method raises any other exception - here we
should print an 'Exception during message handling' error message
However, in the last case, we are currently printing out the 'Failed
to process' error message.
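A hedged sketch of the intended structure; the log messages are the
ones quoted above, the surrounding code is illustrative:

    import logging

    LOG = logging.getLogger(__name__)


    class ExpectedException(Exception):
        """Stand-in for the dispatcher's expected-exception wrapper."""


    def handle_message(listener, deserialize, dispatch):
        # Case 1: pulling the message off the queue or de-serializing fails.
        try:
            incoming = listener.poll()
            ctxt, method, args = deserialize(incoming)
        except Exception:
            LOG.exception("Failed to process message... skipping it.")
            return

        try:
            dispatch(ctxt, method, args)
        except ExpectedException:
            # Case 2: the endpoint raised an exception the caller expects.
            LOG.debug("Expected exception during message handling")
        except Exception:
            # Case 3: anything else; this branch was mistakenly logging the
            # 'Failed to process message' text before this change.
            LOG.exception("Exception during message handling")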
Change-Id: I4f7042b8ec978aaff8f4e20e62ba1ac765fe6ba5
On the server side, we only send replies if the request included a
_msg_id key. Also, the _reply_q key is only used when we wish to send a
reply.
So, in order to retain the exact same on-the-wire behaviour and ensure
servers aren't sending replies where none is needed, only include these
keys if we're doing a call (i.e. wait_for_reply=True).
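Illustratively (the helper and its signature are assumptions; the key
names come from the text above):

    def build_envelope(request, wait_for_reply, msg_id=None, reply_q=None):
        msg = dict(request)
        if wait_for_reply:
            # Only calls carry the keys that tell the server to send a reply.
            msg['_msg_id'] = msg_id
            msg['_reply_q'] = reply_q
        return msg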
Change-Id: Iac329493252be7d94b1ebe24f00e4d3f5c61d269
I added check_for_lock because I assumed it was enabled by default and
actively in use by Nova. However, it actually isn't used by Nova yet,
and enabling it spews a tonne of warnings.
It's a rather clunky API and there's a good chance we can design a
better API for it, so let's leave it out until we're ready to actually
start using it in Nova.
Related-Bug: #1063222
Change-Id: Ib890978398059f360cd0f3352f4755262b8111c6
Nova sends notifications with a bunch of different publisher_ids, so we
instantiate quite a lot of Notifier objects. Loading the notification
drivers for each of these is a substantial amount of overhead.
One obvious answer would be to make publisher_id an argument to the
error(), info(), etc. methods, but I think it's nice to encapsulate the
publisher_id in a notifier instance.
Instead, add a prepare() method which mirrors the approach in RPCClient.
You use this method to create a specialized notifier instance with a new
publisher_id.
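A rough usage sketch, assuming the oslo.messaging API of the time; the
publisher ids and payload are placeholders:

    from oslo.config import cfg
    from oslo import messaging

    transport = messaging.get_transport(cfg.CONF)

    # Drivers are loaded once, when this instance is created.
    notifier = messaging.Notifier(transport, publisher_id='compute.host1')

    # prepare() returns a specialized copy that shares the loaded drivers.
    notifier2 = notifier.prepare(publisher_id='compute.host2')
    notifier2.info({}, 'compute.instance.exists', {'instance_id': 'placeholder'})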
Change-Id: Ia45fda3164086bb7a9ef6dee0587f726ab8b1a97