A bunch of FIXMEs in here, but it seems like a good start.
Note that this differs from the original fake driver. Rather than the
driver consuming each message in a greenthread, we leave it to the
chosen executor to determine how the consumer runs in parallel with
the client, and we use thread-safe queues to pass the messages and
replies back and forth. The main reason for this is that we don't want
the driver to depend explicitly on eventlet.
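As a rough sketch of the idea, the fake exchange can be little more
than a pair of thread-safe queues. The names below (FakeExchange,
send(), poll()) are illustrative only, not the actual driver code:

import queue  # the stdlib queue module; no eventlet involved

class FakeExchange(object):
    """Passes messages and replies between client and server threads."""

    def __init__(self):
        self._messages = queue.Queue()

    def send(self, msg, wait_for_reply=False, timeout=None):
        # Client side: queue the message and, optionally, block on a
        # per-message reply queue until the server responds.
        reply_q = queue.Queue() if wait_for_reply else None
        self._messages.put((msg, reply_q))
        if reply_q is not None:
            return reply_q.get(timeout=timeout)

    def poll(self, timeout=None):
        # Server side: the executor decides when and from which thread
        # this gets called, so the driver never spawns greenthreads.
        return self._messages.get(timeout=timeout)

Whichever endpoint handles the message just puts its reply on the
reply queue it received alongside the message.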
The driver shouldn't be pulling the namespace and version from the
message, since that's RPC-specific stuff.
Also, it's not terribly useful for the driver to pass back a target
object describing the exchange and topic the message was received on
since that's implicit in the listener.
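In other words, unpacking those fields is the dispatcher's job once it
gets the raw message from the listener. Roughly (the dispatcher shown
here is only a sketch, not the real code):

class RPCDispatcher(object):

    def __init__(self, endpoints):
        self.endpoints = endpoints

    def dispatch(self, ctxt, message):
        # RPC-specific fields are unpacked here, not in the driver;
        # 'version' would be handled much like 'namespace'.
        method = message['method']
        args = message.get('args', {})
        namespace = message.get('namespace')
        for endpoint in self.endpoints:
            target = getattr(endpoint, 'target', None)
            if target is not None and target.namespace != namespace:
                continue
            if hasattr(endpoint, method):
                return getattr(endpoint, method)(ctxt, **args)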
This means you can do e.g.
from openstack.common import messaging
target = messaging.Target(...)
transport = messaging.get_transport(...)
class Client(messaging.RPCClient):
...
rather than e.g.
from openstack.common.messaging.rpc import client
from openstack.common.messaging import target
from openstack.common.messaging import transport
target = target.Target(...)
transport = transport.get_transport(...)
class Client(client.RPCClient):
...
With the MessagingServer API now private, we don't actually need to
expose the concept of an executor.
We may find in future that we want to support an executor type which
we don't want to include in the library itself but, for now, let's
be conservative.
I'm assuming for now that we'll have a specific notifications
consumption API which will use this as an internal implementation
detail. We can make this public again in future if/when we know
what the use case for it is.
These methods are private to the library, so we're prefixing them
with an underscore even though it's a bit unconventional.
See the discussion here:
https://github.com/markmc/oslo-incubator/pull/3
Rather than forcing all users of the server API to construct a
dispatcher and import a specific executor, add a convenience server
class e.g.
server = eventlet.EventletRPCServer(transport, target, endpoints)
Note that openstack.common.messaging.eventlet should be the only
public module with a dependency on eventlet. We can expose servers,
clients and anything else eventlet-specific through this part of the
API.
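Such a convenience class can be a thin wrapper which plugs the
eventlet executor into the generic server. A sketch, assuming an
RPCServer base class and an _executors sub-package (the names and the
constructor signature here are assumptions):

from openstack.common.messaging import rpc
from openstack.common.messaging._executors import impl_eventlet


class EventletRPCServer(rpc.RPCServer):

    def __init__(self, transport, target, endpoints):
        # impl_eventlet is the only module which actually imports
        # eventlet; everything else stays eventlet-agnostic.
        super(EventletRPCServer, self).__init__(
            transport, target, endpoints,
            executor=impl_eventlet.EventletExecutor)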
Move the executors into a sub-package and organize the
code so that only one module imports eventlet.
Rename messaging/rpc/server.py to messaging/rpc/dispatcher.py and
leave only the dispatcher there.
Move the rest of the server code to messaging/server.py where it
can be reused with other dispatchers.
Remove the convenience functions for instantiating servers
to avoid having eventlet imported in the module with the base
class.
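With the convenience functions gone, callers that don't use the
eventlet-specific wrapper wire things up explicitly, along these
lines (the class names below, and the assumption that a blocking
executor lives alongside the eventlet one in the _executors
sub-package, are illustrative, not the final API):

from openstack.common import messaging
from openstack.common.messaging import server
from openstack.common.messaging.rpc import dispatcher
from openstack.common.messaging._executors import impl_blocking

transport = messaging.get_transport(...)
target = messaging.Target(...)
disp = dispatcher.RPCDispatcher(endpoints)
srv = server.MessagingServer(transport, target, disp,
                             executor=impl_blocking.BlockingExecutor)
srv.start()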
Signed-off-by: Doug Hellmann <doug.hellmann@dreamhost.com>
The methods of the driver and transport should
be public, since they are used outside of those
classes.
Signed-off-by: Doug Hellmann <doug.hellmann@dreamhost.com>
There are a couple of cases where having the driver instance
available is a good thing for the Listener, though. I would like to
use connection management as the motivation here:
Instead of creating a new connection for every Listener, it would be
possible to let the driver instance manage the whole connect /
reconnect and session handling process.
In the old implementation, when a reconnect happens, the connection
instance calls every consumer and sets the new connection / session
on them.
See: http://github.com/openstack/oslo-incubator/blob/master/openstack/common/rpc/impl_qpid.py#L368
Listeners can access the config instance through the driver instance.
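A rough sketch of that driver-managed reconnection (the class and
method names here are hypothetical, not the impl_qpid code):

class Listener(object):

    def __init__(self, driver, target):
        self.driver = driver      # also gives access to driver.conf
        self.target = target
        self.session = None

    def reset(self, session):
        # Called by the driver after a reconnect so the consumer can
        # be re-established on the new session.
        self.session = session


class Driver(object):

    def __init__(self, conf):
        self.conf = conf
        self._listeners = []
        self._session = None

    def listen(self, target):
        listener = Listener(self, target)
        self._listeners.append(listener)
        return listener

    def _reconnect(self, new_session):
        # The driver owns the connect/reconnect cycle; after
        # reconnecting it pushes the new session to every registered
        # listener, much as the old impl_qpid Connection does for its
        # consumers today.
        self._session = new_session
        for listener in self._listeners:
            listener.reset(new_session)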