Before the move to pbr, I thought it would be cute to include the lists
of entry points in the public API. After pbr, the entry points are only
in a config file, so it doesn't make much sense.
When doing a rolling upgrade, we need to be able to tell all rpc clients
to hold off on sending newer versions of messages until all nodes
understand the new message version. This patch adds the oslo component
of this.
It's quite simple. The rpc proxy just stores the version cap and will
raise an exception if code ever tries to send a message that exceeds
this cap.
Allowing the cap to be configured and generating different types of
messages based on the configured value is the hard part here, but that
is left up to the project using the rpc library.
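As a rough sketch of the cap check (the helper names here are
illustrative, not necessarily what the proxy uses):

    class RpcVersionCapError(Exception):
        pass


    def check_version_cap(version, version_cap):
        # Refuse to send a message whose version exceeds the cap.
        if version_cap and not _is_compatible(version_cap, version):
            raise RpcVersionCapError(
                'Requested message version %s exceeds the configured '
                'cap %s' % (version, version_cap))


    def _is_compatible(cap, requested):
        # Compare numerically so that e.g. '1.10' is newer than '1.9'.
        cap_major, cap_minor = map(int, cap.split('.'))
        req_major, req_minor = map(int, requested.split('.'))
        return req_major == cap_major and req_minor <= cap_minor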
Implements blueprint rpc-version-control.
When an executor polls for a message, the driver needs to return the
request context and the message.
Later, if the executor wishes to send a reply, it needs to pass back
a handle which identifies the message so that the transport can deliver
the reply.
In the current interface, we just pass the context and message back
to the transport, which presumes that the transport attaches whatever
it needs to one of those objects.
In the AMQP drivers in openstack.common.rpc, we set attributes like
reply_q and msg_id on the returned context. However, it would be much
better if we never touched the user-supplied context and, instead, had
some other way to pass this info to the executor and then have it passed
back to the transport.
To achieve that, add an IncomingMessage abstract class which wraps the
context and message and has a reply() method. That way, transports can
subclass it, add whatever attributes they want and implement the
reply() method.
To repeat what this means: we can allow users of the API to use
read-only mapping objects as a context, rather than requiring the
context to be an object we can set arbitrary attributes on.
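A minimal sketch of the shape of that class (the exact signatures in
the tree may differ):

    import abc


    class IncomingMessage(abc.ABC):
        # Wraps what the driver received; drivers subclass this and
        # stash whatever they need (reply queue, message id, ...) on
        # the subclass rather than on the user-supplied context.

        def __init__(self, ctxt, message):
            self.ctxt = ctxt
            self.message = message

        @abc.abstractmethod
        def reply(self, reply=None, failure=None):
            """Deliver a reply back over the transport this came from."""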
Plumb a context dict all the way through the stack.
This is ugly, but:
- it seems valid to have generic support for a message context
that is implicitly included in all RPC interfaces
- rabbit/qpid serialize the context differently, so we really
need to pass it all the way down to the transport to continue
to support this
- notifications also send a context, so it's not rpc specific
A bunch of FIXMEs in here, but it seems like a good start.
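To illustrate the shape of this (a sketch with made-up class names,
not the real driver code), the context stays a separate argument at
every layer and only the driver decides how it goes on the wire:

    class FakeDriver(object):
        def send(self, target, ctxt, message):
            # Only the driver flattens the context into the wire
            # format; rabbit and qpid do this differently (e.g.
            # '_context_<key>' prefixed keys).
            on_the_wire = dict(message)
            on_the_wire.update(
                ('_context_%s' % k, v) for k, v in ctxt.items())
            print(on_the_wire)


    class Transport(object):
        def __init__(self, driver):
            self._driver = driver

        def send(self, target, ctxt, message):
            # The transport just hands both through untouched.
            self._driver.send(target, ctxt, message)


    class RPCClient(object):
        def __init__(self, transport, target):
            self.transport = transport
            self.target = target

        def cast(self, ctxt, method, **kwargs):
            self.transport.send(self.target, ctxt,
                                {'method': method, 'args': kwargs})


    RPCClient(Transport(FakeDriver()), 'topic.foo').cast(
        {'user_id': '42'}, 'do_something', value=1)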
Note that this differs from the original fake driver. Rather than the
driver consuming each message in a greenthread, we leave it up to the
chosen executor to determine how the consumer runs in parallel to the
client, and we use thread-safe queues to pass the messages and
replies back and forth. The main reason for this is that we don't want
the driver explicitly depending on eventlet.
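A sketch of the idea (simplified; class names assumed): the fake
driver just moves messages and replies over thread-safe queues and
leaves it to the executor to decide where poll() runs:

    import queue


    class FakeIncomingMessage(object):
        def __init__(self, ctxt, message, reply_q):
            self.ctxt = ctxt
            self.message = message
            self._reply_q = reply_q

        def reply(self, reply=None):
            if self._reply_q is not None:
                self._reply_q.put(reply)


    class FakeDriver(object):
        def __init__(self):
            self._inbox = queue.Queue()

        def send(self, ctxt, message, wait_for_reply=False):
            # Pair each message with its own reply queue when a reply
            # is wanted.
            reply_q = queue.Queue() if wait_for_reply else None
            self._inbox.put(FakeIncomingMessage(ctxt, message, reply_q))
            if wait_for_reply:
                return reply_q.get(timeout=10)

        def poll(self):
            # Called from whatever thread or greenthread the executor
            # chooses; no eventlet dependency in the driver itself.
            return self._inbox.get()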
The driver shouldn't be pulling the namespace and version from the
message, since that's RPC-specific stuff.
Also, it's not terribly useful for the driver to pass back a target
object describing the exchange and topic the message was received on
since that's implicit in the listener.
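A sketch of where that leaves the RPC-specific handling (names
assumed): the dispatcher, not the driver, reads namespace and version
out of the message dict:

    class RPCDispatcher(object):
        def __init__(self, endpoints):
            self.endpoints = endpoints

        def dispatch(self, ctxt, message):
            method = message['method']
            args = message.get('args', {})
            namespace = message.get('namespace')     # RPC-specific
            version = message.get('version', '1.0')  # RPC-specific

            for endpoint in self.endpoints:
                func = getattr(endpoint, method, None)
                if func is not None and self._compatible(
                        endpoint, namespace, version):
                    return func(ctxt, **args)
            raise RuntimeError(
                'no endpoint accepts %s (namespace=%s, version=%s)'
                % (method, namespace, version))

        @staticmethod
        def _compatible(endpoint, namespace, version):
            # Hypothetical compatibility check; the real rules are
            # richer than a namespace match.
            target = getattr(endpoint, 'target', None)
            return (target is None or
                    getattr(target, 'namespace', None) == namespace)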
This means you can do e.g.

    from openstack.common import messaging

    target = messaging.Target(...)
    transport = messaging.get_transport(...)

    class Client(messaging.RPCClient):
        ...

rather than e.g.

    from openstack.common.messaging.rpc import client
    from openstack.common.messaging import target
    from openstack.common.messaging import transport

    target = target.Target(...)
    transport = transport.get_transport(...)

    class Client(client.RPCClient):
        ...
With the MessagingServer API now private, we don't actually need to
expose the concept of an executor.
We may find in future that we want to support an executor type which
we don't want to include in the library itself but, for now, let's
be conservative.
I'm assuming for now that we'll have a specific notifications
consumption API which will use this as an internal implementation
detail. We can make this public again in future if/when we know
what the use case for it is.
These methods are private to the library, so we're prefixing them
with an underscore even though it's a bit unconventional.
See the discussion here:
https://github.com/markmc/oslo-incubator/pull/3
Rather than forcing all users of the server API to construct a
dispatcher and import a specific executor, add a convenience server
class, e.g.

    server = eventlet.EventletRPCServer(transport, target, endpoints)
Note that openstack.common.messaging.eventlet can be the only public
module which has a dependency on eventlet. We can expose servers,
clients and anything else eventlet-specific through this part of the
API.
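A sketch of how such a convenience class might be wired up (the
internal names here, like MessageHandlingServer, are assumptions):

    # openstack/common/messaging/eventlet.py (sketch)
    from openstack.common.messaging.rpc import dispatcher as rpc_dispatcher
    from openstack.common.messaging import server as msg_server


    class EventletRPCServer(msg_server.MessageHandlingServer):
        def __init__(self, transport, target, endpoints):
            # Hide the dispatcher construction and executor choice
            # from callers.
            super(EventletRPCServer, self).__init__(
                transport, target,
                rpc_dispatcher.RPCDispatcher(endpoints),
                executor='eventlet')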
Move the executors into a sub-package and organize the
code so that only one module imports eventlet.
Rename messaging/rpc/server.py to messaging/rpc/dispatcher.py and
leave only the dispatcher there.
Move the rest of the server code to messaging/server.py where it
can be reused with other dispatchers.
Remove the convenience functions for instantiating servers
to avoid having eventlet imported in the module with the base
class.
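For illustration, a sketch of the kind of executor that ends up
isolated in the sub-package (module and class names assumed); it is
the only place eventlet gets imported:

    # openstack/common/messaging/_executors/impl_eventlet.py (sketch)
    import eventlet


    class EventletExecutor(object):
        def __init__(self, listener, callback):
            self._listener = listener
            self._callback = callback
            self._thread = None

        def start(self):
            def _loop():
                while True:
                    incoming = self._listener.poll()
                    self._callback(incoming.ctxt, incoming.message)
            self._thread = eventlet.spawn(_loop)

        def stop(self):
            if self._thread is not None:
                self._thread.kill()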
Signed-off-by: Doug Hellmann <doug.hellmann@dreamhost.com>
The methods of the driver and transport should
be public, since they are used outside of those
classes.
Signed-off-by: Doug Hellmann <doug.hellmann@dreamhost.com>
There are a couple of cases, though, where having the driver instance
is a good thing for the Listener; I would like to use connection
management as the motivation here:
Instead of creating a new connection for every Listener, it would be
possible to let the driver instance manage the whole connect /
reconnect and session handling process.
In the old implementation, when a reconnect happens, the connection
instance calls every consumer and sets the new connection / session
on each of them.
See: http://github.com/openstack/oslo-incubator/blob/master/openstack/common/rpc/impl_qpid.py#L368
Listeners can access the config instance through the driver instance.
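A sketch of what that could look like (the method names here are
hypothetical): the listener holds a reference to its driver, the
driver owns connection and reconnect handling, and conf is reachable
through it:

    class QpidListener(object):
        def __init__(self, driver, target):
            self.driver = driver
            self.target = target

        @property
        def conf(self):
            # No separate config plumbing needed per listener.
            return self.driver.conf

        def poll(self):
            # The driver hands out the current (possibly
            # re-established) connection, so a reconnect never has to
            # walk every consumer the way the old impl_qpid code did.
            connection = self.driver.get_connection()
            return connection.consume(self.target)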