Because the driver should rely on the executor and not directly on
eventlet, delete the eventlet-related code. This also drops the old
driver API.
This is the amqp part.
Change-Id: Ic6060058dafa4dabbc5e8c68bf231c818a7fec25
Some modules have different names in Python 2 and Python 3. This patch
makes them compatible with Python 3.
* Use six.moves.filter instead of itertools.ifilter() in Python 2.
* Use common.py3kcompat.urlutils instead of urllib and urlparse.
Change-Id: Ia27ebf6057d91d0e129fbe90f995cfdaa89efa8a
In Python 3, some data structures' attributes differ from Python 2.
See http://pythonhosted.org/six/#object-model-compatibility
This is the change mapping:
six                   Python 2           Python 3
six.next(it)          it.next()          next(it)
six.iterkeys(dict)    dict.iterkeys()    dict.keys()
six.itervalues(dict)  dict.itervalues()  dict.values()
Implements: blueprint make-python3-compatible
Change-Id: Ida48f39ff230860feee7305b93b134c625a21663
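For reference, the Python 3 spellings in the right-hand column of the
mapping behave as follows (plain stdlib code; six simply dispatches to
the matching form on each interpreter):

```python
# Pure Python 3 forms that the six wrappers resolve to; under Python 2,
# six.next(it) would call it.next(), six.iterkeys(d) d.iterkeys(), etc.
it = iter([1, 2, 3])
first = next(it)              # six.next(it)

d = {'a': 1, 'b': 2}
keys = sorted(d.keys())       # six.iterkeys(d)
values = sorted(d.values())   # six.itervalues(d)

print(first, keys, values)    # -> 1 ['a', 'b'] [1, 2]
```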
In the current logic, the impl_qpid.Connection class constructor
indirectly calls the connection_create method twice. Let us avoid this.
Change-Id: I7618cf3506d857579dc37b338690d05179ba272d
We need to support deprecated transport driver configurations like:
rpc_backend = nova.rpc.impl_kombu
i.e. 'nova.rpc.impl_kombu' is a deprecated alias for 'rabbit'.
Initially, we supported this by adding the aliases to each project's
setup.cfg:
oslo.messaging.drivers =
    nova.rpc.impl_kombu = oslo.messaging._drivers.impl_rabbit:RabbitDriver
However, this means that code like this:
    url = str(TransportURL(conf))
generates a bogus URL string like:
    nova.rpc.impl_kombu://...
We need to apply these transport aliases when we load drivers, but also
when we create transport URLs from configuration containing potentially
deprecated aliases.
To enable that, add an aliases parameter to TransportURL(),
TransportURL.parse() and get_transport().
blueprint: transport-aliases
Change-Id: Ifce68ff62746c2a363c719417a2bd0a78ee025dd
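A minimal sketch of the intended alias resolution; the dict contents
and helper name here are illustrative only, since the real mapping is
supplied per project through the new aliases parameter:

```python
# Hypothetical alias table mapping deprecated backend names to
# canonical transport names, as applied when parsing a URL.
ALIASES = {'nova.rpc.impl_kombu': 'rabbit'}

def resolve_transport(name, aliases=ALIASES):
    # Return the canonical transport name for a possibly-deprecated one.
    return aliases.get(name, name)

print(resolve_transport('nova.rpc.impl_kombu'))  # -> rabbit
print(resolve_transport('rabbit'))               # -> rabbit
```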
The ack_on_error option is not used by the abstraction layer, and only
the rabbitmq driver implements it.
This commit removes the feature; a subsequent commit will add a new way
to achieve this.
Partially implements: blueprint notification-subscriber-server
Change-Id: I17eb23f2e3e374630251576438011f186e5b2150
Some modules use different names in Python 2 and Python 3. Use
six.moves to make them work in both.
This is the change mapping:
six.moves   Python 2   Python 3
reduce      reduce()   functools.reduce()
Implements: blueprint make-python3-compatible
Change-Id: I97971f2ab40385bfc2c73ae7e8a7620c4d64a03c
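As a stdlib illustration, six.moves.reduce resolves to
functools.reduce on Python 3:

```python
from functools import reduce  # six.moves.reduce points here on Python 3

# Sum a list with reduce; on Python 2 the builtin reduce() does the same.
total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4], 0)
print(total)  # -> 10
```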
No need to set tabstop 189 times; this can be set in your vimrc file
instead. Also, if set incorrectly, the gate (pep8 check) will catch
your mistakes.
Change-Id: Ic6f0c0ef94e8194a5c121598305d1ec3c74e4843
Handle the case where the context passed into pack_context() is a
dictionary. If a dictionary is passed in, we don't need to call
to_dict() before updating the msg.
Closes-Bug: #1208971
Change-Id: I2ce0b28f97634e717868e0ee5525189338d4981c
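A hedged sketch of the fixed logic; the names follow the commit text,
but the real function lives in the amqp driver and differs in detail:

```python
# Illustrative version of pack_context() after the fix: accept either a
# plain dict or a context object exposing to_dict().
def pack_context(msg, context):
    """Merge context attributes into the message dict."""
    # If the context is already a dict, no to_dict() call is needed.
    context_d = context if isinstance(context, dict) else context.to_dict()
    msg.update(('_context_%s' % key, value)
               for key, value in context_d.items())

msg = {'method': 'ping'}
pack_context(msg, {'user': 'alice'})
print(msg)  # -> {'method': 'ping', '_context_user': 'alice'}
```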
It seems there's no gain in trying to be smarter and different from the
base Python Exception, so let's remove our custom code to be more
compatible and friendly with all Python versions.
Change-Id: I259783ef1f77c6661ea7dc2325605c8d6290b898
When the QPID broker is restarted (or fails over), subscribed clients
will attempt to re-establish their connections. In the case of fanout
subscriptions, this reconnection functionality is broken. For version
1 topologies, the clients attempt to reconnect twice to the same
exclusive address - which is illegal. In the case of version 2
topologies, the address parsing is broken and an illegal address is
created on reconnect. This fix avoids the problem by removing the
special-case reconnect code that manages UUID addresses; it is
unnecessary as the QPID broker will generate unique queue names
automatically when the clients reconnect.
Closes-bug: #1251757
Change-Id: I6051fb503663bb8c7c5468db6bcde10f6cf1b318
This removes a few imports and global variables that are not used
anywhere in the code. That cleans things up a little.
Change-Id: I7b30bb11e8ad3c2df01ca2107eff2444feed3fe2
The notifier itself doesn't use the configuration, so let's not store
it; that lightens the dependency on this configuration object a bit.
Blueprint: messaging-decouple-cfg
Change-Id: Ic4b5ddd93ea0382bd8292f9e31b7dacba9b489d3
The standard Python logging system uses 'warning' and not 'warn'. To
ease compatibility with it, let's add 'warning' too.
Change-Id: I7778d7960ca7a72be007cb083e5434ede6e3fe6e
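A minimal sketch of adding the alias, assuming a small adapter class
(the class name here is illustrative, not the actual implementation):

```python
import logging

# Keep the legacy 'warn' method and add 'warning' to match the
# standard library logging spelling.
class LogAdapter(object):
    def __init__(self, logger):
        self._logger = logger

    def warn(self, msg, *args, **kwargs):
        self._logger.warning(msg, *args, **kwargs)

    # Alias matching the standard library's spelling.
    warning = warn

log = LogAdapter(logging.getLogger(__name__))
log.warning('connection lost')  # same as log.warn('connection lost')
```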
The uuidutils module will be deprecated in Icehouse, so we need to
replace it.
This patch uses str(uuid.uuid4()) instead of method generate_uuid.
Closes-Bug: #1253497
Change-Id: I35815544429c489096b4db3fa79a649f4cd9459f
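The stdlib replacement is a one-liner:

```python
import uuid

# Replacement for the deprecated uuidutils.generate_uuid():
request_id = str(uuid.uuid4())

# Canonical form is 36 characters in five hyphen-separated groups.
print(len(request_id), request_id.count('-'))  # -> 36 4
```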
There has been a bug open for a while pointing out that the way we
create direct exchanges with qpid results in leaking exchanges since
qpid doesn't support auto-deleting exchanges. This was somewhat
mitigated by the change to use a single reply queue. This meant we
created far fewer direct exchanges, but the problem persists anyway.
A Qpid expert, William Henry, originally proposed a change to address
this issue. Unfortunately, it wasn't backwards compatible with existing
installations. This patch takes the same approach, but makes it
optional and off by default. This will allow a migration period.
As a really nice side effect, the Qpid experts have told us that this
change will also allow us to use Qpid broker federation to provide HA.
DocImpact
Closes-bug: #1178375
Co-authored-by: William Henry <whenry@redhat.com>
Change-Id: I09b8317c0d8a298237beeb3105f2b90cb13933d8
oslo.messaging.Transport is used as an abstraction to redirect function
calls to the configured driver. _send_notification called
self._driver.send instead of self._driver.send_notification.
As the test
oslo.messaging.tests.TestTransportMethodArgs.test_send_notification also
assumed that send was the correct method, it has been changed
accordingly.
Change-Id: I9406d74f3dc13c44d1aaad5379aafbf1a8580137
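A minimal sketch of the corrected dispatch; the real Transport class in
oslo.messaging carries more state and arguments, and FakeDriver here is
a stand-in for a test double:

```python
# Fake driver that records which method was dispatched to.
class FakeDriver(object):
    def __init__(self):
        self.calls = []

    def send(self, target, ctxt, message):
        self.calls.append(('send', message))

    def send_notification(self, target, ctxt, message, version):
        self.calls.append(('send_notification', message))

class Transport(object):
    def __init__(self, driver):
        self._driver = driver

    def _send_notification(self, target, ctxt, message, version):
        # Fixed: previously this mistakenly called self._driver.send().
        self._driver.send_notification(target, ctxt, message, version)

t = Transport(FakeDriver())
t._send_notification('tgt', {}, {'event': 'x'}, 2.0)
print(t._driver.calls)  # -> [('send_notification', {'event': 'x'})]
```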
On the client side, in the rabbit and qpid drivers, we use a connection
pool to avoid opening a connection for each message we send. However,
there is only currently one connection pool per process:
    def get_connection_pool(conf, connection_cls):
        with _pool_create_sem:
            # Make sure only one thread tries to create the connection pool.
            if not connection_cls.pool:
                connection_cls.pool = ConnectionPool(conf, connection_cls)
        return connection_cls.pool
This is a nasty artifact of the original RPC having no concept of a
transport context - everything was a global. We'll fix this soon enough.
In the meantime, we need to make sure we only use this connection pool
where we're using the default transport configuration from the config
file - i.e. where we don't supply a transport URL.
The use case here is cells - we send messages to a remote cell by
connecting to it using a transport URL. In our devstack testing, the
two cells are on the same Rabbit broker but under different virtual
hosts. Because we were always using the connection pool on the client
side, we were seeing both cells always send messages to the '/' virtual
host.
Note - avoiding the connection pool in the case of cells is the same
behaviour as the current RPC code:
    def cast_to_server(conf, context, server_params, topic, msg,
                       connection_pool):
        ...
        with ConnectionContext(conf, connection_pool, pooled=False,
                               server_params=server_params) as conn:
Change-Id: I2f35b45ef237bb85ab8faf58a408c03fcb1de9d7
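The decision logic can be sketched as a tiny helper (illustrative
only; the actual drivers thread this through ConnectionContext's
pooled flag):

```python
# Pool connections only for the default transport from the config file;
# a caller-supplied transport URL (e.g. a remote cell's virtual host)
# must get a dedicated, unpooled connection.
def use_connection_pool(server_params):
    return server_params is None

print(use_connection_pool(None))                       # -> True
print(use_connection_pool({'virtual_host': '/cell2'})) # -> False
```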