Sphinx 1.8 introduced [1] the '--keep-going' argument which, as its name
suggests, keeps the build running when it encounters non-fatal errors.
This is exceptionally useful for avoiding a continuous edit-build loop
when undertaking large doc reworks where multiple errors may be
introduced.
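For example, a full docs rebuild that still reports every warning in a
single pass can be invoked as
`sphinx-build -W --keep-going -b html doc/source doc/build/html`
(builder and paths here are illustrative).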
[1] https://github.com/sphinx-doc/sphinx/commit/e3483e9b045
Change-Id: If9885a1f064226909181d8b69241eb814deb2105
Typically a simple log message will not narrow down a performance
problem on its own, but it does give us more information about the
service status.
Change-Id: I51c8f2743dd39cccd3d1d021d3c50dc09f70cd97
Closes-Bug: #1847747
Every in-tree driver that implements RPC send uses jsonutils.dumps to
serialize the message, except FakeDriver, which uses json.dumps.
However, json.dumps is a lot stricter than jsonutils.dumps, and this
caused nova to introduce test-specific changes in its RPC handling [1].
This patch makes sure that every driver uses the same JSON serialization.
I've tried to dig into the history of the strictness of the FakeDriver.
That driver, with the json.dumps() call, was added back in 2013 with
e2b74cc9e6605156dfd6e36cdfd1b5136161d526. (I cannot link to that commit
in any online way but it is in my local git clone.) Checking out that
commit I don't see any other drivers present in the repo; the code does
mention drivers like RabbitDriver and ZmqDriver in
oslo.messaging/openstack/common/messaging/drivers.py, but only there.
Today the oslo_messaging._drivers.common.serialize_msg() call is used
to do the final serialization of the message. It has used jsonutils.dumps
since Icd54ee8e3f5c976dfd50b4b62c7f51288649e112, which is a revert of
I0e0f6b715ffc4a9ad82be52e55696d032b6d0976 that had changed
jsonutils.dumps to jsonutils.dump_as_bytes by mistake. Before this
back and forth it was jsonutils.dumps since the code was imported from
oslo-incubator by I38507382b1ce68c7f8f697522f9a1bf00e76532d. Here
I lost the trail. Honestly, I don't know why the fake driver was made
stricter than the real drivers. Still, I think the strictness is
unnecessary today, as every driver uses jsonutils, and even
counterproductive, as seen in [1].
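A minimal sketch of the difference in strictness (the payload here is
illustrative; jsonutils falls back to oslo.serialization's
to_primitive() handler, which plain json.dumps does not have):

    import datetime
    import json

    from oslo_serialization import jsonutils

    msg = {'timestamp': datetime.datetime.utcnow()}

    # json.dumps has no fallback for non-primitive types and raises
    # TypeError on the datetime value, just like FakeDriver did.
    try:
        json.dumps(msg)
    except TypeError as exc:
        print('json.dumps rejected the message: %s' % exc)

    # jsonutils.dumps converts the datetime via to_primitive(),
    # matching what the real drivers accept.
    print(jsonutils.dumps(msg))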
[1] 09bf71407f/nova/compute/rpcapi.py (L820)
Change-Id: I186305b7897a2a4ce033c11ab9e6bc028854381b
Closes-Bug: #1529084
Add a file to the reno documentation build to show release notes for
stable/train.
Use the pbr Sem-Ver instruction (the 'Sem-Ver: feature' footer below)
to increment the minor version number automatically so that versions on
master are higher than the versions on stable/train.
Change-Id: I99e73a88a654b27cbd9334974413daf9e4a30e5d
Sem-Ver: feature
With this feature, the server will raise and log a MessageUndeliverable
exception, so an error can be logged immediately if the reply queue
does not exist for some reason.
This is part of blueprint transport-options
The blueprint link is [1]
Please follow the link [2] to use and test the feature.
[1] https://blueprints.launchpad.net/oslo.messaging/+spec/transport-options
[2] https://github.com/Gsantomaggio/rabbitmq-utils/tree/master/openstack/mandatory_test
Change-Id: Iac7474c06ef425a2afe5bcd912e51510ba1c8fb3
Introduce RabbitMQ driver documentation for admins.
Describing:
- some RabbitMQ and AMQP concepts (exchanges, queues, routing keys)
- the heartbeat behaviour and the types of threads used
- the driver options
Change-Id: I8fd1624834510f8dee81ab9342c708d726b8f827
This is an experimental feature.
The proposed changes fix issues seen when we run the heartbeat under an
apache/httpd environment with the apache MPM `prefork` engine [1] and
mod_wsgi or uwsgi in a monkey-patched environment.
The change allows users to choose to run the rabbitmq heartbeat health
check in a standard python thread.
Issue
=====
We are facing an issue with the rabbitmq driver heartbeat under the
apache MPM `prefork` module and mod_wsgi when nova_api monkey patches
the stdlib by using eventlet.
nova_api calls eventlet.monkey_patch() [2] when it runs under mod_wsgi.
This impacts the AMQP heartbeat thread, which is meant to be a native
thread. Instead of checking AMQP sockets every 15s, it is now suspended
and resumed by eventlet. However, resuming greenthreads can take a very
long time if mod_wsgi isn't processing traffic regularly, which can
cause rabbitmq to close the AMQP connection.
Root Cause
==========
The oslo.messaging RabbitMQ driver, and especially its heartbeat,
inherits the execution model of the service that consumes it.
In this scenario nova_api needs green threads to manage cells and edge
features, so it monkey patches the stdlib to obtain async features, and
the oslo.messaging rabbitmq driver has to live with these changes.
The main issue here is that nova_api wants async behaviour and uses
eventlet green threads to obtain it.
Solution
========
We want to allow users to isolate the heartbeat execution model from
the execution model inherited from the parent process by passing the
`heartbeat_in_pthread` option through the driver config.
While we use MPM `prefork` we want to avoid using libevent and epoll.
If the `heartbeat_in_pthread` option is given, we force the use of the
python stdlib threading module to run the rabbitmq heartbeat, to avoid
issues related to a non-"standard" environment. "Standard" here means
that async features are not the default configuration in most cases,
starting with apache, where `prefork` is the default engine.
This is an experimental feature; it helps us ensure that the heartbeat
runs in a classical python thread.
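The expected operator-facing usage (assuming the option is registered
alongside the other rabbit driver options, i.e. in the
`[oslo_messaging_rabbit]` group) is simply to set
`heartbeat_in_pthread = true` there; leaving it unset keeps the current
greenthread behaviour.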
Specifications
==============
- https://review.opendev.org/661314
[1] https://httpd.apache.org/docs/2.4/fr/mod/prefork.html
[2] 3c5e2b0e9f
Change-Id: If8846599efc48fe18ecfb99c04e2c38f9a45b9ed
There is no need to explicitly list the choices in the help text; the
oslo.config sample generator will include the choices automatically [0].
Also tweak the wording of the help text to make it clear that it is the
allowed values which vary based on the kafka version.
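As a rough illustration (the option name, default and choices below are
stand-ins, not copied from the driver), declaring `choices` on the opt
is enough for the generator to render something like
"Allowed values: ..." in the sample file:

    from oslo_config import cfg

    # The sample generator appends the allowed values to the rendered
    # help on its own, so the help string only needs to describe the
    # behaviour and the kafka-version caveat.
    compression_codec = cfg.StrOpt(
        'compression_codec',
        default='none',
        choices=('none', 'gzip', 'snappy', 'lz4'),
        help='The compression codec for all data generated by the '
             'producer; the allowed values depend on the kafka version.')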
Change-Id: I4116e8871436097dea650f56e7b187358367d92e
0: 2488c1e1ce/oslo_config/generator.py (L263)
There is a typographical error in amqpdriver.py. Correct the spelling
from acknowlege to acknowledge.
Change-Id: I4a80d8c6b162a99176eadb052f6201dc38dbc5f9
Some options are now automatically configured by version 1.20:
- project
- html_last_updated_fmt
- latex_engine
- latex_elements
- version
- release
Change-Id: Ib5e22f6a5374f05e576bbc00a209209fdb09acad
Having lots of exchanges creates problems during failover under high
load; please see the bug report for details.
This is the step 2 patch.
Step 1 was to only use the default exchange when publishing.
Step 2 updates consumers to only listen on the default exchange,
happening now in the T release.
Change-Id: Ib2ba62a642e6ce45c23568daeef9703a647707f3
Closes-Bug: #1789177
Use a sensible header style, fix some syntax highlighting, and generally
tidy things up.
Change-Id: I0b141b968ed8db10ff41a626569dd185edbdc641
Signed-off-by: Stephen Finucane <sfinucan@redhat.com>
With this feature, it is possible to specialize the parameters to send:
`options = oslo_messaging.TransportOptions(at_least_once=True)`
TransportOptions is passed to every single driver; in the RabbitMQ
driver, for example, it is used to handle the mandatory flag.
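A minimal end-to-end sketch of the intended client-side usage (the
topic, method and context are illustrative, the transport_options
argument on RPCClient is the one added by the first part of this
blueprint, and the exception name assumes the new MessageUndeliverable
class is exported at the package level):

    from oslo_config import cfg

    import oslo_messaging

    transport = oslo_messaging.get_rpc_transport(cfg.CONF)
    target = oslo_messaging.Target(topic='demo_topic')
    options = oslo_messaging.TransportOptions(at_least_once=True)

    # at_least_once=True is mapped by the rabbit driver onto the AMQP
    # mandatory flag, so an unroutable message is returned to the
    # sender instead of being silently dropped.
    client = oslo_messaging.RPCClient(transport, target,
                                      transport_options=options)
    try:
        client.call({}, 'ping')
    except oslo_messaging.MessageUndeliverable:
        # No queue was bound to receive the message, so RabbitMQ
        # returned it to us.
        pass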
Notes:
- The idea of creating a new TransportOptions class is to have an
  abstraction that is not tied only to the RPCClient.
- at_least_once is the first parameter; when needed we can add the
  others.
Implements: blueprint transport-options (second point)
The blueprint link is [1]
To test it you can use [2]
[1] https://blueprints.launchpad.net/oslo.messaging/+spec/transport-options
[2] https://github.com/Gsantomaggio/rabbitmq-utils/tree/master/openstack/mandatory_test
Change-Id: I1858e4a990507d3c2bac2ef7fbef75d8c2dbfce2
When messages are large, we need to compress them before sending in
order to improve kafka's efficiency, so we need to support kafka
message compression.
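Operators would then pick the codec through the kafka driver
configuration (for illustration, something like a `compression_codec`
option in the `[oslo_messaging_kafka]` group; the exact option name
here is an assumption), with no compression remaining the default.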
Change-Id: I9e86d43ad934c1f82dc3dcf93d317538f9d2568e
Implements: blueprint support-kafka-compression
With this new parameter it is possible to pass other parameters from
the client to the drivers, so it is possible to tune the driver
behaviour. For example, it can be used to send the mandatory flag in
RabbitMQ.
Note:
- The transport_options parameter is not actually used (yet).
- This is part of blueprint transport-options (first part).
Implements: blueprint transport-options
The blueprint link is
https://blueprints.launchpad.net/oslo.messaging/+spec/transport-options
Change-Id: Iff23a9344c2c87259cf36b0d14c0a9fc075a2a72
It seems that versions are deleted from www.apache.org pretty quickly.
They stick around longer on archive.apache.org so we won't have to
be constantly chasing the latest version in our functional tests.
Change-Id: I047edac67699dd598f8dfd0f859b3772f6068bd3
Bandit 1.6.0 accidentally changed how the exclusion list option is
handled, which breaks our use of it. Cap to the previous version until
Bandit has fixed the problem.
Sphinx 2.0 no longer works on python 2.7, so we need to start
capping it there as well.
Change-Id: Ie6b379f2c99862c37891ac03c52464e07bc2b2cc