[OVN][OVS] Different metadata_workers default based on driver

The two drivers take different approaches to the metadata agent; most
notably, the metadata agent for ML2/OVN runs on the compute nodes (it
is distributed) instead of on the controller nodes.

The previous default of "<# of CPUs> / 2" did not make sense for
ML2/OVN and, if left unchanged, could result in scaling problems
because of the number of connections to the OVSDB Southbound database,
as seen for example in this email thread [0].

This patch leaves the default of the "metadata_workers" option unset
(None) at registration time and then conditionally sets the effective
default based on the driver, as sketched below:

* ML2/OVS defaults to <# CPUs> // 2, as before.
* ML2/OVN defaults to 2, as suggested in the bug description and also
  used as the default in TripleO for the OVN driver.
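
For illustration, here is a minimal, self-contained sketch of this
pattern with oslo.config (not the actual neutron module layout; the
option list name and the fallback values below are illustrative):

    from oslo_config import cfg

    # Register the option without a hard-coded default so that it
    # resolves to None unless the operator sets it explicitly.
    opts = [
        cfg.IntOpt('metadata_workers',
                   sample_default='<num_of_cpus> / 2',
                   help='Number of separate worker processes for '
                        'metadata server'),
    ]
    cfg.CONF.register_opts(opts)

    # Each agent then applies its own fallback at start-up:
    md_workers = cfg.CONF.metadata_workers
    if md_workers is None:
        md_workers = 2  # ML2/OVN; ML2/OVS uses host.cpu_count() // 2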

[0]
http://lists.openstack.org/pipermail/openstack-discuss/2020-September/016960.html

Change-Id: I60d5dfef38dc130b47668604c04299b9d23b59b6
Closes-Bug: #1893656
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>

@@ -22,6 +22,7 @@ from neutron_lib.agent import topics
 from neutron_lib import constants
 from neutron_lib import context
 from neutron_lib import rpc as n_rpc
+from neutron_lib.utils import host
 from oslo_config import cfg
 from oslo_log import log as logging
 import oslo_messaging
@@ -364,9 +365,13 @@ class UnixDomainMetadataProxy(object):
     def run(self):
         server = agent_utils.UnixDomainWSGIServer(
             constants.AGENT_PROCESS_METADATA)
+        # Set the default metadata_workers if not yet set in the config file
+        md_workers = self.conf.metadata_workers
+        if md_workers is None:
+            md_workers = host.cpu_count() // 2
         server.start(MetadataProxyHandler(self.conf),
                      self.conf.metadata_proxy_socket,
-                     workers=self.conf.metadata_workers,
+                     workers=md_workers,
                      backlog=self.conf.metadata_backlog,
                      mode=self._get_socket_mode())
         self._init_state_reporting()

@@ -199,9 +199,13 @@ class UnixDomainMetadataProxy(object):
     def run(self):
         self.server = agent_utils.UnixDomainWSGIServer(
             'neutron-ovn-metadata-agent')
+        # Set the default metadata_workers if not yet set in the config file
+        md_workers = self.conf.metadata_workers
+        if md_workers is None:
+            md_workers = 2
         self.server.start(MetadataProxyHandler(self.conf),
                           self.conf.metadata_proxy_socket,
-                          workers=self.conf.metadata_workers,
+                          workers=md_workers,
                           backlog=self.conf.metadata_backlog,
                           mode=self._get_socket_mode())

@@ -12,7 +12,6 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 
-from neutron_lib.utils import host
 from oslo_config import cfg
 
 from neutron._i18n import _
@@ -93,10 +92,10 @@ UNIX_DOMAIN_METADATA_PROXY_OPTS = [
                      "'all': set metadata proxy socket mode to 0o666, to use "
                      "otherwise.")),
     cfg.IntOpt('metadata_workers',
-               default=host.cpu_count() // 2,
                sample_default='<num_of_cpus> / 2',
                help=_('Number of separate worker processes for metadata '
-                      'server (defaults to half of the number of CPUs)')),
+                      'server (defaults to 2 when used with ML2/OVN and half '
+                      'of the number of CPUs with other backend drivers)')),
     cfg.IntOpt('metadata_backlog',
                default=4096,
                help=_('Number of backlog requests to configure the '

@@ -0,0 +1,12 @@
+---
+upgrade:
+  - |
+    The default value of the ``metadata_workers`` configuration option
+    has changed to 2 for the ML2/OVN driver. For ML2/OVS the default
+    value remains the same. Each driver takes a different approach to
+    serving metadata to the instances, and the previous default value
+    of "<number of CPUs> / 2" did not make sense for ML2/OVN, as the
+    OVN metadata agents are distributed, running on Compute nodes
+    instead of Controller nodes. In fact, the previous default value
+    could cause scalability issues with ML2/OVN and was overridden by
+    deployment tools to avoid problems.
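
For illustration, the behaviour described in this note can be
summarized with a small sketch (this is not neutron code; the function
name and driver strings below are made up for the example):

    import multiprocessing

    def effective_metadata_workers(configured, mechanism_driver):
        """Return the worker count the metadata agent ends up using."""
        if configured is not None:       # an explicit setting always wins
            return configured
        if mechanism_driver == 'ovn':    # new ML2/OVN default
            return 2
        return multiprocessing.cpu_count() // 2  # ML2/OVS, unchanged

    print(effective_metadata_workers(None, 'ovn'))  # 2
    print(effective_metadata_workers(None, 'ovs'))  # half of the CPUs
    print(effective_metadata_workers(8, 'ovn'))     # 8 (explicit value)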