f3a8e1547d
The two drivers take different approaches to the metadata agent: for ML2/OVN the metadata agent runs on the compute nodes (it is distributed) instead of the controller nodes. The previous default of "<# of CPUs> / 2" did not make sense for ML2/OVN and, if left unchanged, could result in scalability problems because of the number of connections to the OVSDB Southbound database, as seen in this email thread for example [0].

This patch puts a placeholder value (None) in the default field of the "metadata_workers" config option by not setting it immediately, and then conditionally sets the default value based on each driver:

* ML2/OVS defaults to <# CPUs> // 2, as before.
* ML2/OVN defaults to 2, as suggested in the bug description and also what TripleO defaults to for the OVN driver.

[0] http://lists.openstack.org/pipermail/openstack-discuss/2020-September/016960.html

Change-Id: I60d5dfef38dc130b47668604c04299b9d23b59b6
Closes-Bug: #1893656
Signed-off-by: Lucas Alvares Gomes <lucasagomes@gmail.com>
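The conditional-default scheme described above can be sketched as follows. This is a simplified illustration, not the actual Neutron implementation; the function name, parameter names, and the `driver` string values are hypothetical, and only the placeholder-then-resolve pattern is taken from the commit message.

```python
import multiprocessing

# Hypothetical sketch: the option ships with a placeholder default of
# None, and the effective value is chosen per driver only when the
# operator has not set it explicitly.

OVN_DEFAULT_WORKERS = 2  # value suggested in the bug report / TripleO


def resolve_metadata_workers(configured, driver):
    """Return the effective metadata_workers value.

    configured -- the operator-set value, or None (the placeholder)
    driver     -- 'ovn' or 'ovs' (illustrative names)
    """
    if configured is not None:
        return configured  # an explicit operator setting always wins
    if driver == 'ovn':
        # The OVN metadata agent runs distributed on every compute
        # node, so a large per-node worker count would multiply the
        # connections to the OVSDB Southbound database.
        return OVN_DEFAULT_WORKERS
    # ML2/OVS keeps the historical default of half the CPU count.
    return multiprocessing.cpu_count() // 2
```

The key design point is that the registered default stays `None`, so the code can distinguish "operator chose a value" from "use the per-driver default" at startup.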
---
upgrade:
  - |
    The default value for the ``metadata_workers`` configuration option
    has changed to 2 for the ML2/OVN driver. For ML2/OVS the default
    value remains the same. Each driver takes a different approach to
    serving metadata to the instances, and the previous default value of
    "<number of CPUs> / 2" did not make sense for ML2/OVN as the OVN
    metadata agents are distributed, running on Compute nodes instead of
    Controller nodes. In fact, the previous default value could cause
    scalability issues with ML2/OVN and was overridden by the deployment
    tools to avoid problems.