The dnsmasq PXE filter [1] provides far better scalability than the
iptables filter typically used. Inspector manages files in a dhcp-hostsdir
directory that is watched by dnsmasq via inotify. Dnsmasq then either
whitelists or blacklists MAC addresses based on the contents of these
files.
This change adds a new variable, ironic_inspector_pxe_filter, that can
be used to configure the PXE filter for ironic inspector. Currently
supported values are 'iptables' and 'dnsmasq', with 'iptables' being the
default for backwards compatibility.
[1] https://docs.openstack.org/ironic-inspector/latest/admin/dnsmasq-pxe-filter.html
Implements: blueprint ironic-inspector-dnsmasq-pxe-filter
Change-Id: I73cae9c33b49972342cf1984372a5c784df5cbc2
The variable {{ node_config_directory }} is used for the configuration
directory on the remote hosts, and should not be used for paths on the
deploy host (localhost).
This change updates the default values of the TLS certificate and CA
file to reference {{ CONFIG_DIR }}, in line with the directory used for
admin-openrc.sh (as of I0709482ead4b7a67e82796e17f85bde151e71bc0).
This change also introduces a variable, {{ node_config }}, that
references {{ CONFIG_DIR | default('/etc/kolla') }}, to remove
duplication.
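A minimal sketch of the resulting defaults; the TLS variable and file
names below are illustrative assumptions, not confirmed by this change:

    node_config: "{{ CONFIG_DIR | default('/etc/kolla') }}"
    kolla_external_fqdn_cert: "{{ node_config }}/certificates/haproxy.pem"
    kolla_external_fqdn_cacert: "{{ node_config }}/certificates/haproxy-ca.crt"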
Change-Id: Ibd82ac78630ebfff5824c329d7399e1e900c0ee0
Closes-Bug: #1804025
The concept of splitting the compute group into external/internal just
to specify agent_mode for Neutron DVR was deemed to be heavy-handed, and
was deprecated in the Pike cycle.
Now that Rocky has been released, these groups can be removed completely
in Stein.
Change-Id: I28a1eba7f40fee55a7ec41c27451e39e4d7fd8f0
If the [processing] ramdisk_logs_dir option is set, logs returned by the
ironic inspection ramdisk following hardware inspection will be stored
at that location. This enables easier debugging if inspection fails.
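For example, in ironic-inspector.conf (the directory shown is
illustrative):

    [processing]
    ramdisk_logs_dir = /var/log/ironic-inspector/ramdisk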
Change-Id: I36bdf75c04b088b67b5f54fdf20251c10bdddb63
The Monasca Grafana fork allows users to log into Grafana with their
OpenStack user credentials and see metrics associated with their
OpenStack project. The long term goal is to enable Keystone support
in upstream Grafana, but this work seems to have stalled.
Partially-Implements: blueprint monasca-grafana
Change-Id: Icc04613b2571c094ae23b66d0bcc38b58c0ee4e1
This change deploys the Monasca Agent, which collects metrics, across
the control plane. These metrics are collected into an OpenStack
project. The change supports configuring a small number of plugins,
which can be extended in later commits. It also makes the Monasca Agent
credentials available to other roles, such as the common role, to allow
forwarding logs to Monasca.
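As a sketch of how one such plugin might be configured, assuming the
agent's http_check plugin and its conf.d YAML format (this example is
hypothetical and not part of this change):

    # /etc/monasca/agent/conf.d/http_check.yaml (hypothetical example)
    init_config: null
    instances:
      - name: keystone
        url: http://127.0.0.1:5000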
Partially-Implements: blueprint monasca-roles
Change-Id: I76b34fc5e1c76407a45fcf272268d5798b473ca2
Currently, serial consoles accessed through Horizon time out after
haproxy_client_timeout (default: 1m) of inactivity. This change allows
a larger timeout to be set.
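A globals.yml sketch of overriding the new timeout; the variable name
below is assumed from this change's intent and may differ:

    haproxy_nova_serialconsole_proxy_tunnel_timeout: "10m"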
Change-Id: I2a9923cb69d5db976395146685aded83922c4120
Closes-Bug: #1800643
Apply the Swift rolling upgrade procedure based on recommendations from
Swift PTL John Dickinson at [1].
[1] https://www.swiftstack.com/blog/2013/12/20/upgrade-openstack-swift-no-downtime/
Co-Authored-By: Surya Prakash <singh.surya64mnnit@gmail.com>
Change-Id: I99f505438916be2f89b24df20506339604e5bd6e
Implements: blueprint apply-service-upgrade-procedure
This patchset implements Neutron rolling upgrade logic as described
in [1].
Since only neutron, neutron-vpnaas, and neutron-fwaas currently support
rolling upgrade database migrations, the list
"neutron_rolling_upgrade_services" in neutron/defaults/main.yml contains
these services.
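A sketch of that list, with the entries assumed from the description
above:

    neutron_rolling_upgrade_services:
      - neutron
      - neutron-vpnaas
      - neutron-fwaas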
[1] https://docs.openstack.org/neutron/latest/contributor/internals/upgrade.html
Co-Authored-By: Ha Manh Dong <donghm@vn.fujitsu.com>
Change-Id: I2ed2f941d30d4df0d0f42c0d10e7ca03ec1c166a
Implements: blueprint apply-service-upgrade-procedure
This change adds two new parameters (migration_interface,
migration_interface_address) to make the use of a dedicated migration
network possible.
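A minimal globals.yml sketch (the interface name and address are
illustrative):

    migration_interface: "eth2"
    # Alternatively, set the address directly:
    # migration_interface_address: "10.10.10.15"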
Change-Id: I723c9bea9cf1881e02ba39d5318c090960c22c47
Even though Kolla services are configured to log output to file rather
than stdout, some output is still written to stdout, for example when a
container (re)starts. Since the Docker logs are not constrained in size,
they can fill up the Docker volumes drive and bring down the host. One
example of when this is particularly problematic is when Fluentd cannot
parse a log message. The warning output is written to the Docker log,
and in production we have seen it consume 100 GB of disk space in less
than a day. We could configure Fluentd not to do this, but the problem
may still occur via another mechanism.
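One way to bound log growth, assuming the baremetal role's
docker_custom_config variable (written to /etc/docker/daemon.json) and
the json-file log driver; a sketch, not necessarily the exact mechanism
used by this change:

    docker_custom_config:
      log-driver: "json-file"
      log-opts:
        max-size: "50m"
        max-file: "5"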
Change-Id: Ia6d3935263a5909c71750b34eb69e72e6e558b7a
Closes-Bug: #1794249
The Monasca Persister reads metrics from Kafka and stores them
in a configurable time series database.
Change-Id: I8166b32bfb1583098ab8318a5f38d25bddb81e89
Partially-Implements: blueprint monasca-roles
The Monasca Notification engine generates notifications, such as Slack
messages, from alarms.
Change-Id: I84861d5feefe6b6f38acc4dd71e94c386d40b562
Partially-Implements: blueprint monasca-roles
Monasca Thresh is a Storm topology which generates alerts from
metric streams according to alarms defined via the Monasca API.
This change runs the thresholder in local mode, which means that
the log output for the topology is directed to stdout and the
topology is restarted if the container is restarted. A future
change will improve log collection and introduce a better way of
checking that the topology is running on multi-node clusters.
Change-Id: I063dca5eead15f3cec009df62f0fc5d857dd4bb0
Partially-Implements: blueprint monasca-roles
Having all services in one giant haproxy file makes altering
configuration for a service both painful and dangerous. Each service
should be configured with a simple set of variables and rendered with a
single unified template.
Two new templates are available:
* haproxy_single_service_listen.cfg.j2: close to the original style, but
only one service per file
* haproxy_single_service_split.cfg.j2: uses the newer haproxy syntax
with separate frontend and backend sections
For now the default will be the single listen block, for ease of
transition.
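A hypothetical sketch of the simple per-service variables that could
feed the unified template (the names and structure are illustrative, not
the final schema):

    glance_api_haproxy:
      glance_api:
        enabled: "{{ enable_glance }}"
        mode: "http"
        external: false
        port: "{{ glance_api_port }}"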
Change-Id: I6e237438fbc0aa3c89a3c8bd706a53b74e71904b
The log metrics service generates metrics from log messages, which
allows further analysis and alerting to be performed on them. Basic
configuration is provided so that metrics are generated for high
severity logs, such as those at the error or warning level.
Change-Id: I45cc17817c716296451f620f304c0b1108162a56
Partially-Implements: blueprint monasca-roles
- Uses swift if swift is enabled.
- Uses ceph if ceph is enabled.
- Defaults to file if both swift and ceph are enabled; in that case,
  explicitly set the backend to swift or ceph (see the sketch below).
- Includes swift client details in the storage section of the gnocchi
  configuration.
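A sketch of the selection logic, assuming Kolla's enable_swift and
enable_ceph flags (the variable name gnocchi_backend_storage is an
assumption):

    gnocchi_backend_storage: >-
      {{ 'swift' if enable_swift | bool and not enable_ceph | bool
         else 'ceph' if enable_ceph | bool and not enable_swift | bool
         else 'file' }}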
Change-Id: I78df9a2fbe546038e1d6df350d8db0fd9b6f6d49
For now, the api interface/network is used for Octavia. This change
makes Octavia deployment with Kolla more flexible when another network
is to be used for managing amphora instances (configuration, health
checks, clean up).
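A globals.yml sketch, assuming a dedicated interface variable for the
amphora management network (the variable name and interface are
assumptions):

    octavia_network_interface: "eth3"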
Change-Id: Ief12f1f8b6c7d3974932e6320af95bb58d46bdb9
Co-Authored-By: Duc Nguyen Cong <ducnc7@viettel.com.vn>
Closes-Bug: #1791207
In some cases a deployer may want to use haproxy for SSL termination
but have external infrastructure for load balancing, and thus have no
need for keepalived to manage the VIP.
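A globals.yml sketch of this scenario, using Kolla's existing enable
flags (the VIP address is illustrative and would be managed by the
external infrastructure):

    enable_haproxy: "yes"
    enable_keepalived: "no"
    kolla_internal_vip_address: "10.0.0.10"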
Co-Authored-By: Adam Harwell <flux.adam@gmail.com>
Change-Id: I451d7e33f1e631038a8d198dbc33c9a8850571b7
To create a Magnum cluster, it is required to specify
'default_docker_volume_type' with some default value (the default Cinder
volume type). This also enables users to select different Cinder volume
types for their volumes.
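A globals.yml sketch, assuming the Kolla variable keeps the Magnum
option's name (the volume type value is illustrative):

    default_docker_volume_type: "lvmdriver-1"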
Change-Id: I50b4c436875e4daac48a14fc1e119136eb5fd844