kolla-ansible upgrade failed when octavia_auto_configure was set
to true, because the upgrade action does not register the resource
info.
This change adds tasks to the octavia role to query the resource
info during the upgrade action.
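A minimal sketch of the kind of lookup task added (assuming the
openstack.cloud collection; the auth variable is illustrative):

    - name: Query lb-mgmt-net info during upgrade
      openstack.cloud.networks_info:
        auth: "{{ octavia_user_auth }}"  # hypothetical auth dict
        name: lb-mgmt-net
      register: lb_mgmt_net_info
      run_once: true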
Change-Id: I4b0ac001b38bee81d983dd68534b9d0e78b4f6d7
We currently use the octavia user to upload the amphora image, so it is
better to create an octavia openrc file for that user.
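A sketch of the kind of openrc file generated (all values are
illustrative, not taken from this change):

    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_NAME=service
    export OS_USERNAME=octavia
    export OS_PASSWORD=<octavia_keystone_password>
    export OS_AUTH_URL=http://192.0.2.10:5000
    export OS_IDENTITY_API_VERSION=3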
Implements: blueprint implement-automatic-deploy-of-octavia
Change-Id: Ib53d00fa4a6ee59b8a0b2245f83786a6af0cbf53
This patch set has implemented:
- network (lb-mgmt-net)
- security groups and rules (used by the amphora and health manager)
- amphora flavor (used by the amphora)
- nova keypair (used to access the amphora when debugging)
Add an octavia_amp_listen_port variable, which is used by the amphora.
Add amp_image_owner_id to octavia.conf.
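A hedged sketch of how the two new settings render in octavia.conf
(section names follow octavia's config layout; the Jinja expression for
the owner id is a hypothetical placeholder):

    [haproxy_amphora]
    bind_port = {{ octavia_amp_listen_port }}

    [controller_worker]
    amp_image_owner_id = {{ octavia_service_project_id }}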
Implements: blueprint implement-automatic-deploy-of-octavia
Co-Authored-By: zhangchun <zhangchun@yovole.com>
Depends-On: https://review.opendev.org/652030
Change-Id: I67009d046925cfc02c1e0073c80085c1471975f6
When the internal VIP is moved in the event of a failure of the active
controller, OpenStack services can become unresponsive as they try to
talk with MariaDB using connections from the SQLAlchemy pool.
It has been argued that OpenStack doesn't really need to use connection
pooling with MariaDB [1]. This commit reduces the use of connection
pooling via two configuration options:
- max_pool_size is set to 1 to allow only a single connection in the
pool (it is not possible to disable connection pooling entirely via
oslo.db, and max_pool_size = 0 means unlimited pool size)
- connection_recycle_time is lowered from the default of one hour to 10
seconds, which means the single connection in the pool will be
recreated regularly
These settings have been shown to improve the system's reactivity in
the event of a failover.
[1] http://lists.openstack.org/pipermail/openstack-dev/2015-April/061808.html
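The resulting oslo.db settings, as applied in the service configs:

    [database]
    max_pool_size = 1
    connection_recycle_time = 10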
Change-Id: Ib6a62d4428db9b95569314084090472870417f3d
Closes-Bug: #1896635
This change adds support for encryption of communication between
OpenStack services and RabbitMQ. Server certificates are supported, but
currently client certificates are not.
The kolla-ansible certificates command has been updated to support
generating certificates for RabbitMQ for development and testing.
RabbitMQ TLS is enabled in the all-in-one source CI jobs, or when
the Zuul 'tls_enabled' variable is true.
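A minimal sketch of enabling this in globals.yml (the variable name is
an assumption, not stated in this message):

    rabbitmq_enable_tls: "yes"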
Change-Id: I4f1d04150fb2b5af085b762890092f87ae6076b5
Implements: blueprint message-queue-ssl-support
Recently a patch [1] was merged to stop adding the octavia user to the
admin project, and remove it on upgrade. However, the octavia
configuration was not updated to use the service project, causing load
balancer creation to fail.
There is also an issue for existing deployments in simply switching to
the service project. While existing load balancers appear to continue to
work, creating new load balancers fails due to the security group
belonging to the admin project. At a minimum, the deployer needs to
create a security group in the service project, and update
'octavia_amp_secgroup_list' to match its ID. Ideally the flavor and
network would also be recreated in the service project, although this
does not seem to impact operation, and doing so would result in
downtime for existing Amphorae.
This change adds a new variable, 'octavia_service_auth_project', that
can be used to set the project. The default in Ussuri is 'service',
switching to the new behaviour. For backports of this patch it should be
switched to 'admin' to maintain compatibility.
If a deployer sets 'octavia_service_auth_project' to 'admin', the
octavia user will be assigned the admin role in the admin project, as
was done previously.
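For example, when backporting, a deployer keeps the old behaviour via
globals.yml:

    octavia_service_auth_project: "admin"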
Closes-Bug: #1882643
Related-Bug: #1873176
[1] https://review.opendev.org/720243/
Co-Authored-By: Mark Goddard <mark@stackhpc.com>
Change-Id: I1efd0154ebaee69373ae5bccd391ee9c68d09b30
The octavia service communicates with the barbican service using the
public endpoint_type by default [1]; it should use the internal
endpoint like other services.
[1] 0056b5175f/octavia/common/config.py (L533-L537)
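A sketch of the corresponding octavia.conf change (section and option
per the referenced config.py; the exact value rendered by the template
is an assumption):

    [certificates]
    endpoint_type = internal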
Closes-Bug: #1875618
Change-Id: I90d2b0aeac090a3e2366341e260232fc1f0d6492
Adds the necessary "region_name" to octavia.conf when
"enable_barbican" is set to "true".
Closes-Bug: #1867926
Change-Id: Ida61cef4b9c9622a5e925bac4583fba281469a39
This fixes Octavia in scenarios that require providing a CA
certificate (self-signed, internally-signed).
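A hedged sketch of the kind of setting involved (octavia exposes
ca_certificates_file options for its service clients; the sections and
the Jinja variable shown are assumptions, not taken from this change):

    [glance]
    ca_certificates_file = {{ openstack_cacert }}

    [neutron]
    ca_certificates_file = {{ openstack_cacert }}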
Change-Id: I60b7ec85f4fd8bbacf5df0ab7ed9a00658c91871
Closes-Bug: #1872404
The use of default(omit) is for module parameters, not templates. We
define a default value for openstack_cacert, so it should never be
undefined anyway.
Change-Id: Idfa73097ca168c76559dc4f3aa8bb30b7113ab28
Include a reference to the globally configured Certificate Authority in
all services. Services use the CA to verify HTTPS connections.
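For example, in globals.yml (the path is illustrative):

    openstack_cacert: "/etc/pki/tls/certs/ca-bundle.crt"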
Change-Id: I38da931cdd7ff46cce1994763b5c713652b096cc
Partially-Implements: blueprint support-trusted-ca-certificate-file
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add AF config to vars.
Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
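A sketch of how the two filters combine in a template (usage pattern is
illustrative):

    {{ 'api' | kolla_address | put_address_in_context('url') }}
    # 192.0.2.1 -> 192.0.2.1
    # fd00::1   -> [fd00::1]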
Other changes:
- globals.yml - mention just IP in the comment
- prechecks/port_checks (api_intf) - kolla_address handles validation
- 3x interface conditional (swift configs: replication/storage)
- 2x interface variable definition with hostname
  (haproxy listens; api intf)
- 1x interface variable definition with hostname with bifrost exclusion
  (baremetal pre-install /etc/hosts; api intf)
- neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on the tunnel
  network (see the snippet after this list)
- basic multinode source CI job for IPv6
- prechecks for rabbitmq and qdrouterd now use the proper NSS database
- MariaDB Galera Cluster WSREP SST mariabackup workaround
  (socat and IPv6)
- Ceph naming workaround in CI (TODO: probably needs documenting)
- RabbitMQ IPv6-only proto_dist
- Ceph ms switch to IPv6 mode
- removal of the neutron-server ml2_type_vxlan/vxlan_group setting, as
  it is not used (let's avoid any confusion) and could break setups
  without proper multicast routing if it started working (it is also
  IPv4-only)
- haproxy upgrade checks for slaves based on IPv6 addresses
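The ml2 change mentioned above, as it lands in ml2_conf.ini:

    [ml2]
    overlay_ip_version = 6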
TODO:
- ovs-dpdk grabs an IPv4 network address (w/ prefix len / netmask);
  not supported, invalid by default because neutron_external has no
  address. No idea whether ovs-dpdk works at all atm.
- ml2 for xenapi: Xen is not supported too well; this would require
  working with XenAPI facts.
- rp_filter setting: this would require meddling with ip6tables (there
  is no sysctl param). By default nothing is dropped; it is unlikely we
  really need it.
- ironic dnsmasq is configured IPv4-only; dnsmasq needs DHCPv6 options
  and testing in vivo.
KNOWN ISSUES (beyond us):
- One cannot use an IPv6 address to reference the image for docker like
  we currently do, see: https://github.com/moby/moby/issues/39033
  (docker_registry; docker API 400 - invalid reference format).
  Workaround: use a hostname/FQDN.
- RabbitMQ may fail to bind to IPv6 if the hostname also resolves to
  IPv4. This is due to the old RabbitMQ versions available in images:
  IPv4 is preferred by default and may fail in the IPv6-only scenario.
  This should be no problem in real life, as IPv6-only is indeed
  IPv6-only. Also, once a newer RabbitMQ (3.7.16/3.8+) makes it into
  the images, this will no longer be relevant, as we supply all the
  necessary config.
  See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
For reliable runs, at least Ansible 2.8 is required (2.8.5 is confirmed
to work well). Older Ansible versions are known to miss IPv6 addresses
in interface facts. This may affect redeploys, reconfigures and
upgrades which run after the VIP address is assigned.
See: https://github.com/ansible/ansible/issues/63227
Bifrost Train does not support IPv6 deployments.
See: https://storyboard.openstack.org/#!/story/2006689
Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
octavia.conf is missing configuration values required to do service
catalog lookups in multi-region environments. Without them, Octavia
can try to contact a service in a different region than its own. Specify
region_name and endpoint_type for the glance, neutron, and nova services
to prevent this from happening.
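The resulting octavia.conf sections (option names per this change; the
Jinja variable and endpoint value are the usual kolla ones and are
assumptions here):

    [glance]
    region_name = {{ openstack_region_name }}
    endpoint_type = internal

    [neutron]
    region_name = {{ openstack_region_name }}
    endpoint_type = internal

    [nova]
    region_name = {{ openstack_region_name }}
    endpoint_type = internal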
Change-Id: I753cf443c1506bbd7b69fc47e2e0a9b39857509c
Closes-Bug: #1841479
This allows octavia service endpoints to use custom hostnames, and adds the
following variables:
* octavia_internal_fqdn
* octavia_external_fqdn
These default to the old values of kolla_internal_fqdn or
kolla_external_fqdn.
This also adds an octavia_api_listen_port option, which defaults to
octavia_api_port for backward compatibility.
This option allows the user to differentiate between the port the
service listens on and the port the service is reachable on. This is
useful for external load balancers which live on the same host as the
service itself.
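For example, in globals.yml (all values are illustrative):

    octavia_external_fqdn: "lb.example.com"
    octavia_internal_fqdn: "lb.internal.example.com"
    octavia_api_listen_port: "9876"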
Change-Id: I1310eb5573a469b1a0e9549e853734455307a8b3
Implements: blueprint service-hostnames
We're duplicating code to build the keystone URLs in nearly every
config, where we've already done it in group_vars. Replace the
redundancy with a variable that does the same thing.
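A sketch of the substitution in a service config template (the
group_vars variable name follows kolla conventions and is an assumption
here):

    # before, repeated in nearly every config:
    auth_url = {{ internal_protocol }}://{{ kolla_internal_fqdn }}:{{ keystone_admin_port }}
    # after:
    auth_url = {{ keystone_admin_url }}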
Change-Id: I207d77870e2535c1cdcbc5eaf704f0448ac85a7a
In a multi-controller deployment, kolla will generate the
"controller_ip_port_list" option in the [health_manager] section with
ONLY the IP of that node instead of a list of controller IPs.
Therefore, the "amphora-agent.conf" file of an amphora instance will
contain the IP of ONLY ONE controller node.
If that node fails, the amphora agent won't send heartbeat
messages to the other health manager nodes, and the load balancer will
go to ERROR state.
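The intended rendering, with one entry per health manager node
(addresses are illustrative; 5555 is octavia's default heartbeat port):

    [health_manager]
    controller_ip_port_list = 192.0.2.10:5555,192.0.2.11:5555,192.0.2.12:5555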
Change-Id: I102ed6ba3fff2c12cc6d37f81ad59508eacc859c
Co-Authored-By: Hieu LE <hieulq2@viettel.com.vn>
For now, we use the api interface/network for Octavia.
This change makes Octavia deployment with Kolla more flexible
when we want to use another network for managing
amphora instances (config, health checks, cleanup).
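For example, in globals.yml (the variable name follows kolla naming
conventions and is an assumption here; the interface is illustrative):

    octavia_network_interface: "eth2"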
Change-Id: Ief12f1f8b6c7d3974932e6320af95bb58d46bdb9
Co-Authored-By: Duc Nguyen Cong <ducnc7@viettel.com.vn>
Closes-Bug: #1791207
Option auth_uri from the keystone_authtoken group is deprecated [1].
Use option www_authenticate_uri from the keystone_authtoken group
instead.
[1]https://review.openstack.org/#/c/508522/
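The resulting change in the service configs (the Jinja value is
illustrative; the option rename is the substance of this change):

    [keystone_authtoken]
    www_authenticate_uri = {{ keystone_internal_url }}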
Co-Authored-By: confi-surya <singh.surya64mnnit@gmail.com>
Change-Id: Ifd8527d404f1df807ae8196eac2b3849911ddc26
Closes-Bug: #1761907
This commit separates the messaging rpc and notify transports in order
to support separate and different oslo.messaging backends
This patch:
* adds rpc and notify variables
* updates service role conf templates
* adds an example to globals.yml
* adds a release note
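A hedged sketch of how the split appears in a service config (the
variable names follow kolla conventions and are assumptions here):

    [DEFAULT]
    transport_url = {{ rpc_transport_url }}

    [oslo_messaging_notifications]
    transport_url = {{ notify_transport_url }}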
Implements: blueprint hybrid-messaging
Change-Id: I34691c2895c8563f1f322f0850ecff98d11b5185
Option 'bind_host' from group 'DEFAULT' is deprecated for removal [0];
use option 'bind_host' from group 'api_settings' instead. The same
applies to the bind_port option.
The default value of api_handler is queue_producer and we did not
configure it, so the api_handler option is deleted.
[0]https://github.com/openstack/octavia/blob/master/octavia/common/config.py#L45
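The resulting octavia.conf layout (the Jinja values are the usual kolla
ones and are assumptions here):

    [api_settings]
    bind_host = {{ api_interface_address }}
    bind_port = {{ octavia_api_port }}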
Change-Id: I4e9c1d40bcb497f147ea38d4f3c6d78c181fa20b
Closes-Bug: #1717190