In ansible/roles/iscsi/tasks/pull.yml, there are references to
'iscsi', which should be 'iscsid' instead. This patchset
fixes this typo.
Change-Id: Id2c31bf69556ec8dcf66cc1d32d2bfe77f02367b
Closes-Bug: #1602566
Add the following prechecks for network_interface (a sketch follows
the list):
* Check it exists on the node
* Check it is up
* Check it has an IP address associated
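A rough sketch of what these prechecks can look like (task wording and
the exact fact lookups are illustrative, not necessarily the merged
tasks):

  # Sketch only - the merged prechecks may differ in detail,
  # e.g. in how dashes in interface names are normalized.
  - name: Checking that network_interface exists on the node
    fail:
      msg: "Interface {{ network_interface }} not found on {{ inventory_hostname }}"
    when: network_interface not in ansible_interfaces

  - name: Checking that network_interface is up
    fail:
      msg: "Interface {{ network_interface }} is not active"
    when: not hostvars[inventory_hostname]['ansible_' ~ network_interface]['active']

  - name: Checking that network_interface has an IP address
    fail:
      msg: "Interface {{ network_interface }} has no IPv4 address configured"
    when: "'ipv4' not in hostvars[inventory_hostname]['ansible_' ~ network_interface]"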
TrivialFix
Change-Id: I86f1d79d8592a3b108822e7d19541f91a1c0d716
Co-Authored-By: James McCarthy <james.m.mccarthy@oracle.com>
The notification driver should be configured to avoid timeout failures
of murano app deployments while waiting for notifications that will
never be sent.
The required driver is "messagingv2".
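For reference, the fragment this produces in murano.conf looks roughly
like the following (the section and option names come from
oslo.messaging; the exact template change may differ):

  [oslo_messaging_notifications]
  driver = messagingv2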
TrivialFix
Change-Id: Id0c753f50d93c81eedb2455a7323d86c08873c5f
Migrate to full variable syntax in with_ loops
instead of bare variables (see the example after the list) for:
- cinder
- haproxy
- ironic
- magnum
- mistral
- mongodb
- murano
- swift
- watcher
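A generic before/after illustration of the syntax change (task and
variable names are placeholders, not the exact tasks touched):

  # before: bare variable name, which Ansible is deprecating
  - name: Showing the items of a service list
    debug:
      msg: "{{ item }}"
    with_items: service_containers

  # after: full Jinja2 variable syntax
  - name: Showing the items of a service list
    debug:
      msg: "{{ item }}"
    with_items: "{{ service_containers }}"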
TrivialFix
Change-Id: I3ef2e79053cf609aaa710e43ffd0adbc5a97565b
This PS switches to using the orchestration_engine variable to
differentiate between Ansible and Kubernetes when generating configs.
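The pattern is roughly a guard on orchestration_engine in the affected
templates; the engine values and the option shown below are only
illustrative:

  {# inside a service config template - illustrative only #}
  {% if orchestration_engine == 'KUBERNETES' %}
  bind_host = 0.0.0.0
  {% else %}
  bind_host = {{ api_interface_address }}
  {% endif %}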
TrivialFix
Change-Id: I8e566a9995f49e924614331458d0c81b9925e543
The keystone_*_url variables are cross-role variables used in
multiple roles. Move them from the common role to the group vars.
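A sketch of the resulting group_vars entries (the host and port
variables below follow the usual Kolla defaults; the exact values may
differ):

  # ansible/group_vars/all.yml (sketch)
  keystone_admin_url: "{{ admin_protocol }}://{{ kolla_internal_fqdn }}:{{ keystone_admin_port }}/v3"
  keystone_internal_url: "{{ internal_protocol }}://{{ kolla_internal_fqdn }}:{{ keystone_public_port }}/v3"
  keystone_public_url: "{{ public_protocol }}://{{ kolla_external_fqdn }}:{{ keystone_public_port }}/v3"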
TrivialFix
Change-Id: If451823ed7612bfec7bc797ec9dd2597164c6804
When ironic is deployed using kolla, the generated ironic.conf file
does not contain the enabled_drivers configuration option.
Change-Id: I5c9e7533e8ca139addee8cf4cc4084e856ae0306
Closes-Bug: 1610272
When setting multiple memcached servers, the value should be a list
rather than a comma-joined string.
Patch set I586ce1c6c3300254c4e2a398ff46645df576aeb0 set it
incorrectly.
TrivialFix
Change-Id: Ic612658ab0310c6764310bbca92c925da6d47f6c
Note: This should not result in any behavior changes in regular Kolla,
just Kolla-Kubernetes and only when you've overridden stuff in globals.yml
Allows override of interface address and memcached pools, so that
Kubernetes can do the right thing.
There are some significant architectural issues involved in
memcached pooling in the Kolla-Kubernetes world. We are avoiding them
for now.
Current working Kolla-Kubernetes globals.yml file, assuming that your
memcached servers are available under the DNS alias "memcached":
api_interface_address: "0.0.0.0"
memcached_servers: "memcached"
keystone_database_address: "mariadb"
keystone_admin_url: "{{ admin_protocol }}://keystone-admin:{{ keystone_admin_port }}/v3"
keystone_internal_url: "{{ internal_protocol }}://keystone-public:{{ keystone_public_port }}/v3"
keystone_public_url: "{{ public_protocol }}://keystone-public:{{ keystone_public_port }}/v3"
Co-authored-by: Ryan Hallisey <rhallise@redhat.com>
Change-Id: I5126f81da7b4d48001b87f73d58bbbfad658209c
Partially-implements: blueprint api-interface-bind-address-override
Note: This should not result in any behavior changes in regular Kolla, just Kolla-Kubernetes and only when you've overridden stuff in globals.yml
Binds to the api_interface_address variable and uses the keystone and memcached facts we defined in earlier patches.
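Illustratively, the affected templates bind like this (the option name
is generic, not that of a specific service):

  {# sketch of the bind pattern in a service config template #}
  bind_host = {{ api_interface_address }}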
Co-authored-by: Ryan Hallisey <rhallise@redhat.com>
Change-Id: I8610f4adaa557a21fedd05601e10f5c308fd7ce3
Partially-implements: blueprint api-interface-bind-address-override
enable_rabbitmq_cluster is now "yes" by default, but you can set it
to "no" if you want to disable clustering.
The agreement made at OpenStack in Austin was that Kolla-Kubernetes
would concentrate on RabbitMQ and MariaDB without clustering but
with persistent storage and workload migration, then examine how to
do proper distributed functionality as the project progresses, so I
am just following what we'd already agreed upon.
First, it helps us deal with issues of version upgrades without
dealing with clustered version upgrades and the synchronization
thereof.
Second, it provides an alternative model for durability when used in
Kubernetes. Understand that, if we disable RabbitMQ's clustering,
Kubernetes is still able to re-schedule the queue off of a failed node
in ways that Kolla-Ansible is not. There are known issues with
RabbitMQ clustering, especially with auto-heal turned on. For many
small-to-mid-sized clusters, it's going to provide for a better
operator experience to have the known potential for a 30 second blip
after RabbitMQ node failure than it is to have the known potential
for partition and data loss and/or manual operations after you've
turned off auto-heal.
Kolla-kubernetes has already turned off host networking for the
RabbitMQ pod; it's safe to set the interface address in the
Kubernetes context.
The question was asked why I don't just set the RabbitMQ cluster to be
a single instance. It's unlikely that Kubernetes RabbitMQ with a
PetSet will be clustered in the same declarative fashion as the
rabbitmq-clusterer plugin. It is easier to just disable it and worry about
how to configure the kube-friendly clustered RabbitMQ at a later point
in time. Furthermore, it's an entirely valid case for many OpenStack
control planes hosted atop Kolla-Kubernetes to accept the possibility
of a 30-60 second blip in lieu of the long and questionable history
of RabbitMQ clustering in production.
Co-authored-by: Ryan Hallisey <rhallise@redhat.com>
Change-Id: I7f0cb22d29a418fce4af8d69f63739859173d746
Partially-implements: blueprint api-interface-bind-address-override
The reason for introducing this script is to be able
to launch ovsdb-server and initialize it (create the external bridge and
plug the external interface) in one shot. It is applicable ONLY to the
Kubernetes environment and is required for Kubernetes DaemonSet usage.
The behavior in classical Kolla has not been changed.
TrivialFix
Change-Id: I54897cc2c0f2bcaaf0411822f3409bf96e92833d
Leverage the browser cache and compression to speed up file transfers.
In the RHEL-based image, the expires and deflate modules are enabled by
default. In the Debian-based image, only deflate is enabled.
* Enable the expires module in the Debian-based image
* Enable expiration headers for the asset resources
* Enable deflate compression for the HTTP responses
Closes-Bug: #1605907
Change-Id: If25decc38a10a21929f72a89cdb350d4ac64a5a9
Kolla's default heka pipeline may not satisfy users' requirements,
for example sending OpenStack logs to an external log analysis system.
So it is necessary to support custom heka configuration for defining
the user's encoders, outputs and other plugins.
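A sketch of how such custom plugin configs could be picked up (the
node_custom_config location, glob and task wording are assumptions):

  - name: Copying over custom heka plugin configs
    template:
      src: "{{ item }}"
      dest: "{{ node_config_directory }}/heka/"
    with_fileglob:
      - "{{ node_custom_config }}/heka/*.toml"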
Change-Id: I48bd8d7e0afbc2d023c49c83041f87a04970bbb6
Closes-Bug: #1611164
Ansible's template action supports replacing horizon's default config
with a custom config; we only need to add the with_first_found lookup
to config.yml to support this.
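Sketched, the pattern in config.yml looks like this (the custom-config
path is an assumption, not necessarily the one used):

  - name: Copying over horizon local_settings
    template:
      src: "{{ item }}"
      dest: "{{ node_config_directory }}/horizon/local_settings"
    with_first_found:
      - "{{ node_custom_config }}/horizon/local_settings"
      - "local_settings.j2"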
Change-Id: I45b8eed5b8d6c4d42672d99e41bc4eff7023a26f
Closes-Bug: #1570677
The cleanup command in the external API is a misnomer and should
be called destroy.
Change-Id: I083e80699e09bb24266ce1bf549772a5de92a49e
Closes-Bug: 1610364
In the case of a single node environment without haproxy, the var
"kolla_internal_vip_address" in globals.yml should be the IP address
of the host. However, the prechecks will fail, because this IP
address is already in use by the host node and is pingable.
This commit fixes the VIP address prechecks accordingly:
when the var "enable_haproxy" is "no", the prechecks for the
VIP address are skipped.
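Roughly, the guard looks like this (the ping-based check is a
simplified sketch of the existing precheck, not its exact form):

  - name: Checking if kolla_internal_vip_address is already in use
    command: ping -c 3 {{ kolla_internal_vip_address }}
    register: ping_output
    changed_when: false
    # the address must NOT answer before deployment, so rc 1 (no reply) is the good case
    failed_when: ping_output.rc != 1
    when: enable_haproxy | bool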
Change-Id: I0b752f179d20f82e3d6331047ee0bd802ab99a4b
Closes-Bug: #1570935