The zuul_site_mirror_fqdn environment variable may not be set; in
that case the whole job fails, instead of simply not configuring
mirrors during the image build. With this patch, if the set_fact
task fails, mirrors simply will not be configured during the image
build, as intended by lines 62 and 88 of this playbook.
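A minimal sketch of the intended pattern (task and file names here
are illustrative, not the playbook's actual contents):

    - name: Record the site mirror FQDN when it is available
      set_fact:
        mirror_fqdn: "{{ zuul_site_mirror_fqdn }}"
      ignore_errors: true

    - name: Configure mirrors during the image build
      include_tasks: configure-mirrors.yaml
      when: mirror_fqdn is defined

If zuul_site_mirror_fqdn is undefined, the first task fails but the
play continues, and the guarded mirror tasks are simply skipped.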
Change-Id: I049c696c7fb0d7cadb527a9f17dd01a42a671baa
Occasionally the default config can result in attempts to bind to
IPv6 which fail, so we explicitly set the host to IPv4.
Change-Id: I3c01ed0ef7c84cf779d88386c14f7c7bd2003310
Signed-off-by: Pete Birley <pete@port.direct>
This PS updates the start script to use `config set`, rather
than `config-key set`, which has been deprecated in Mimic.
Change-Id: I97d0c4385b016d73aa362c0fc293d235b532810c
Signed-off-by: Pete Birley <pete@port.direct>
This removes the tests that query the Grafana API for checking
whether the prometheus datasource has been provisioned and for
checking the number of active dashboards against the number of
expected dashboards determined via the chart's values.yaml.
The reason for removing these is that Grafana can be configured
to use data source types beyond just Prometheus and additional
dashboards can be added to Grafana via the Grafana UI. In cases
where dashboards are added via the Grafana UI, they are persisted
in the Grafana database, which causes helm test failures during
upgrade scenarios. Now that we have Selenium tests executed as
part of the Grafana helm tests that validate Grafana is
functional, these API tests add little value.
Change-Id: I9f20ca28e9c840fb3f4fa0707a43c9419fafa2c1
Signed-off-by: Steve Wilkerson <sw5822@att.com>
This disables the Grafana analytics settings that check
grafana.com for plugin/dashboard updates every 10 minutes and that
send anonymous usage statistics.
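For reference, the relevant settings live under Grafana's analytics
section; a hedged sketch of the override (the values.yaml key layout
is an assumption, not taken from the chart):

    conf:
      grafana:
        analytics:
          check_for_updates: false
          reporting_enabled: false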
Change-Id: I0f5283a8a54b563199528bb612aa0cdc6cf238e2
Signed-off-by: Steve Wilkerson <sw5822@att.com>
This PS adds several fixes to the Selenium tests (for Kibana) and
adds a role which allows collecting the results.
Change-Id: If9fb5f50e395379fdd3ccc46e945a93606dcbabe
This updates the Fluentd ClusterRole to allow getting namespaces,
as this is required for the fluentd kubernetes plugin to function
correctly.
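Roughly the shape of the rule involved (the resource and verb lists
are illustrative; only the namespaces access is what this change
adds):

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: fluentd
    rules:
      - apiGroups: [""]
        resources:
          - namespaces
          - pods
        verbs:
          - get
          - list
          - watch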
Change-Id: Id9d735310c53a922a62c6a82121edd332e7df724
Signed-off-by: Steve Wilkerson <sw5822@att.com>
This fixes the whitespace chomps for adding extra volumes and
volume mounts via values.yaml for the Fluentd chart, as currently
too much whitespace is removed and the extra volumes and mounts
are not added correctly.
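As a generic illustration of the failure mode (not the chart's
actual template, and the values key path is assumed): a trailing
chomp swallows the newline that should separate the rendered items
from the surrounding YAML, while the unchomped form keeps each
extra mount as its own list item:

    # over-eager chomping glues the rendered mounts onto the line above
    volumeMounts:
    {{- toYaml .Values.pod.mounts.fluentd.volumeMounts | indent 8 -}}

    # keeping the surrounding newlines renders the list correctly
    volumeMounts:
    {{ toYaml .Values.pod.mounts.fluentd.volumeMounts | indent 8 }}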
Change-Id: I9cf67c3321339078ac795a7290f441b16cc41d41
Signed-off-by: Steve Wilkerson <sw5822@att.com>
This updates the helm version from 2.13.1 to 2.14.1
Change-Id: I619351d846253bf17caa922ad7f7b0ff19c778a2
Signed-off-by: Steve Wilkerson <sw5822@att.com>
This PS stores the applied helm values for releases in the gate.
Change-Id: I6563104ded6631b63d9fced775b9b9dba7fd00ef
Signed-off-by: Pete Birley <pete@port.direct>
This adds a conditional check on the deployment type of the
Fluentd chart to determine whether to enable the current liveness
and readiness probes or not. The current probes are designed
around using fluentd as an aggregator and do not function properly
when fluentd is deployed as a daemonset. When run as a daemonset
and configured to tail files via the tail input plugin, fluentd
will prioritize reading the entirety of those files before
processing other input types, including opening the forward source
socket required for the current probes to function correctly. This
results in scenarios where the current probes will fail when in
fact fluentd is functioning correctly.
Daemonset-focused probes will come as a follow-on once a proper
path forward has been determined.
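A hedged sketch of the conditional (the deployment.type key and the
use of the conventional forward port 24224 are assumptions about
the chart, not taken from it):

    {{- if eq .Values.deployment.type "Deployment" }}
          readinessProbe:
            tcpSocket:
              port: 24224
          livenessProbe:
            tcpSocket:
              port: 24224
    {{- end }}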
Change-Id: I8a164bd47ce1950e0bd6c5043713f4cde9f85d79
Signed-off-by: Steve Wilkerson <sw5822@att.com>
The root_conf area is used for host-specific configuration but is
overwritten in each round of the loop, so all hosts end up sharing
the same properties. This change makes each host use its own area
in the loop.
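A generic Ansible-style illustration of the pattern (variable names
are illustrative, not the actual code): writing into one shared
fact on every iteration leaves only the last host's values, whereas
keying the fact by host keeps each host's own area:

    # before: the same fact is overwritten on every pass of the loop
    - set_fact:
        root_conf: "{{ item.conf }}"
      loop: "{{ host_list }}"

    # after: each host gets its own entry
    - set_fact:
        root_conf: "{{ root_conf | default({}) | combine({item.name: item.conf}) }}"
      loop: "{{ host_list }}"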
Task: 34282
Story: 2005936
Change-Id: I0afb0b32ab80456aa3439b4221f2a95ca05ddf24
This PS updates the ingress controller configmap to be valid with
k8s schema validation turned on.
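For illustration of the constraint being satisfied (the exact keys
touched here may differ): with schema validation on, ConfigMap data
values must be strings, so bare booleans and numbers need quoting:

    data:
      enable-underscores-in-headers: "true"
      proxy-body-size: "0"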
Change-Id: Ibbc82be62398ee63eb353aa58f1ebdf98e66b30d
Signed-off-by: Pete Birley <pete@port.direct>
This PS introduces a simpler way to incorporate overrides into gate
runs, and also ensures that they are scoped to a single chart, rather
than all of the charts deployed within a gate run.
Change-Id: Iba80f645f33c6d5847fbbb28ce66ee3d23e4fce8
Signed-off-by: Pete Birley <pete@port.direct>
This PS allows using tmpfs for etcd during the gates.
The assumption is that this will improve performance and
help get rid of some odd issues.
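A hedged sketch of one way to do this (the path, size, and use of an
Ansible mount task are assumptions, not the actual change):

    - name: Mount a tmpfs for the etcd data directory
      mount:
        path: /var/lib/etcd
        src: tmpfs
        fstype: tmpfs
        opts: size=1g
        state: mounted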
Change-Id: Id68645b6535c9b1d87c133431b7cd6eb50fb030e
This removes the old fluent-logging chart from network
policy and replaces it with the new fluentbit and fluentd
charts. This returns the network policy gate to passing.
Change-Id: I060c6c3034fa798a131a053b9d496e5d8781c55d
This removes the readOnly flag from the /var/log mount for the
fluentd pod to allow use of the file buffer mechanism when
desired.
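For reference, a hedged fragment of the mount in question (the
volume name is illustrative):

    # before
    - name: varlog
      mountPath: /var/log
      readOnly: true
    # after: readOnly dropped so the file buffer can write under /var/log
    - name: varlog
      mountPath: /var/log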
Change-Id: I23f0f03824eec5b142d3f2e8e42e8d07cddfe618
Signed-off-by: Steve Wilkerson <sw5822@att.com>
This change allows Open vSwitch to interact with an SDN controller
(e.g., ONOS, ODL) through port 6640.
Story: 2005763
Task: 33473
Change-Id: Ifcbb6a157c230fa729d295ef0d3fb9a16fff60a2
The openstack-helm-infra job uses a role named "start-zuul-console"
that is part of another project, zuul/zuul-jobs. If this job is
ever used by another project as a parent job, it would fail because
the role would not be found in any of the default paths. This patch
adds the roles from the zuul/zuul-jobs project to the job that uses
them.
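Roughly what this looks like in the Zuul job definition (sketch
only; the job's other attributes are omitted):

    - job:
        name: openstack-helm-infra
        roles:
          - zuul: zuul/zuul-jobs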
Change-Id: Ib3b7e0e43008b7a4f394b49b75529bfde9780d2f
- When using the TLS certificate generation macro, optionally
support base64-encoding the values for direct inclusion in a
Kubernetes secret, as sketched below. The default is to maintain
current behavior for backward compatibility.
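A generic illustration of the option (the flag name and the
surrounding macro are hypothetical, not the macro's actual
signature):

    {{- if $encode }}
    tls.crt: {{ $cert | b64enc }}
    tls.key: {{ $key | b64enc }}
    {{- else }}
    tls.crt: {{ $cert }}
    tls.key: {{ $key }}
    {{- end }}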
Change-Id: Ib62af4e5738cbc853a18e0d2a14c6103784e7370
This patch set implements pool quotas on each pool in the Ceph
cluster by obtaining the total capacity of the cluster in bytes,
multiplying that by the defined percentage of total data expected
to reside in each pool and by the cluster quota, and setting a
byte quota on each pool that is equal to its expected percentage
of the total cluster quota.
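A hedged sketch of the inputs and the arithmetic (the values.yaml
key names are assumptions, not taken from the chart):

    conf:
      pool:
        target:
          quota: 100           # percent of raw cluster capacity usable
        spec:
          - name: rbd
            percent_total_data: 40
    # e.g. for a 1000 GiB cluster: 1000 GiB * 100% * 40% = 400 GiB,
    # applied to the rbd pool as a max_bytes quota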
Change-Id: I1686822a74c984e99e9347f55b98219c47decec1
This adds the ability for the ceph-osd osd-directory.sh script to
handle existing deployments that place data in hosts via CRUSH and
modify those deployments to place data in racks instead. The
existing data remains intact but is redistributed across the new
rack-level failure domains by updating the CRUSH map and assigning
new rules to existing pools.
Change-Id: Ida79f876d0cae3d99e796e4de1aac55a7978986c