... so we can know how long resource configuration takes in Puppet
catalogs, and more easily debug why we have timeouts.
Change-Id: If3fae8837140caae91120e46b4880146ffe22afc
Bind mount the /etc/iscsi host path for the iscsi container puppet config.
Use the real host path /etc/iscsi for containers depending on it.
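As a rough sketch of the kind of volume entry this implies in the affected
docker service templates (the container name and surrounding keys are
illustrative assumptions, not the exact templates):

    iscsid:
      volumes:
        # host path : container path -- use the real /etc/iscsi from the host
        - /etc/iscsi:/etc/iscsi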
Closes-bug: #1735425
Change-Id: I838427ccae06cfe1be72939c4bcc2978f7dc36a8
Depends-on: I7e9f0641164691682516ac3e72e2145c7d112409
Co-authored-by: Alan Bishop <abishop@redhat.com>
Co-authored-by: Martin André <m.andre@redhat.com>
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
scenario001 is timing out a lot while scenario007 is fine and far from
the timeout limit, so move some services out.
Change-Id: Id34321f95a0584cbc9f6e40f3cd47ed0386cfc9d
The job times out too often; some services are already covered by
other scenarios, so there is no need to duplicate testing.
Change-Id: I30092400142af5c3308534a8da9daa22cbb82bad
Depends-On: I2a4aa707fa10664f1fc9026e3eb417f35834436f
Background:
The extraconfig/pre_deploy/rhel-registration interface has been maintained
for some time now, but it's missing some features and the code overlaps
with ongoing efforts to convert everything to Ansible.
Plan:
Consume ansible-role-redhat-subscription from TripleO, so all the logic
goes into the Ansible role and not into TripleO anymore.
The single parameter exposed to TripleO is RhsmVars, and any Ansible
variable can be given to make the role work.
The parameter can be overridden per role, so we can handle specific
cases where some Director roles would have specific RHSM configs
(an example is sketched below).
Once we have feature parity between what is done and what was here
before, we'll deprecate the old interface.
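For example, a deployment could pass Ansible variables through RhsmVars in
an environment file and override them for a single role via that role's
<Role>Parameters block; the variable names below are illustrative
assumptions, not a definitive list:

    parameter_defaults:
      RhsmVars:
        rhsm_username: myuser        # illustrative values only
        rhsm_password: mypassword
        rhsm_method: portal
      # per-role override, e.g. only for the Compute role
      ComputeParameters:
        RhsmVars:
          rhsm_method: satellite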
Testing:
Because RHSM can't be tested on CentOS, this code was manually tested on
RHEL against the public subscription portal. Also, we verified that
generated Ansible playbooks were correct and called the role with the
right parameters.
Documentation:
We'll work on documentation over the following weeks and explain
how to switch from the previous interface to the new one, and also
document new use cases requested by our users.
Change-Id: I8610e4f1f8478f2dcbe3afc319981df914ce1780
These options specify the minimum and maximum poll intervals
for NTP messages, expressed as powers of two in seconds.
The maximum poll interval defaults to 10 (1,024 s), but can be
increased by the MaxPoll option to an upper limit of 17 (36.4 h).
The minimum poll interval defaults to 6 (64 s), but can be decreased
by the MinPoll option to a lower limit of 4 (16 s).
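As an illustration, assuming the options are exposed as Heat parameters of
the same names, an environment file could set them like this (the values
shown are the defaults mentioned above):

    parameter_defaults:
      # poll interval exponents: interval = 2^value seconds
      MinPoll: 6     # 2^6  = 64 s
      MaxPoll: 10    # 2^10 = 1,024 s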
Change-Id: Ib2929be86e8cb31c00d166abe750354188302415
Closes-bug: #1736170
This patch fixes the following error when running under Python 3:
TypeError: can only concatenate list (not "dict_keys") to list
Change-Id: Ic487bf4c4f6cb2bc35011416056bef1417a23076
The method "generate_environments" in environment_generator.py
takes two arguments, but only one is given in "test_environment_generator.py".
Change-Id: I39abcf2340ce04f3d193d80c8af177027c512556
With the move to containers, Ceph OSDs may be combined with other
Ceph services, and dedicated Ceph monitors on controllers will be
used less. Popular Ceph roles that include OSDs are Ceph file,
Ceph object, and nodes that can run all Ceph services. This pattern
will also apply to HCI roles. This change adds the following
pre-composed roles to make it easier for users to adopt these
patterns (a sketch of one such role definition follows the list):
- CephAll: Standalone Storage Full Role (OSD + MON + RGW + MDS + MGR + RBD Mirroring)
- CephFile: Standalone Scale-out File Role (OSD + MDS)
- CephObject: Standalone Scale-out Object Role (OSD + RGW)
- HciCephAll: HCI Full Stack Role (OSD + MON + Nova + RGW + MDS + MGR + RBD Mirroring)
- HciCephFile: HCI Scale-out File Role (OSD + Nova + MDS)
- HciCephObject: HCI Scale-out Object Role (OSD + Nova + RGW)
- HciCephMon: HCI Scale-out Block Full Role (OSD + MON + MGR + Nova)
- ControllerNoCeph: OpenStack Controller without any Ceph Services
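As a rough sketch of what one of these pre-composed roles looks like in
roles_data terms (the service list is abridged and illustrative, not the
complete definition):

    - name: CephAll
      description: |
        Standalone storage role running all Ceph services
      ServicesDefault:
        - OS::TripleO::Services::CephMon
        - OS::TripleO::Services::CephMgr
        - OS::TripleO::Services::CephOSD
        - OS::TripleO::Services::CephRgw
        - OS::TripleO::Services::CephMds
        - OS::TripleO::Services::CephRbdMirror
        # ... plus the usual base services (ntp, sshd, timezone, etc.)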
Change-Id: Idce7aa04753eadb459124d6095efd1fe2cc95c17
CI is very unstable now but we need to merge some patches
so we can get promotion and hopefully stabilize CI.
Change-Id: Iffbb2da53221efe6f014f245316c66913ff8c648
This patch exposes puppet_tripleo's docker_options
in the tripleo-heat-templates.
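A minimal sketch of setting it from an environment file; the Heat parameter
name DockerOptions and the value are assumptions for illustration:

    parameter_defaults:
      # assumption: the exposed Heat parameter is named DockerOptions
      DockerOptions: '--log-driver=journald --iptables=false --live-restore'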
Change-Id: I1b48b2a25dfa5afc3d2e4e4c8f0593e03ead3907
Closes-bug: #1715134
The barbican keystone listener has been added to the same pod
as the barbican api, so we need to set some barbican config to
enable it. Also set a specific topic for barbican_notifications
so that we do not compete with other services.
Change-Id: I5f7e4d2367b9776a1b7e74d1727472e1f81f509a
This patch adds the ability to configure DVR in
networking-ovn setups.
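A minimal sketch of turning this on from an environment file, assuming the
existing NeutronEnableDVR parameter is what toggles DVR for networking-ovn:

    parameter_defaults:
      # assumption: DVR for networking-ovn is toggled via NeutronEnableDVR
      NeutronEnableDVR: true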
Depends-On: I565a5b9918eaf9df1d315c653f76dc4136953ca9
Change-Id: I14d3411f62b411010ea4bd270746436fe3e3cd3a
Signed-off-by: Daniel Alvarez <dalvarez@redhat.com>
There are still some templates with the wrong
alias name. This patch updates them with the
correct version.
Change-Id: I43549ac98f3736029d4aaad1ead745caf40f9299
We weren't creating the default flavors for the undercloud. Do it here!
Co-Authored-By: Martin André <m.andre@redhat.com>
Change-Id: Ic0b00ab42422e8d7f1ddd750d993c7919af0823e
Set NetworkPluginIpv6Enabled if IPv6 networks
have been enabled. Currently this parameter and
NetworkPluginIPv4Enabled are mutually exclusive, so
set the latter to false as well. The default is IPv4,
with NetworkPluginIPv4Enabled.
Depends-On: Ic7e5b5351e429755ba48613ab89d1b7e7d6e2d34
Change-Id: Ia895d7190f0fb8e97c87b3178461d9fc26393b9b
We need to wait for the rabbitmq_ready exec so that rabbit is fully
up. This can only happen if we add the tag for it.
We also need to make sure that launching the epmd process cannot
happen. The reason for this is the following:
when the puppet-rabbitmq module gets invoked (a simple facter run
is sufficient) inside the rabbitmq_init_bundle container, it spawns
an epmd process.
Now if we wait for Exec[rabbitmq-ready], this epmd process stays
around until rabbit is up, but then disappears suddenly when the
rabbitmq_init_bundle container exits, which subsequently confuses
the rabbitmq cluster and makes it fail.
Partial-Bug: #1739026
Co-Authored-By: Damien Ciabrini <dciabrin@redhat.com>
Co-Authored-By: John Eckersberg <jeckersb@redhat.com>
Change-Id: Ie74a13a6c8181948900ea0de8ee9717f76f3ce79
A previous (failed/hanging?) yum process blocks 'yum makecache'
and 'yum check-update' operations, which leads to a timeout during
minor updates.
Change-Id: I461c1c722944813493f53f339054f420d6ddbe15
Related-Bug: #1704131