The default control plane subnet name is "ctlplane-subnet", so let's
create the right subnet for the containerized undercloud.
Note: the subnet name can't be overridden (yet), so for now we rely
on the default.
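For illustration only, a minimal Heat sketch of such a subnet (the
CIDR and properties here are assumed, not taken from this change):

  resources:
    UndercloudCtlplaneSubnet:
      type: OS::Neutron::Subnet
      properties:
        name: ctlplane-subnet  # the default name we rely on
        network: ctlplane
        cidr: 192.168.24.0/24  # assumed example CIDR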
Change-Id: I15954bced81ef6c3e1a1f4a73bc989f33d08d6f7
Some work is being done in I46fce28926cb5a881f7384948480266712ae75e3
to secure SNMP on a specific network, but until then we need to stop
opening these service ports so cloud providers won't report security
issues for TripleO jobs.
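For context, the kind of rule that opens SNMP in a service template
looks roughly like this (rule label and port shown as commonly used,
treat as illustrative); the point of this change is to stop applying
such a rule by default:

  config_settings:
    tripleo.snmp.firewall_rules:
      '127 snmp':
        dport: 161  # standard SNMP port
        proto: udp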
Change-Id: Icd8a6ddda6152186d6be4a227f6449232fecba5e
Related-Bug: #1749324
Currently when deploying with TLS for internal API traffic, Neutron is
not configured to securely communicate with OVSDB. In regular OVS agent
deployments OVS listens on ptcp and accepts any incoming connection. In
ODL deployments OVS is configured to only listen for pssl connections.
To allow Neutron agents to communicate with OVSDB over pssl, Neutron
needs to be configured with an SSL key/certificate in order to
connect to OVS.
This patch adds key/certificate generation for NeutronBase service to be
consumed by any agent. The only agent required with ODL is DHCP, so
this patch only addresses configuring SSL there. However, a future
patch could enable SSL for default ML2/OVS agent deployments as well by
building off of this change.
Note that by default OVSDB listens on port 6640. This does not work
in ODL deployments when ODL is on the control node, because ODL also
listens on port 6640. Therefore, from the ODL service, the
ovsdb_connection setting for the DHCP agent is modified when ODL is
deployed.
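For illustration only, the resulting DHCP agent settings are along
these lines, shown as a YAML sketch of assumed hiera keys and paths
(the actual key names and file locations may differ):

  neutron::agents::dhcp::ovsdb_connection: ssl:127.0.0.1:6639
  # 6639 assumed to avoid clashing with ODL on 6640
  neutron::agents::dhcp::ssl_key_file: /etc/pki/tls/private/neutron.key
  neutron::agents::dhcp::ssl_cert_file: /etc/pki/tls/certs/neutron.crt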
Depends-On: I82281eefa1aa81207ccd8ea565cffc6ca0ec48de
Depends-On: I4bbaf00f0776cab0be34d814a541fb2fd1e64326
Closes-Bug: 1746762
Change-Id: I97352027d7f750d0820610fb9e06f82b47e77056
Signed-off-by: Tim Rozet <trozet@redhat.com>
This change allows FASTFORWARDUPGRADE to be fed to puppet-tripleo,
allowing manifests to act accordingly when applied during FFU.
Change-Id: I8792937c2524c31becfb8a9f28047b73617c0fc3
This change converts the existing NIC templates to jinja2 in
order to dynamically render the ports and networks according
to the network_data.yaml. If networks are added to the
network_data.yaml file, parameters will be added to all
NIC templates. The YAML files (as output from jinja with
the default network_data.yaml) are present as an example.
The roles in roles_data.yaml are used to produce NIC configs
for the standard and custom composable roles. In order to
keep the ordering of NICs the same in the multiple-nics
templates, the order of networks was changed in the
network_data.yaml file. This is reflected in the network
templates, and in some of the files that is the only
change.
The roles and roles_data.yaml were modified to include
a legacy name for the NIC config templates for the
built-in Controller, Compute, Object Storage,
Block Storage, Ceph Storage, Compute-DPDK, and
Networker roles. There will now be a file produced
with the legacy name, but also one produced in the
<role>-role.j2.yaml format (along with environment
files to help use the new filenames).
Note this change also fixes some typos, as well as
a number of templates that had VLANs with device:
entries, which were ignored.
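As a rough sketch of the jinja2 pattern introduced here, each NIC
template's parameters are rendered per network from network_data.yaml
(attribute names follow the default network_data.yaml):

  parameters:
  {%- for network in networks %}
    {{network.name}}IpSubnet:
      default: ''
      description: IP address/subnet on the {{network.name_lower}} network
      type: string
  {%- endfor %}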
Closes-Bug: 1737041
Depends-On: I49c0245c36de3103671080fd1c8cfb3432856f35
Change-Id: I3bdb7d00dab5a023dd8b9c94c0f89f84357ae7a4
This patch enables health check execution for all Barbican docker
containers.
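Sketch of the docker_config healthcheck entry involved (step number
and image parameter name assumed):

  docker_config:
    step_4:
      barbican_api:
        image: {get_param: DockerBarbicanApiImage}  # param name assumed
        healthcheck:
          test: /openstack/healthcheck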
Change-Id: I2e542fa0adb52447abb251910f3ff1095289c726
Depends-On: Ic0573f6dfe550dd7f5d6bc579b3b44a60d4bf1fc
This makes it clearer that the previous task failed, which isn't
immediately evident from the ansible task output due to the failed_when
on those tasks.
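For illustration, the pattern is roughly the following (task and
variable names are made up for the example):

  - name: Run upgrade step
    shell: /usr/local/bin/upgrade_step.sh  # example script path
    register: step_result
    failed_when: false

  - name: Abort if the previous task failed
    fail:
      msg: "Upgrade step failed: {{ step_result.stderr }}"
    when: step_result.rc != 0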
Change-Id: I765208d5865f6e5a292e5b52c572e2e79540c663
Closes-Bug: #1748443
When copying templates or files with
process-templates.py's shutil, ignore cases where
the source and the destination are the same file.
This allows the following scenario:
- Symlink t-h-t from the installed package to a work dir
- Process j2 templates with overwrite in the work dir
Required-by: https://review.openstack.org/#/c/542875
Change-Id: I9a9c32f05fde325709998f4fe8bc7fef6c25b5c5
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
This change introduces some basic repo management for fast-forward
upgrades using the tripleo-repos tool as the default implementation.
The following parameters have been added to the template to allow for
additional implementations in the future:
FastForwardRepoType - Currently defaults to tripleo-repos
FastForwardRepoArgs - Currently defaults to tripleo-repos args for O and P
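A minimal sketch of how these parameters could be declared in the
template (types and default args abbreviated/assumed):

  parameters:
    FastForwardRepoType:
      type: string
      default: tripleo-repos
    FastForwardRepoArgs:
      type: json
      default:
        tripleo_repos:
          ocata: '-b ocata current'  # assumed example args
          pike: '-b pike current'    # assumed example args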
bp fast-forward-upgrades
Change-Id: I92f6f5015f34e6c5e8ef131f303d9c8144d1c83e
As outlined in the spec, fast-forward upgrades aim to take an
environment from an initial release N to a release N+X where X>=2,
beyond the traditionally supported N+1 upgrade path provided today by
many OpenStack projects.
For TripleO the first phase of this upgrade will be to move the
environment to the release prior to the target release. This will be
achieved by disabling all OpenStack control plane services and then
performing the minimum number of steps required to upgrade each service
through each release until finally reaching the target release.
This change introduces the framework for this phase of the fast-forward
upgrades by adding playbooks and task files as outputs to RoleConfig.
- fast_forward_upgrade_playbook.yaml
This is the top level play and acts as the outer loop of the process,
iterating through the required releases as set by the
FastForwardUpgradeReleases parameter for the fast-forward section of the
upgrade. This currently defaults to Ocata and Pike for Queens.
Note that this play is run against the overcloud host group and it is
currently assumed that the inventory used to run this play is provided
by the tripleo-ansible-inventory command.
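In rough outline the play looks like this (simplified sketch; the
real playbook is generated from the RoleConfig output):

  - hosts: overcloud
    tasks:
      - include_tasks: fast_forward_upgrade_release_tasks.yaml
        with_items: ['ocata', 'pike']  # FastForwardUpgradeReleases
        loop_control:
          loop_var: release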
- fast_forward_upgrade_release_tasks.yaml
This output simply imports the top level prep and bootstrap task files.
- fast_forward_upgrade_prep_tasks.yaml
- fast_forward_upgrade_bootstrap_tasks.yaml
These outputs act as the inner loop for the fast-forward upgrade phase,
iterating over step values while importing their associated role tasks.
As prep tasks are carried out first for each release we loop over step
values starting at 0 and ending at the defined
fast_forward_upgrade_prep_steps_max, currently 3.
Following this we then complete the bootstrap tasks for each release,
looping over step values starting at
fast_forward_upgrade_prep_steps_max + 1, currently 4, and ending at
fast_forward_upgrade_steps_max, currently 9.
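Illustratively, the prep loop is along these lines, with the
bootstrap loop doing the same from step 4 to 9 (sketch only):

  - include_tasks: fast_forward_upgrade_prep_role_tasks.yaml
    with_sequence: start=0 end=3  # fast_forward_upgrade_prep_steps_max
    loop_control:
      loop_var: step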
- fast_forward_upgrade_prep_role_tasks.yaml
- fast_forward_upgrade_bootstrap_role_tasks.yaml
These outputs then finally import the fast_forward_upgrade_tasks files
generated by the FastForwardUpgradeTasks YAQL query for each role. For
prep tasks these are always included on every Ansible host of a given
role. This differs from bootstrap tasks, which are only included for
the first host associated with a given role.
This will result in the following order of task imports with their
associated value of release and step:
fast_forward_upgrade_playbook
\_fast_forward_upgrade_release_tasks
\_fast_forward_upgrade_prep_tasks - release=ocata
\_fast_forward_upgrade_prep_role_tasks - release=ocata
\_$roleA/fast_forward_upgrade_tasks - release=ocata, step=0
\_$roleB/fast_forward_upgrade_tasks - release=ocata, step=0
\_$roleA/fast_forward_upgrade_tasks - release=ocata, step=1
\_$roleB/fast_forward_upgrade_tasks - release=ocata, step=1
\_$roleA/fast_forward_upgrade_tasks - release=ocata, step=2
\_$roleB/fast_forward_upgrade_tasks - release=ocata, step=2
\_$roleA/fast_forward_upgrade_tasks - release=ocata, step=3
\_$roleB/fast_forward_upgrade_tasks - release=ocata, step=3
\_fast_forward_upgrade_bootstrap_tasks - release=ocata
\_fast_forward_upgrade_bootstrap_role_tasks - release=ocata
\_$roleA/fast_forward_upgrade_tasks - release=ocata, step=4
\_$roleB/fast_forward_upgrade_tasks - release=ocata, step=4
\_$roleA/fast_forward_upgrade_tasks - release=ocata, step=5
\_$roleB/fast_forward_upgrade_tasks - release=ocata, step=5
\_$roleA/fast_forward_upgrade_tasks - release=ocata, step=N
\_$roleB/fast_forward_upgrade_tasks - release=ocata, step=N
\_fast_forward_upgrade_release_tasks
\_fast_forward_upgrade_prep_tasks - release=pike
\_fast_forward_upgrade_prep_role_tasks - release=pike
\_$roleA/fast_forward_upgrade_tasks - release=pike, step=0
\_$roleB/fast_forward_upgrade_tasks - release=pike, step=0
\_$roleA/fast_forward_upgrade_tasks - release=pike, step=1
\_$roleB/fast_forward_upgrade_tasks - release=pike, step=1
\_$roleA/fast_forward_upgrade_tasks - release=pike, step=2
\_$roleB/fast_forward_upgrade_tasks - release=pike, step=2
\_$roleA/fast_forward_upgrade_tasks - release=pike, step=3
\_$roleB/fast_forward_upgrade_tasks - release=pike, step=3
\_fast_forward_upgrade_bootstrap_tasks - release=pike
\_fast_forward_upgrade_bootstrap_role_tasks - release=pike
\_$roleA/fast_forward_upgrade_tasks - release=pike, step=4
\_$roleB/fast_forward_upgrade_tasks - release=pike, step=4
\_$roleA/fast_forward_upgrade_tasks - release=pike, step=5
\_$roleB/fast_forward_upgrade_tasks - release=pike, step=5
\_$roleA/fast_forward_upgrade_tasks - release=pike, step=N
\_$roleB/fast_forward_upgrade_tasks - release=pike, step=N
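For context, a role or service's fast_forward_upgrade_tasks are plain
Ansible tasks gated on release and step, e.g. (service name here is
purely illustrative):

  fast_forward_upgrade_tasks:
    - name: Stop the service
      service:
        name: openstack-example-api  # illustrative name
        state: stopped
      when:
        - release == 'ocata'
        - step|int == 1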
bp fast-forward-upgrades
Change-Id: Ie2683fd7b81167abe724a7b9245bf85a0a87ad1d
If we use variables defined in a later step in a conditional before
checking which step we are on, the tasks will fail.
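Since Ansible evaluates a list of when conditions in order and stops
at the first false one, putting the step check first avoids touching
variables that are only defined in later steps, e.g. (variable names
illustrative):

  when:
    - step|int == 5
    - task_result is defined
    - task_result.rc == 0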
Resolves: rhbz#1535457
Closes-Bug: #1743764
Change-Id: Ic21f6eb5c4101f230fa894cd0829a11e2f0ef39b
The template processor fails to locate *.j2 files
when a custom output dir is specified.
Ensure *.j2 templates are on their expected search
paths for the upcoming parsing and rendering.
Change-Id: Idbc93e27574c66a9a5a73e3fcd7e88647282f201
Closes-bug: #1748425
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
In the puppet neutron OVS agent we apply a workaround which we forgot
to reuse in the docker service definition.
Resolves: rhbz#1536142
Closes-Bug: #1744126
Change-Id: If3a63cad754a875f56b604033c1aff498243140c
The L2GW Neutron driver is only present in the
neutron-server-opendaylight image. This service will apply its
configuration to that image, but it should be extensible to other
containers such as neutron-server in the future.
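Sketch of the relevant puppet_config wiring (puppet tag and image
parameter names assumed), which points config generation at the
ODL-flavoured neutron-server image:

  puppet_config:
    config_volume: neutron
    puppet_tags: neutron_config,neutron_l2gw_service_config
    config_image: {get_param: DockerNeutronApiOpendaylightImage}  # assumed param name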
Depends-On: I22023a645c4752c6371b5cea5ab69c7503991887
Change-Id: I9c39e9ff2ce2e15d3e383035c8cac7413e9eeb03
Signed-off-by: Ricardo Noriega <rnoriega@redhat.com>
The BGPVPN Neutron driver is only present in the
neutron-server-opendaylight image. This service will apply its
configuration to that image, but it should be extensible to other
containers such as neutron-server in the future.
Depends-On: I22023a645c4752c6371b5cea5ab69c7503991887
Change-Id: Ie2b958f67859d285b02af5a80b2a14ccaaf9820a
Signed-off-by: Ricardo Noriega <rnoriega@redhat.com>
rootwrap.conf is not a nova config file.
Also cleaned up redundant config file args, which were the same as
the defaults.
Change-Id: I4db5b0c896e7b3ee00c0d97cf07caacb83f04a9c
Related-bug: 1739492