This is problematic for the containerised heat-agents: lsb_release has
to be bind-mounted in, and Atomic Host doesn't even have lsb_release
installed.
Instead, just write to every /etc/cloud/templates/hosts.*.tmpl file.
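A minimal sketch of the distro-agnostic approach (HOSTS_ENTRY is a
placeholder for the rendered entry, not the actual variable name used):

    # Append the hosts entry to every cloud-init hosts template, whatever
    # the distro, instead of asking lsb_release which template applies.
    for tmpl in /etc/cloud/templates/hosts.*.tmpl; do
        echo "${HOSTS_ENTRY}" >> "${tmpl}"
    done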
Change-Id: If2aab7e9b1e03aa657baf1c33aa4392ef7044075
The script run-os-net-config [1] copies ifcfg-* files in from the host
before running os-net-config. Apparently it was done this way because
the other scripts in /etc/sysconfig/network-scripts/ differed between
the host and the agent container. This should be less of an issue now
that both the host and heat-agents run CentOS 7 (even when the host is
Atomic).
tripleo-heat-templates recently changed to running os-net-config in a
deployment script instead of an os-refresh-config script [2]. This
means that our run-os-net-config approach currently results in
os-net-config being executed twice.
Another issue with run-os-net-config is that it copies ifcfg-* from
host to container, but not back again. This means that rebooting the
server will result in unconfigured interfaces until os-net-config is
somehow run again.
This change bind mounts /etc/sysconfig/network-scripts/ from the host
and uses the conventional approach to running os-refresh-config.
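Roughly, the bind mount amounts to something like this (the image name
and command are illustrative assumptions, not the actual agent
invocation):

    # Mount the host's network-scripts into the container so the ifcfg-*
    # files written by os-net-config persist on the host across reboots.
    docker run --net=host \
        -v /etc/sysconfig/network-scripts:/etc/sysconfig/network-scripts \
        heat-agents os-refresh-config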
This may fix the issue where compute nodes lose network connectivity.
Closes-Bug: #1646897
[1] http://git.openstack.org/cgit/openstack/tripleo-common/tree/heat_docker_agent/run-os-net-config
[2] I0ed08332cfc49a579de2e83960f0d8047690b97a
Change-Id: I763fc8d8e3eb10ac64d33e46c92888d211003e72
For usability, and to reduce the number of environment files that need
to be given when enabling TLS on the internal network, it's convenient
to enable TLS on HAProxy's internal front-ends directly, instead of
doing that in a separate environment file.
bp tls-via-certmonger
Change-Id: Icef0c70b4b166ce2108315d5cf0763d4e8585ae1
It's no longer available in Neutron (removed in Mitaka). See:
I2a879213c3b095a007a4531f430a33cea9fdf1bd
Change-Id: I044c648eb8c4933667b8ea2c9159a30e5ebb7df3
When multiple DeployArtifactURLs are given, the script tries to
download all artifact URLs with a single request instead of downloading
each URL on its own. Fetch each URL with a separate request instead.
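A hedged sketch of the intended per-URL behaviour (variable names are
assumptions, not the script's actual ones):

    # Fetch each artifact URL with its own request rather than passing
    # the whole space-separated list to a single curl invocation.
    for url in ${artifact_urls}; do
        curl -sf -o "$(mktemp)" "${url}"
    done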
Change-Id: I6a8be699aff7023a67702bb1d3ddc2273984cd08
This seems to have broken the updates job, causing it to fail
with the following error:
Can't set long node name!\nPlease check your configuration\n
Related-Bug: #1646873
This reverts commit 3e9fcfd09320ace07bc1bd4cb57feb98cd057332.
Change-Id: I72ba891cd9cd8c4f1bc204144f46aaabbdfd3647
There were several instances where the short names/FQDNs were being
retrieved in the same way in the roles' templates. So this introduces a
mapping to get these values, in order to reduce clutter.
Change-Id: Ie7df360bb69d56655f3e0fcbbf4d297db39b7a26
Updates the get-occ-config.sh script used with the deployed-server
environment to support custom roles. Any custom role name and a
corresponding set of hosts (IP addresses or hostnames) can now be passed
to the script, and it will query for the proper nested stack UUIDs and
configure os-collect-config appropriately on the respective nodes.
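Assumed usage, based on the description above (the exact environment
variable names are an assumption):

    # Each custom role gets a <Role>_hosts variable listing its nodes.
    export OVERCLOUD_ROLES="ControllerDeployedServer ComputeDeployedServer"
    export ControllerDeployedServer_hosts="192.168.24.10 192.168.24.11"
    export ComputeDeployedServer_hosts="192.168.24.20"
    ./get-occ-config.sh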
Change-Id: I8fc39e6d18cd70ff881e2a284234b26261018d67
Improve scenario001 with Cinder + RBD coverage.
Also remove the Barbican bits; we don't deploy Barbican in scenario001,
but in scenario002.
Change-Id: Ib9cadbefcb3ddcdb4812f47ff5496e74b2bd888d
Improve scenario003 to configure Keystone tokens with the Fernet
provider. Scenario001 and scenario002 will still deploy the uuid
provider for now.
Change-Id: I8c671d0371b2c3590b58b9623bb0df0b0c625a5b
Like Puppet OpenStack CI, implement scenario004 with the Ceph RGW
scenario, where Glance uses it as an image storage backend.
Change-Id: If055ca225c456a738c5726ef1e76a4a4f9c566a8
'user' is required, or puppet-ceph will complain that the Keystone_user
has no title:
Evaluation Error: Missing title. The title expression resulted in undef
at /etc/puppet/modules/ceph/manifests/rgw/keystone/auth.pp
The value is set to Swift, as we use the same credentials as the Swift
service.
Closes-Bug: #1642524
Change-Id: Ib4a7c07086b0b3354c8e589612f330ecdffdc637
The resource is failing, and it prevents us from adding more coverage.
Until we figure out what's wrong with it, let's disable it.
Change-Id: If89775bf67d686327d0d27222e0c9179be74a668
This shows how we could wire in the upgrade steps using Ansible, as was
previously proposed, e.g. in https://review.openstack.org/#/c/321416/,
but more closely integrated with the new composable services
architecture.
It's also very similar to the approach taken by SpinalStack, where
per-service Ansible snippets were combined and then run in a series of
steps using Ansible tags.
This patch just enables the upgrade of Keystone; we'll add support for
other services in subsequent patches.
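Conceptually, the driving loop looks something like this (the playbook
and inventory names are illustrative):

    # Run the combined per-service upgrade tasks one step at a time by
    # limiting each ansible-playbook pass to a single step tag.
    for step in 1 2 3; do
        ansible-playbook -i inventory upgrade_tasks.yml --tags "step${step}"
    done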
Partially-Implements: blueprint overcloud-upgrades-per-service
Change-Id: I39f5426cb9da0b40bec4a7a3a4a353f69319bdf9
This changes how we get the network-based FQDNs for specific services:
from using the custom fact to using the new hiera entries.
Change-Id: Iae668a5d89fb7bee091db4a761aa6c91d369b276
Currently, one can get the network-based FQDNs via a custom puppet
fact. This is unreliable, as it's based on the ::hostname fact, which
we assume is set correctly by nova. However, this is not necessarily
the case (for instance, if you use pre-deployed servers, as we do with
the multinode jobs). In these cases, the ::hostname fact will return
something other than what we specified in nova, which effectively
breaks the configuration if we rely too much on the network-based FQDN
facts.
By using hiera instead, we avoid this issue, as we set those values to
be exactly what we expect (since we set them in the OS::TripleO::Server
resource).
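For illustration, the value can be checked on a node with the hiera CLI
(the key name here is an assumption based on the commit text):

    # The per-network FQDN now comes straight from hiera rather than
    # from the unreliable ::hostname-derived fact.
    hiera -c /etc/puppet/hiera.yaml fqdn_internal_api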
Change-Id: I6ce31237098f57bdc0adfd3c42feef0073c224fb
This patch optimizes how we deploy hiera by using a new
heat hook specifically designed to help compose hiera
within heat templates. As part of this change:
- We update all the 'hiera' software configurations to set the group to
  hiera instead of os-apply-config.
- The new format uses JSON instead of YAML. The hook writes out the
  hiera JSON directly, so no conversion takes place. Arrays, strings,
  and booleans all stay in their native formats. As such, we can avoid
  many of the awkward string and list conversions in t-h-t that were
  needed to support the previous YAML formatting.
- The new hook prefers JSON over YAML, so upgrading users will have the
  new files preferred. (We will post a cleanup routine for the old files
  soon, but this isn't a new behavior; JSON is now simply preferred.)
- A lot of services required edits to account for default settings that
  worked in YAML but no longer work correctly in the native JSON format.
  In almost all of these cases, I think the resulting code looks cleaner
  and is more explicit about what is getting configured in hiera on the
  actual nodes.
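For example, a datafile written by the new hook can be inspected
directly on a node (the path is an assumption):

    # Values are stored as native JSON types (arrays, booleans, strings),
    # so no YAML-to-JSON conversion happens on the node.
    jq '.' /etc/puppet/hieradata/service_configs.json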
Depends-On: I6a383b1ad4ec29458569763bd3f56fd3f2bd726b
Closes-bug: #1596373
Change-Id: Ibe7e2044e200e2c947223286fdf4fd5bcf98c2e1