Installing and configuring tmpwatch lets us get rid of some
ugly things in the logrotate configuration. As the container no longer
has network access, we have to install the tool on the host
directly - this isn't that bad.
To avoid issues with the logs managed by logrotate, we explicitly
exclude the patterns managed by the specific logrotate configuration.
Also, again to avoid issues and to ensure logrotate does its
own cleanup first, we only clean files one day later.
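A minimal sketch of the kind of host-side cleanup this could translate
to, expressed as an ansible cron task (tool options, paths and patterns
are illustrative assumptions, not the exact ones shipped by this change):

  - name: Daily tmpwatch cleanup of container logs, one day after logrotate
    cron:
      name: tmpwatch-container-logs
      special_time: daily
      # -X excludes globs handled by the dedicated logrotate configuration;
      # 25 (hours) keeps files around an extra day so logrotate runs first.
      job: >-
        tmpwatch --nodirs
        -X '/var/log/containers/*/*.log'
        -X '/var/log/containers/*/*/*.log'
        25 /var/log/containers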
Change-Id: Ic666388d9ba7556e7b68ab2fc1082957a9e26552
Congress doesn't seem to be used anywhere, we never had a bug report or
any sign of somebody out there actually using it.
Let's remove its support in TripleO, to reduce the codebase.
Change-Id: Idca6b12f1c0ca3bc15bedf6469d4063a4dac31fa
This addresses a possible bug when using FreeIPA to do TLS
everywhere.
It is possible that the IPA server is not on the ctlplane.
In this case, when the nodes start up, the registration of the node
with IPA will fail, resulting in failed certificate issuance requests
later on.
We introduce a composable service to run in host_prep_tasks.
This will always run once the networks have been set up. If the
instance has already been enrolled (by cloud-init or in an update),
then the script executed by the service will just exit.
In this iteration, we simply execute the code that cloud-init
would have run. In later releases, we will perform all the work
done by the novajoin server here in ansible - and deprecate the
novajoin server.
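As a rough sketch, the shape of such a composable service could look
like the following (the service name, marker file and script path are
illustrative assumptions, not the exact contents of this change):

  outputs:
    role_data:
      value:
        service_name: ipa_client_enroll
        host_prep_tasks:
          - name: Check whether the node is already enrolled with IPA
            stat:
              path: /etc/ipa/default.conf
            register: ipa_conf
          - name: Run the enrollment script only when not yet enrolled
            command: /usr/local/bin/ipa-client-enroll.sh
            when: not ipa_conf.stat.exists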
Change-Id: I31f64c3cbd1d151e3c2a436cc3e2ec5316535087
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Resolves: rhbz#1661635
Closes-Bug: #1815924
Remove all glance-registry related changes from THT, since the
Glance Registry has become redundant and has been deprecated from
Glance in favor of the Glance v2 API. The registry code base is
also going to be removed from the Glance project once all the
dependencies on it have been removed from other projects.
Change-Id: I548816e3f2d8b9deed8a6f0ba3e203f84ad3d9ca
Closes-Bug: #1808911
Change https://review.openstack.org/614457 added these
networks because of the defaults in ServiceNetMap. With
changes related to LP Bug #1809313 these are no longer
required, as the ServiceNetMap falls back to ctlplane
when networks are not defined, or are disabled, in the networks
data.
Related-Bug: #1809313
Depends-On: I102912851a3b9952daaf7c4d5a34a919f527f805
Change-Id: Ic4f22692f93db4ce0db0f4fbc83eca6b492b28e7
We still have Nova for SSH key management when deploying a standalone
cloud, so allow Octavia deployments for that case as well.
Jinja2 rendering of the octavia service template provides that
functionality by relying on a new role tag 'standalone'.
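For example, a role that should get this behaviour would carry the new
tag in roles_data.yaml (role name and the other tags are illustrative):

  - name: Standalone
    tags:
      - primary
      - controller
      - standalone

The octavia service template can then branch on the presence of
'standalone' in the role's tags during jinja2 rendering.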
Change-Id: I69f3623646ec5b65109e0a4f0c16139018da9282
Closes-bug: #1806113
Co-Authored-By: Harald Jensas <hjensas@redhat.com>
Signed-off-by: Bogdan Dobrelya <bdobreli@redhat.com>
MongoDB support was dropped in Pike and it is not used anywhere now.
Therefore, in Stein we are removing it to clean things up.
Change-Id: I4ec8f35b1dd71c25cfb41cc54105ac743ef67745
When using neutron routed networks we need to specify
either the subnet or an IP address in the fixed-ips-request
when creating neutron ports.
a) For the VIPs:
Adds VipSubnetMap and VipSubnetMapDefaults parameters in
service_net_map.yaml. The two maps are merged, so that the
operator can override the subnet where a VIP port should be
hosted. For example:
  parameter_defaults:
    VipSubnetMap:
      ctlplane: ctlplane-leaf1
      InternalApi: internal_api_leaf1
      Storage: storage_leaf1
      redis: internal_api_leaf1
b) For overcloud node ports:
Enrich 'networks' in the roles definition to include both
network and subnet data. This changes the entry from a list of
strings to a map. New schema:
  - name: <role_name>
    networks:
      <network_name>:
        subnet: <subnet_name>
For backward compatibility a conditional is used to check
if the data is a map or not. In either case the internal
list of role networks is created as '_role_networks' in
the jinja2 templates.
When the data is a map, and the map contains the 'subnet'
key, the subnet specified in roles_data.yaml is used as
the subnet in the fixed-ips-request when ports are created.
If subnet is not set (or role.networks is not a map) the
default will be {{network.name_lower}}_subnet.
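As a concrete (purely illustrative) example, pinning a Compute role's
networks to leaf subnets with the new schema would look like:

  - name: Compute
    networks:
      InternalApi:
        subnet: internal_api_leaf1
      Storage:
        subnet: storage_leaf1
      Tenant:
        subnet: tenant_leaf1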
Also, since the fixed_ips request passed to VIP ports is no
longer [] by default, the conditional has been updated to
test for 'ip_address' entries in the request.
Partial: blueprint tripleo-routed-networks-templates
Depends-On: I773a38fd903fe287132151a4d178326a46890969
Change-Id: I77edc82723d00bfece6752b5dd2c79137db93443
The standalone job was not running yum update on the containers. To do
so we need to specify the update parameters in the
container-prepare-parameters [1], and we also have to activate the
local docker registry, call the container prepare service and enable
the registry with podman.
[1] https://review.openstack.org/#/c/621517/
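Roughly, the prepare parameters in question follow this shape (the
namespace, tag and modify_vars values shown are illustrative
assumptions; see [1] for the exact values used by the job):

  parameter_defaults:
    ContainerImagePrepare:
      - push_destination: true
        set:
          namespace: docker.io/tripleomaster
          tag: current-tripleo
        # Run a yum update inside each image as part of the prepare step.
        modify_role: tripleo-modify-image
        modify_append_tag: "-updated"
        modify_vars:
          tasks_from: yum_update.yml
          yum_repos_dir_path: /etc/yum.repos.d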
Change-Id: I74e817bc9b9dd522db3da7753c91a3884d99f8c8
Related-Bug: #1805968
For network isolation, we specify the available networks for each role.
Therefore, there is no point in creating noop network resources for
networks that are not available/connected. Doing so results in redundant
host entries on overcloud nodes for networks that are not available.
If a network is not available for a role we don't need to create
those extra noop resources.
For the Undercloud/Standalone roles we keep all networks in the roles
data, as the default ServiceNetMap references non-ctlplane networks
even though they map to ctlplane.
Change-Id: I07822ec0cba7eed352c0010eb893b5e5a522e95c
Closes-Bug: #1800811
We did not have an easy way to ensure all the openstack clients are
installed on a given system. In the old instack-undercloud installation,
we were installing some additional clients outside of the ones required
via python-tripleoclient. To allow a user to quickly install all the
clients on a given system, this change adds an OpenStack clients
"service" which can be added to a role to ensure the clients are
available. In the future if we provide a client container, this service
can be converted into a container deployment mechanism.
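With the service in place, making the clients available is just a
matter of listing it in a role definition (the service name below is
assumed for illustration; use the one actually defined by this change):

  - name: Undercloud
    ServicesDefault:
      - OS::TripleO::Services::OpenStackClients
      # ... the rest of the role's services ...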
Change-Id: If878c2ab7679eea2fff42b410bec9c8c9b92ed6f
Closes-Bug: #1800001
In some cases we may need to disable SELinux (like in CI). The role
needs the SELinux service so that this management can be done during
the deployment.
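For example, a CI environment could then switch enforcement off with
something along these lines (assuming the service exposes an
SELinuxMode-style parameter):

  parameter_defaults:
    SELinuxMode: permissive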
Change-Id: Ife3c4600f5bd70490a68059eb27c5100743a5298
Closes-Bug: #1797910
Podman service will be in charge of installing, configuring, upgrading
and updating podman in TripleO.
For now, the service is disabled by default but included in all roles.
Later in the cycle, we'll make it the default.
Note: once Podman is able to run in TripleO without Docker,
we'll do like https://review.openstack.org/#/c/586679/ and make it a
generic service that can be switched to either podman or docker.
But for now, we need podman and docker working side by side.
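Until then, enabling the service in an environment would roughly be a
resource_registry override (the service name is an assumption and the
path is left as a placeholder):

  resource_registry:
    OS::TripleO::Services::Podman: <path to the podman service template>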
Depends-On: Ie9f5d3b6380caa6824ca940ca48ed0fcf6308608
Change-Id: If9e311df2fc7b808982ee54224cc0ea27e21c830
The standalone role can be used either with the tripleo deploy command
to deploy locally, or it can be used with an undercloud to deploy an
all-in-one node. This change provides a sample set of environment files
for both deployment mechanisms.
Change-Id: Ibc735ac4326a9217469e368c074de8b0df7689bd
Related-Blueprint: all-in-one
In order to support switching between multiple timesync backends, let's
simplify the service configurations for the roles so that there is a
single timesync service. This timesync service should point to the
expected backend (ntp/ptp/chrony).
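With a single timesync service, picking a backend should come down to
one resource_registry mapping (the service name is an assumption and
the path is left as a placeholder):

  resource_registry:
    OS::TripleO::Services::Timesync: <path to the chrony, ntp or ptp service template>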
Change-Id: I986d39398b6143f6c11be29200a4ce364575e402
Related-Blueprint: tripleo-chrony
This patch adds a new composable service (QDR) for containerized
deployments. Metrics QDR will run on each overcloud node in 'edge'
mode. This basically means that there may end up being two QDRs
running on controllers in case oslo messaging is deployed. This is
why we need a separate composable service for this use case.
Depends-On: If9e3658d304c3071f53ecb1c42796d2603875fcd
Depends-On: I68f39b6bda02ba3920f2ab1cf2df0bd54ad7453f
Depends-On: I73f988d05840eca44949f13f248f86d094a57c46
Change-Id: I1353020f874b348afd98e7ed3832033f85a5267f
Update the standalone role to include compute and other OpenStack
services. This list can be used to deploy a standalone all-in-one
nova/neutron/cinder/glance node.
Depends-On: I796a192b46c3372bebb41096c1051f01445b7e57
Depends-On: If55cf8f90ee7be4acd40fda1f72bb1f31d218b57
Change-Id: I6713469bfbd4fe9fc47d3cb2d3571fe10871f34e
Related-Blueprint: all-in-one
For a standalone all-in-one, we need to create a basic role that has
some of the services, a network config for a single node and an
environment file that has all the services defined but disabled so
that we can enable just the services we will need. In the future, we
will likely make the service list more dynamic but for now it contains a
minimal set of services for a keystone/openshift/kubernetes deployment.
Change-Id: Ieb7c94563bd0132393b5fa268d743981f6e0b6f2
Related-Blueprint: all-in-one