27 Commits

Ade Lee
2a83856585 Move ipa enrollment to host_prep_tasks
This addresses a possible bug when using FreeIPA to do TLS
everywhere.

It is possible that the IPA server is not on the ctlplane.
In this case, when the nodes start up, the registration of the node
with IPA will fail, resulting in failed certificate issuance requests
later on.

We introduce a composable service to run in host_prep_tasks.
This will always run once the networks have been set up.  If the
instance has already been enrolled (by cloud-init or in an update),
then the script executed by the service will just exit.

In this iteration, we simply execute the code that cloud-init
would have run.  In later releases, we will execute all the code
performed by the novajoin server here in Ansible - and deprecate
the novajoin server.

Change-Id: I31f64c3cbd1d151e3c2a436cc3e2ec5316535087
Co-Authored-By: Juan Antonio Osorio Robles <jaosorior@redhat.com>
Resolves: rhbz#1661635
Closes-Bug: #1815924
2019-02-14 16:07:17 +00:00
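For illustration, a minimal sketch of what such a host_prep_tasks based composable service might look like; the service name and script path are hypothetical, not taken from the change:

heat_template_version: rocky

outputs:
  role_data:
    description: FreeIPA enrollment via host_prep_tasks (sketch)
    value:
      service_name: ipa_client
      host_prep_tasks:
        # Runs on every node once the networks are up; the script is
        # expected to exit immediately if the host is already enrolled.
        - name: Enroll node with FreeIPA
          become: true
          shell: /usr/local/bin/freeipa-enroll.sh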
lkuchlan
a2d0899f9c Add ContainerImagePrepare service to ControllerStorageNfs role
When using the ControllerStorageNfs role, images are not pushed to the
local registry, since the ContainerImagePrepare service is missing from
the ControllerStorageNfs role.

Closes-Bug: #1814057

Change-Id: Iafe7bf37d7d04eed32a32b8881fab48fdc9f9dd6
2019-02-04 14:10:53 +00:00
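For reference, a rough sketch of the resulting roles_data entry (heavily abbreviated; the role's existing services stay as they are):

- name: ControllerStorageNfs
  ServicesDefault:
    - OS::TripleO::Services::ContainerImagePrepare
    # ... the role's other existing services ...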
Zuul
63a657d2f4 Merge "Remove all glance-registry related changes" 2019-01-24 00:00:44 +00:00
Simon Dodsley
f77d8e7909 Add missing entries for Pure Storage Cinder Backend and fix typos
Closes-bug: 1807195
Change-Id: Ibaaaab9d4169829c0f71cf7acea25971b4526695
2019-01-23 09:34:13 -05:00
Pranali Deore
2dcd56041c Remove all glance-registry related changes
Removed all glance-registry related changes from THT, since the
Glance Registry has become redundant and been deprecated from
Glance due to the move to the Glance V2 API. The registry code base
is also going to be removed from the Glance project once all the
dependencies are removed from other projects.

Change-Id: I548816e3f2d8b9deed8a6f0ba3e203f84ad3d9ca
Closes-Bug: #1808911
2019-01-22 15:07:29 -07:00
Zuul
845bc3e845 Merge "Remove MongoDB" 2019-01-07 18:39:49 +00:00
Emilien Macchi
be07f991b6 Remove MongoDB
MongoDB support was stopped in Pike and it is not used anywhere now.
Therefore, in Stein we are removing it to clean things up.

Change-Id: I4ec8f35b1dd71c25cfb41cc54105ac743ef67745
2019-01-04 15:17:00 +00:00
Harald Jensås
2f2d8183e6 L3 routed networks - subnet fixed_ips (3/3)
When using neutron routed networks we need to specify
either the subnet or an IP address in the fixed-ips-request
when creating neutron ports.

a) For the VIPs:

Adds VipSubnetMap and VipSubnetMapDefaults parameters in
service_net_map.yaml. The two maps are merged, so that the
operator can override the subnet where a VIP port should be
hosted. For example:

parameter_defaults:
  VipSubnetMap:
    ctlplane: ctlplane-leaf1
    InternalApi: internal_api_leaf1
    Storage: storage_leaf1
    redis: internal_api_leaf1

b) For overcloud node ports:

Enrich 'networks' in the roles definition to include both
network and subnet data. Changes the list of strings to a
map. New schema:

- name: <role_name>
  networks:
    <network_name>:
      subnet: <subnet_name>

For backward compatibility a conditional is used to check
if the data is a map or not. In either case the internal
list of role networks is created as '_role_networks' in
the jinja2 templates.

When the data is a map and the map contains the 'subnet'
key, the subnet specified in roles_data.yaml is used as
the subnet in the fixed-ips-request when ports are created.
If subnet is not set (or role.networks is not a map) the
default will be {{network.name_lower}}_subnet.

Also, since the fixed_ips request passed to VIP ports is no
longer [] by default, the conditional has been updated to
test for 'ip_address' entries in the request.

Partial: blueprint tripleo-routed-networks-templates
Depends-On: I773a38fd903fe287132151a4d178326a46890969
Change-Id: I77edc82723d00bfece6752b5dd2c79137db93443
2019-01-03 19:07:20 +01:00
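A short sketch of the new roles_data 'networks' map for a hypothetical leaf role (role, network and subnet names are illustrative):

- name: ComputeLeaf1
  networks:
    InternalApi:
      subnet: internal_api_leaf1
    Storage:
      subnet: storage_leaf1
    Tenant:
      subnet: tenant_leaf1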
karthik s
512c032a0b Add bootparams service for all roles
NIC partitioning requires IOMMU to be enabled on roles using it.
By adding the BootParams service to all the roles, we could
enable IOMMU selectively by supplying the role-specific parameter
"KernelArgs". If a role doesn't use NIC partitioning then
"KernelArgs" shall not be set and backward compatibility would
be retained.

Change-Id: I2eb078d9860d9a46d6bffd0fe2f799298538bf73
2018-11-19 05:02:07 -05:00
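As an illustration, IOMMU could then be enabled only for a role that uses NIC partitioning via its role-specific parameters (role name hypothetical):

parameter_defaults:
  # Only this role gets the kernel args; other roles leave KernelArgs
  # unset and keep the previous behaviour.
  ComputeSriovParameters:
    KernelArgs: "intel_iommu=on iommu=pt"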
Emilien Macchi
7bebdefda8 Introduce OS::TripleO::Services::Podman
Podman service will be in charge of installing, configuring, upgrading
and updating podman in TripleO.

For now, the service is disabled by default but included in all roles.
Later in the cycle, we'll make it the default.

Note: when Podman is able to run in TripleO without Docker,
we'll follow https://review.openstack.org/#/c/586679/ and make it
a generic service that can be switched to either podman or docker.
But for now, we need podman and docker working side by side.

Depends-On: Ie9f5d3b6380caa6824ca940ca48ed0fcf6308608
Change-Id: If9e311df2fc7b808982ee54224cc0ea27e21c830
2018-10-02 01:47:46 +00:00
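Enabling the service would then be a matter of overriding the default mapping in an environment file, roughly (template path is an assumption):

resource_registry:
  OS::TripleO::Services::Podman: ../puppet/services/podman.yaml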
Zuul
f1571e6834 Merge "Cleanup ControllerStorageNfs role" 2018-09-27 12:24:50 +00:00
Tom Barron
5d015ceb53 Cleanup ControllerStorageNfs role
1) Complete the role description to include the main
point.

2) Set a reasonable CountDefault given that the ganesha
service should be deployed in an HA configuration.

Change-Id: Ie787693c0f5360529a81f6e03bdcae9e19488a2c
2018-09-09 18:23:14 -04:00
Alex Schultz
f7f9053963 Create a Timesync service declaration
In order to support switching between multiple timesync backends, let's
simplify the service configurations for the roles so that there is a
single timesync service.  This timesync service should point to the
expected backend (ntp/ptp/chrony).

Change-Id: I986d39398b6143f6c11be29200a4ce364575e402
Related-Blueprint: tripleo-chrony
2018-09-04 21:00:56 +00:00
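Selecting the backend would then happen in the resource registry rather than in the role, roughly like this (template path illustrative, chrony shown as an example backend):

resource_registry:
  OS::TripleO::Services::Timesync: ../puppet/services/time/chrony.yaml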
Martin Mágr
b76d7623ac QDR for metrics collection purposes
This patch adds a new composable service (QDR) for containerized
deployments. The metrics QDR will run on each overcloud node in 'edge'
mode. This basically means that there may be two QDRs running
on controllers in case oslo messaging is deployed. This is why
we need a separate composable service for this use case.

Depends-On: If9e3658d304c3071f53ecb1c42796d2603875fcd
Depends-On: I68f39b6bda02ba3920f2ab1cf2df0bd54ad7453f
Depends-On: I73f988d05840eca44949f13f248f86d094a57c46
Change-Id: I1353020f874b348afd98e7ed3832033f85a5267f
2018-07-31 21:55:45 +00:00
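Enabling the metrics QDR would look roughly like the following (service name and template path are assumptions for illustration):

resource_registry:
  OS::TripleO::Services::MetricsQdr: ../docker/services/metrics/qdr.yaml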
Zuul
30dbc58252 Merge "Add IronicInspector to the Controller roles" 2018-05-28 13:58:18 +00:00
Dan Prince
3b56c9e501 Add validation on role names
Also, fixes role templates that weren't passing this new validation.

Closes-bug: #1756346

Change-Id: I387386a0d7cf47fe11f24a68b9f2bb2ee2f0b7bd
2018-04-25 15:42:22 +00:00
Zuul
822bd996b3 Merge "Support separate oslo.messaging services for RPC and Notification" 2018-04-25 04:43:46 +00:00
Andrew Smith
78bc457585 Support separate oslo.messaging services for RPC and Notification
This commit introduces oslo.messaging services in place of a single
rabbitmq server. This will enable the separation of rpc and
notifications while allowing either the continued use of a single
backend (e.g. a rabbitmq server) or a dual backend for the messaging
communications.

This patch:
* add oslo_messaging_rpc and oslo_messaging_notify services
* add puppet services for rpc and notification
  (rabbitmq and qdrouterd servers)
* add docker services to deploy rpc (rabbitmq or qdrouterd)
  and notify (rabbitmq or shared)
* retain rabbit parameters for core services
* update resource registries, service_net_map, roles, etc.
* update ci environment container scenarios
* add environment generator for messaging
* add release note

Depends-On: Ic2c1a58526febefc1703da5fec12ff68dcc0efa0
Depends-On: I154e2fe6f66b296b9b643627d57696e5178e1815
Depends-On: I03e99d35ed043cf11bea9b7462058bd80f4d99da
Needed-By: Ie181a92731e254b7f613ad25fee6cc37e985c315
Change-Id: I934561612d26befd88a9053262836b47bdf4efb0
2018-04-22 04:33:44 +00:00
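A rough sketch of the resulting registry mappings for the dual-backend case, with RPC over qdrouterd and notifications over rabbitmq (service names and template paths are assumptions for illustration):

resource_registry:
  OS::TripleO::Services::OsloMessagingRpc: ../docker/services/messaging/rpc-qdrouterd.yaml
  OS::TripleO::Services::OsloMessagingNotify: ../docker/services/messaging/notify-rabbitmq-shared.yaml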
Derek Higgins
c2a555f77f Add IronicInspector to the Controller roles
By default it's stubbed out with OS::Heat::None, so the
default Controller won't include it.

Change-Id: I3bb104538a836a8f2e20e04dea58edfa0e568c2d
2018-04-18 11:21:56 +01:00
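Since the default mapping is OS::Heat::None, enabling inspector on the Controller is an explicit override, roughly (template path is an assumption):

resource_registry:
  OS::TripleO::Services::IronicInspector: ../docker/services/ironic-inspector.yaml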
Carlos Goncalves
9526cef547 Containerize Neutron LBaaS service plugin
Change-Id: I68e5ca5a78a2bd08082a494b636c6e2debb6bbae
2018-04-18 10:53:48 +02:00
Harald Jensas
5203e43979 Add Ironic Networking Baremetal Templates
Ironic neutron agent will be installed on controller nodes, or
networker nodes, when environments/services/ironic.yaml or
environments/services-docker/ironic.yaml is used.

It should also be enabled on the undercloud.

Also enables the ``baremetal`` ML2 mechanism driver on the undercloud.

Depends-On: Ic1f44414e187393d35e1382a42d384760d5757ef
Depends-On: I3c40f84052a41ed440758b971975c5c81ace4225
Change-Id: I0b4ef83a5383ff9726f6d69e0394fc544c381a7e
2018-04-12 23:59:34 +02:00
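On the undercloud, enabling the ``baremetal`` mechanism driver amounts to adding it to the driver list, roughly:

parameter_defaults:
  NeutronMechanismDrivers: ['openvswitch', 'baremetal']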
Zuul
97664cb9fe Merge "FFU: Fix glance tasks" 2018-03-19 08:23:35 +00:00
Zuul
78af246b66 Merge "Adds fast_forward_upgrade_tasks for Heat services" 2018-03-16 23:35:08 +00:00
Lukas Bezdicka
9765f8d225 FFU: Fix glance tasks
We need to register a fact instead of rerunning checks, and we can't
hijack the glance-api service to handle the glance-registry removal.
For the removal of glance-registry we reintroduce the disabled service
to the Controller role.

Change-Id: I38ab5a91b541e7e070f188ee73ef4c7dd7f65eaa
2018-03-14 17:54:35 +00:00
rajinir
a462d796a7 Add support for Dell EMC XtremIO Cinder ISCSI Backend
This change adds a new define for cinder::backend::dellemc_xtremio_iscsi

Change-Id: Icf4a199383064e7884953f0f5085dcef54c3b9a4
Implements: blueprint dellemc-xtremeio-cinder
2018-03-09 14:25:14 -06:00
marios
fa66d68c08 Adds fast_forward_upgrade_tasks for Heat services
Adds fast_forward_upgrade_tasks for the heat services: -api, -api-cfn,
-api-cloudwatch and -engine running under systemd are stopped
and also disabled (e.g. before being containerized, migrated to httpd etc).
Services are stopped in step 1, packages updated in step 6, and the
dbsync run in step 8.

Change-Id: Ida0b4cb7f6f0a9d966e2a79dd05460565d98aaf9
2018-03-07 17:41:27 +01:00
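A minimal sketch of what one of these fast_forward_upgrade_tasks lists might look like (task content is illustrative, only the step numbers come from the message above):

fast_forward_upgrade_tasks:
  - name: Stop and disable openstack-heat-api under systemd
    service:
      name: openstack-heat-api
      state: stopped
      enabled: false
    when: step|int == 1
  - name: Update heat packages
    package:
      name: 'openstack-heat*'
      state: latest
    when: step|int == 6
  - name: Run heat db sync
    command: heat-manage db_sync
    when: step|int == 8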
Jan Provaznik
96b82d149e Add support for ceph-nfs manila backend
If the ceph-nfs (ganesha) service is enabled, it's set up by ceph-ansible
and it can be used as a manila backend. Manila can be configured to use
ceph either directly (manila-cephfsnative-config-docker.yaml env file)
or through ganesha (environments/manila-cephfsganesha-config-docker.yaml
env file).

Change-Id: Ib408c7827e5fba0c1b01388db26363806fc64370
Partially-Implements: blueprint nfs-ganesha
2018-02-06 19:04:39 +00:00