The wait_for module waits up to 300 seconds for the port to start or
stop. This is meaningless and useless in the prechecks. This patch
changes the timeout to 1 second.
Change-Id: I9b251ec4ba17ce446655917e8ef5e152ef947298
Closes-Bug: #1688152
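For illustration, the kind of precheck task affected looks roughly
like this (the host variable and port are placeholders, not the exact
tasks touched by this patch):

    - name: Checking that the database port is free
      wait_for:
        host: "{{ api_interface_address }}"  # placeholder address variable
        port: 3306
        connect_timeout: 1
        timeout: 1
        state: stopped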
With nova cells_v2, at least one compute node is required in the
inventory to deploy nova.
This change adds a precheck to ensure at least one compute node is
present.
Change-Id: I242518ad3bd149ad245515299301777f6b3bdd08
Closes-Bug: #1686410
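A minimal sketch of such a precheck (group name and message are
illustrative):

    - name: Checking that at least one compute node is present
      fail:
        msg: "At least one compute node is required to deploy nova with cells_v2"
      run_once: true
      when: groups['compute'] | length < 1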
Using simple_cell_setup is not recommended. It is better to run
map_cell0 manually, create the base cell for non-cell deployments,
and run discover_hosts, as sketched below.
This PS migrates the current config to use the workflow described
at [1]. With the current workflow we run into the issue that services
are not mapped until cells are present, breaking deployment while
waiting for compute services to appear.
[1] https://docs.openstack.org/developer/nova/cells.html#fresh-install
Change-Id: Id061e8039e72de77a04c51657705457193da2d0f
Closes-Bug: #1682060
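The workflow from [1], sketched as bootstrap tasks (task names and
the cell name are illustrative; the commands are the documented
nova-manage ones):

    - name: Creating the cell0 mapping
      command: nova-manage cell_v2 map_cell0

    - name: Creating the base cell for non-cell deployments
      command: nova-manage cell_v2 create_cell --name=cell1

    - name: Mapping existing compute hosts
      command: nova-manage cell_v2 discover_hosts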
Some roles misused the 'node_config_directory' variable.
As described here:
https://github.com/openstack/kolla-ansible/blob/master/ansible/group_vars/all.yml#L16
'node_config_directory' is the directory that stores the config files
on the destination node.
Those usages MUST be changed to 'node_custom_config'.
Furthermore, this unifies the behaviour across all roles.
Closes-Bug: #1682445
Change-Id: Id8d8a1268c79befac8938c1e0396267314b40301
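A simplified sketch of the two variables' roles (file and service
names illustrative): operator overrides are read from
'node_custom_config', while 'node_config_directory' is only ever a
destination on the target node:

    - name: Copying over glance-api.conf
      template:
        src: "{{ node_custom_config }}/glance/glance-api.conf"
        dest: "{{ node_config_directory }}/glance-api/glance-api.conf"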
Nova service-list returns an empty list of registered services even
when they exist in the database. Because of this, simple_cell_setup
is not executed and the deploy gets stuck waiting for nova-compute.
This change temporarily checks the database for existing nova
services instead of using 'openstack compute service list'.
This change will need to be reverted once the command is fixed.
Change-Id: Ic508eb3ff03b5f233186353fc7697305cc792d14
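One way such a temporary check could look (container name,
credentials and query are illustrative assumptions, not the exact
task):

    - name: Checking for existing nova services in the database
      command: >
        docker exec kolla_toolbox mysql -h {{ database_address }}
        -u {{ nova_database_user }} -p{{ nova_database_password }}
        nova -e 'SELECT COUNT(*) FROM services;'
      register: existing_nova_services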
Add support for basic multiple regions, that is to say, many
OpenStack deployments with a shared Keystone (same users) and
Horizon. The shared Keystone and Horizon are deployed into one
region, for instance RegionOne. Services of other regions have access
to this Keystone. This support assumes that the operator knows the
names of all OpenStack regions in advance, and expects as many Kolla
runs as there are regions.
The new variable, multiple_regions_names, contains the names of the
regions. It is needed by the region that includes Keystone and
Horizon. In register.yml, it specifies that as many Keystone
endpoints are created as there are regions, so that services of other
regions can connect to Keystone. In local_settings.j2, it changes the
rendering to support multiple regions in Horizon. The
multi-regions.rst explains how to perform a multiple regions
deployment.
Implements: blueprint multi-kolla-config
Change-Id: Icab2aebfc4de0e3bc609950956e0af397705f403
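For example, the globals.yml of the region hosting Keystone and
Horizon might carry something like (region names illustrative):

    multiple_regions_names:
      - "RegionOne"
      - "RegionTwo"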
The Nova external Ceph task has a typo which breaks deployment.
There is no 'state' module; the module used should be 'stat'.
Change-Id: Ie8a0b30f44fc35a597334383a85353d324e765cd
Closes-Bug: #1671526
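The corrected task, roughly (the keyring path is illustrative):

    - name: Check the Ceph keyring file
      stat:  # was mistyped as the nonexistent 'state' module
        path: "{{ node_custom_config }}/nova/ceph.client.nova.keyring"
      register: nova_cephx_keyring_file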
Add a new subcommand 'check' to kolla-ansible, used to run the
smoke/sanity checks.
Add stub files to all services that don't currently have checks.
Change-Id: I9f661c5fc51fd5b9b266f23f6c524884613dee48
Partially-implements: blueprint sanity-check-container
cell0 is already set up in Ocata, so it is no longer necessary to
create it when upgrading to Pike.
All nova DBs (nova_api, nova and nova_cell0) are already created in
Ocata too; only bootstrap_service is needed while upgrading.
Change-Id: Idc4941334faf91feee868472155a8c8ea0eba436
Booting from volume now requires cinder's Ceph client secret. Move
cinder before nova in site.yml, because nova now depends on the
cinder Ceph client key.
Change-Id: I01c9ed80843d98305b8963894c4917c21a35d3ac
Closes-Bug: #1670676
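A sketch of the new ordering in site.yml (play layout simplified):

    - name: Apply role cinder
      hosts: cinder
      roles:
        - { role: cinder, tags: cinder, when: enable_cinder | bool }

    - name: Apply role nova
      hosts: nova
      roles:
        - { role: nova, tags: nova, when: enable_nova | bool }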
* Move the tasks to the role
* Skip the task when the container is already running
Change-Id: I1990d4dd2a02efa2b3766329000aa23419e0ff17
Closes-Bug: #1670286
In an ironic environment deployment, the compute node info will be
empty until an ironic node is created. There are also cases where the
user just wants to deploy without any nova-compute.
Also enable the automatic host discovery feature. This is useful for
small environments.
Closes-Bug: #1666031
Change-Id: I6f3d1c3668452a404875aa5621ee99b2b41e28f0
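Automatic host discovery is a nova scheduler option; enabling it
amounts to something like the following in nova.conf (the interval
value is an example):

    [scheduler]
    discover_hosts_in_cells_interval = 60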
A SIGHUP signal is necessary to reload the upgrade levels on all
services communicating over RPC.
Otherwise the RPC version will remain pinned at Newton.
Change-Id: I4b02d933699aa9b013dfbc65d1e57d53db49bcee
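A sketch of how such a reload can be sent (the container name is one
example):

    - name: Sending SIGHUP to nova services to reload RPC version pins
      command: docker kill --signal HUP nova_compute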
Usernames can be configured with variables in
configuration files, but user creation is hardcoded.
Change-Id: I057cfb921d776217db66f59226dcfa79f3eb7368
Closes-Bug: #1661587
At present, the ceph.conf of the cinder/nova/glance/gnocchi related
containers is not merged with the user's custom config.
In some conditions we need to add extra parameters to the custom
ceph.conf, for example rbd_default_features = 1.
So it is necessary to use merge_configs instead of template.
Closes-Bug: #1656162
Change-Id: I824e0c68af270b85c52382ae35987213266fc6f6
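A simplified sketch of the resulting task for nova (paths follow the
usual kolla-ansible layout):

    - name: Copying over ceph.conf for nova
      merge_configs:
        sources:
          - "{{ role_path }}/templates/ceph.conf.j2"
          - "{{ node_custom_config }}/ceph.conf"
          - "{{ node_custom_config }}/nova/ceph.conf"
        dest: "{{ node_config_directory }}/nova-compute/ceph.conf"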
Check the enable_* variables first, then check whether the inventory
host is in the group; this helps to avoid configuration errors.
Change-Id: Icdb1f50e5c911203b92ac431723620756b15f3c6
Closes-Bug: #1648376
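For example, ordering the conditions like this lets the group check
be skipped entirely on hosts where the service is disabled (service
and group names illustrative):

    - name: Checking free port for Nova API
      wait_for:
        host: "{{ api_interface_address }}"  # placeholder
        port: 8774
        timeout: 1
        state: stopped
      when:
        - enable_nova | bool
        - inventory_hostname in groups['nova-api']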
When only enable_cinder_backend_nfs is used, there is no need to
mount /var/lib/nova/mnt, and we then do not need to care whether this
folder is shareable.
Change-Id: I53f4c2c9ec25775cdb02a3256fd3a878723d15f6
Closes-Bug: #1644602
Currently, policy.json is put in
"{{ node_config_directory }}/{{ service_name }}"
on the target nodes.
Relocate policy.json to "{{ node_config_directory }}/{{ item }}",
where item is the corresponding service component config directory.
Currently the policy.json is copied to all services, but this should
be reviewed and the file kept only for the necessary services
(in many cases, only the API service needs it).
Redundant files will be removed in a follow-up patchset.
Change-Id: I0e997dccf4ec438c9c0436db71ec2fd06650f50d
Closes-Bug: #1639686
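Sketched for nova, the relocation looks roughly like this (the
service list is abbreviated):

    - name: Copying over existing policy.json
      template:
        src: "{{ node_custom_config }}/nova/policy.json"
        dest: "{{ node_config_directory }}/{{ item }}/policy.json"
      with_items:
        - "nova-api"
        - "nova-compute"
        - "nova-scheduler"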
Allow cinder-volume, nova-compute and nova-libvirtd to be configured to
use NFS. In order to mount and work with NFS shares, several containers
needed the NFS packages installed during build time.
One somewhat significant change is the addition of an explicit bind
volume for nova-compute that has shared mounts enabled.
According to docker-run(1), the shared mount propagation flag can only
be specified for bind mounted Docker volumes and not named volumes.
In an NFS setup, cinder-volume mounts the NFS shares so that it can
create and manage the Cinder volumes. When a new instance is created
with a Cinder volume or a Cinder volume is attached to an existing
instance, nova-compute mounts the Cinder volume from the NFS share for
nova-libvirtd. In order for nova-libvirtd to then see those Cinder
volumes the shared mounts flag must be enabled for the Docker volume.
Remove the rpcbind container as it is only necessary for operators who
are using NFSv3 or lower. There is no known need for this currently;
however, the container can be added back in the future should an
operator require it.
Co-authored-by: Ryan Hallisey <rhallise@redhat.com>
Co-authored-by: Andrew Widdersheim <amwiddersheim@gmail.com>
Change-Id: Iad77c05bce8876bdcc69b7ec22edd50e3bf48b9f
Closes-Bug: #1530515
Partially implements: blueprint nfs-support-in-cinder
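In docker-run terms, the key piece is the propagation flag on a bind
mount, roughly as follows (a simplified excerpt of the nova-compute
volume list):

    volumes:
      - "{{ node_config_directory }}/nova-compute/:{{ container_config_directory }}/:ro"
      - "/var/lib/nova/mnt:/var/lib/nova/mnt:shared"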