Mark Goddard 1862e24bb5 Add variables for API VIP address and FQDN
Kayobe currently supports the definition of various networks - public,
internal, tunnel, etc. These typically map to a VLAN or flat network
with an IP subnet. When a cloud exceeds the size of a single
VLAN/subnet, this approach no longer works.

One way to resolve this is to have multiple subnets that map to a single
logical network, and provide routing between them. This is a similar
concept to neutron's routed networks, but for the control plane.

An issue arising from this is that if different hosts can have different
network definitions for the internal and public networks, it is no
longer trivial to use a network attribute [1] to specify the VIP address
and FQDN. Furthermore, the play that generates Kolla Ansible's
globals.yml (which contains the VIP and FQDN variables) runs on
localhost, and localhost does not necessarily have the internal and
public networks defined.

To resolve this, we add global variables for the VIPs and FQDNs. The
default values are unchanged, except when HAProxy is disabled, for which
we no longer provide a useful default. That configuration is rarely used
in practice, and the need to reference the IP address of a host in the
network group makes a safe default difficult to define.

[1] https://docs.openstack.org/kayobe/latest/configuration/reference/network.html#global-network-configuration

Story: 2008180
Task: 40937

Change-Id: I2c428ffc2b285aee03d8f59ae7cd3fb7230ce4ae
2020-10-05 19:59:53 +00:00

Kayobe

Kayobe enables deployment of containerised OpenStack to bare metal.

Containers offer a compelling solution for isolating OpenStack services, but running the control plane on an orchestrator such as Kubernetes or Docker Swarm adds significant complexity and operational overheads.

The hosts in an OpenStack control plane must somehow be provisioned, but deploying a secondary OpenStack cloud to do this seems like overkill.

Kayobe stands on the shoulders of giants:

  • OpenStack bifrost discovers and provisions the cloud
  • OpenStack kolla builds container images for OpenStack services
  • OpenStack kolla-ansible delivers painless deployment and upgrade of containerised OpenStack services

To this solid base, kayobe adds:

  • Configuration of cloud host OS & flexible networking
  • Management of physical network devices
  • A friendly openstack-like CLI

All this and more, automated from top to bottom using Ansible.
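
As an illustration, a minimal deployment driven from the kayobe CLI might look
roughly like the following. This is a sketch of the documented workflow, not a
complete runbook; the exact command sequence depends on the environment.

# Prepare the Ansible control host.
kayobe control host bootstrap

# Provision and configure the seed VM, then deploy its services (including bifrost).
kayobe seed vm provision
kayobe seed host configure
kayobe seed service deploy

# Discover, provision and configure the overcloud hosts, then deploy
# containerised OpenStack services with kolla-ansible.
kayobe overcloud inventory discover
kayobe overcloud provision
kayobe overcloud host configure
kayobe overcloud service deploy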

Features

  • Heavily automated using Ansible
  • kayobe Command Line Interface (CLI) for cloud operators
  • Deployment of a seed VM used to manage the OpenStack control plane
  • Configuration of physical network infrastructure
  • Discovery, introspection and provisioning of control plane hardware using OpenStack bifrost
  • Deployment of an OpenStack control plane using OpenStack kolla-ansible
  • Discovery, introspection and provisioning of bare metal compute hosts using OpenStack ironic and ironic inspector
  • Virtualised compute using OpenStack nova
  • Containerised workloads on bare metal using OpenStack magnum
  • Big data on bare metal using OpenStack sahara
  • Control plane and workload monitoring and log aggregation using OpenStack monasca