This patch removes legacy support for Ubuntu 14.04/16.04/18.04
on the deploy node and moves the default deploy to
Xena on Ubuntu 20.04 LTS. The root disk size has been bumped to
support upgrades (8 GB -> 12 GB).
Change-Id: I81a13464b9daa90090cb380e2b0d89e5eb8fe89a
This patch implements support for deploying an MNAIO with
Open vSwitch and DVR.
Change-Id: I0fb03e2eb0ead198c64019eb0cdd06451e1e7c94
Implements: openvswitch+dvr
This commit adds the ability for users to enable Designate (DNSaaS)
in an MNAIO build. This service is disabled by default.
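A minimal sketch of the opt-in, assuming the toggle follows the
MNAIO's osa_enable_* naming convention (the exact variable name is
not confirmed here):

    # assumed toggle name following the osa_enable_* convention
    osa_enable_dnsaas: true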
Change-Id: I36e10922b6fe8e5cba3cc929e2b91b59507c210d
This patch adds support for deploying the ML2/Open Virtual Network (OVN)
plugin for Neutron to an MNAIO deployment. A new var,
osa_enable_networking_ovn, can be set to lay down the appropriate bits.
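For example, a minimal user-variable override to switch it on might
look like:

    # enable the ML2/OVN Neutron plugin for the MNAIO build
    osa_enable_networking_ovn: true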
Change-Id: Ib5bd4e0c20be62ddbf0bff13c91d1918907bf230
This commit enables overriding the default cinder/swift storage
options to use a Ceph-backed backend for multi-node all-in-one
deployments.
We also correct the README and build.sh script to show/use the
current defaults correctly for VM_DISK_SIZE and INFRA_VM_SERVER_RAM.
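As a sketch, the Ceph override might be toggled like so (the
variable name is an assumption based on the osa_enable_* pattern,
not the verified one):

    # hypothetical toggle enabling the Ceph backend for cinder/swift
    osa_enable_ceph_storage: true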
Change-Id: I9e1f1b09d1bcf224f4afa765c585baf28e6cafa8
In https://review.openstack.org/611582 we removed
the legacy group, as it had been deprecated since
Newton. However, it appears to still be used by some
downstream tests, so we add it back, but only
populate it when the associated services are
enabled.
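As a rough sketch, the guard in the openstack_user_config template
could look like this (the exact condition, membership, and IP are
illustrative):

    {% if osa_enable_orchestration | bool or osa_enable_object_storage | bool %}
    os-infra_hosts:
      infra1:
        ip: 10.0.236.100
    {% endif %}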
Change-Id: I477a46d606d75d44a1ecd5bcfcb29c8308c65245
1. If osa_enable_block_storage=false, then there should
be no cinder hosts deployed, nor any presence of the
vars related to it in openstack_user_config (see the
sketch after this list).
2. If osa_enable_compute=false, then there should be no
compute hosts deployed, nor any presence of the vars
related to it in openstack_user_config.
3. If osa_enable_object_storage=false, then there should
be no swift hosts deployed, nor any presence of the vars
related to it in openstack_user_config.
Change-Id: Id6858d277c80095024af5d8e04dfc97cc3e3b253
The legacy group 'os-infra_hosts' is not actually part
of the infrastructure - it includes all the openstack
infrastructure groups (keystone, nova, neutron, glance,
heat, swift). Its use is unnecessary because every group
it includes is already represented by an individual
option.
Having this group inside the osa_enable_infra conditional
also means that disabling swift/heat does not actually
work, and you end up with a broken deployment.
Change-Id: Icd80fd96aad713372b1fe21752799d56ada3dac4
This commit deploys the OpenStack Octavia load balancing
service when the corresponding option is enabled. Octavia
replaces the legacy Neutron LBaaS service.
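Hypothetically, enabling it could look like the following (the
toggle name is assumed from the osa_enable_* convention, not
confirmed):

    # assumed toggle name for the Octavia service
    osa_enable_load_balancer: true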
Change-Id: Ib820ec3c4a7f6c9116608140b59332d03cf4c451
In order to get flat networking working correctly we need to create
a new veth pair that neutron can use on the host machines. Neutron
takes one end of this pair for the brq bridge while the other end
remains in br-flat, allowing communication back to the VM. This also
expands the DHCP range for the veth pairs and changes the
host_bind_override to use the new veth.
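A rough Ansible sketch of the plumbing described above; the
interface names are illustrative, and the real change may implement
this in the network interface templates instead:

    - name: Create the veth pair used for flat networking
      command: ip link add veth-flat type veth peer name eth12
    - name: Leave one end of the pair enslaved to br-flat
      command: brctl addif br-flat veth-flat
    - name: Bring both ends of the pair up
      command: ip link set {{ item }} up
      with_items:
        - veth-flat
        - eth12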
Change-Id: I9cd161599ba659890142143d4718420d680d7dca
It looks like host_bind_override isn't being set for the flat network.
This causes the neutron linuxbridge agent to error out when trying to
attach to br-flat instead of eth2. Devices attached to the GATEWAY_NET
network are therefore unable to communicate.
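For reference, a sketch of the relevant provider network entry in
openstack_user_config; br-flat and eth2 come from this change, while
the remaining values are illustrative:

    - network:
        container_bridge: "br-flat"
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "eth2"
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent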
Change-Id: Ia8de61011677ec1a7d9683fdff3c3d5848e85c2e
Closes-Bug: 1754097
This change updates the preseed files and the default
openstack-user-config file so that deployers can use and test
nspawn-type containers using the built-in automation.
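A minimal sketch of the opt-in, assuming the standard OSA
container_tech switch is what the automation keys off:

    # select nspawn instead of the default lxc container technology
    container_tech: nspawn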
Change-Id: I2ec3bd284540fa9f79490a350f016ca594fb5f98
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
Deployers sometimes need the ability to opt out of specific deployment
groups. While a deployer can modify or extend the configuration
groups using conf.d files, until now they didn't have the ability
to remove groups when testing different scenarios. This change
simply adds conditionals to the openstack_user_config template, giving
users the ability to tailor the default user configuration options to
their needs.
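For instance, a deployer can still extend groups through a conf.d
file (the path and host entry below are illustrative), while the new
conditionals handle removal:

    # /etc/openstack_deploy/conf.d/extra-compute.yml (illustrative)
    compute_hosts:
      compute3:
        ip: 10.0.236.103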
Change-Id: I100ddf09faa072a999b72c4e46a1d3de6480d7e6
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
The previous group build-outs for the OSA user config were statically
defined. This change makes all of them dynamic, which gives users the
ability to add or remove hosts from the basic inventory as needed.
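A minimal sketch of the dynamic form (the group and variable names
may differ from the actual template):

    shared-infra_hosts:
    {% for host in groups['infra_hosts'] %}
      {{ host }}:
        ip: "{{ hostvars[host]['ansible_host'] }}"
    {% endfor %}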
Change-Id: I1eae7de6d62435e8222ec80b05b6c0a060c5bb69
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
Currently, deploying a multi-node AIO results in
internal_lb_vip_address and external_lb_vip_address being set to the
same IP (10.0.236.150), which results in services having issues
communicating with the load balancer.
This commit simply sets external_lb_vip_address to the load
balancer's DHCP address, which is fixed at 10.0.2.150.
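For reference, a minimal sketch of the resulting global_overrides
entries, using the addresses above:

    global_overrides:
      internal_lb_vip_address: 10.0.236.150
      external_lb_vip_address: 10.0.2.150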
Change-Id: I6faabd641c0559a0381e559f364cc76c49293014
This change allows the MNAIO to really be used as a stand-alone kick
system, which has the potential to be developed into a stand-alone
project. At the very least this change improves playbook performance
by scoping variables.
The inventory has been converted into a typical Ansible inventory, and
the "servers" used in the MNAIO are now simply host_vars
which trigger specific VM builds when instructed to do so. This
gives the MNAIO the ability to serve as a stand-alone kick system which
could be used for physical hosts as well as MNAIO testing, all through
the same basic set of playbooks. Should a deployer want to use this with
physical servers, they'd need to do nothing more than define their basic
inventory and the required pieces of infrastructure needed to
PXE boot their machines.
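As a purely hypothetical sketch of one such host definition (every
key name here is an assumption about the host_vars schema, not the
verified one):

    # host_vars/compute1.yml (illustrative)
    server_hostname: compute1
    server_vm: true                    # build this host as a local VM
    server_mac_address: "52:54:00:bd:80:01"
    server_fixed_addr: 10.0.236.120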
Change-Id: I6c47e02ecfbe8ee7533e77b11041785db485a1a9
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
Sadly the log node does not have enough RAM to run a full Ansible run.
Ansible 2.x requires more RAM than one would expect, especially when
the inventory gets large. This change moves the deploy node to infra1,
as it will already have the RAM needed to run the playbooks.
Additionally, the container storage for infra nodes was too small,
which forced builds into error. The default storage for VMs has been
set to 90 GiB each, and the preseed will create a logical volume for
VMs mounted at /var/lib/lxc.
While the limited RAM works well for the VMs and within a running
deployment of OSA, ansible-playbook is subject to crashes like so:
An exception occurred during task execution. To see the full traceback,
use -vvv. The error was: OSError: [Errno 12] Cannot allocate memory
fatal: [infra1_cinder_api_container-b38b47ea]: FAILED! =>
{"failed": true, "msg": "Unexpected failure during module execution.", "stdout": ""}
So infra nodes have had their memory constraint raised to 8 GiB.
Change-Id: I7175ea92f663dfef5966532cfc0b4beaadb9eb03
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
This reduces resource consumption by removing the deploy node and
using the log node instead. This also raises the RAM allocation for
the infra hosts, which will improve the deployment experience by
ensuring we don't run out of memory.
Change-Id: Id38ff386669308ac3fd1e539ae37c969f00353b8
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>
The original MNAIO was built using a lot of bash and was tailored
specifically to Ubuntu 14.04. The new MNAIO was built using a mix of
bash and Ansible and was tailored specifically to Ubuntu 16.04. This
patch takes the two code bases, combines the best of each approach,
and wraps it all up into a single code path written entirely as
Ansible playbooks with basic variables.
While the underlying system has changed, the bash environment
variable syntax for overrides remains the same. This allows users to
continue with what has become their normal workflow while leveraging
the new structure and capabilities.
High-level overview:
* The general performance of the VMs running within the MNAIO will now
  be a lot better. Previously the VMs were built within QCOW2 images;
  while this was flexible and portable, it was slower. The new
  capabilities use RAW logical volumes and native IO.
* New repo management starts with preseeds and allows the user to pin
  to specific repositories without having to worry about flipping them
  post build.
* CPU overhead will be much lower. The old VM system used an
  unreasonable number of processors per VM, which directly translated
  to sockets. The new system uses cores on a single socket,
  allowing for generally better VM performance with much less
  overhead and resource contention on the host.
* Memory consumption has been greatly reduced. Each VM now follows
  the memory restrictions we'd find in the gate, as a maximum.
  Most of the VMs use 1-2 GiB of RAM, which should be more than
  enough for our purposes.
Overall the deployment process is simpler and more flexible, and will
work on both trusty and xenial out of the box, with the hope of
bringing CentOS 7 and SUSE into the fold at some point in the future.
Change-Id: Idc8924452c481b08fd3b9362efa32d10d1b8f707
Signed-off-by: Kevin Carter <kevin.carter@rackspace.com>