MNAIO: Switch to using file-backed VM's only

The MNAIO tooling is a test system, and with that target
we can afford to be opinionated in the implementation
rather than trying to cater to every possibility.

Now that the file-backed VM implementation has matured,
we can switch to using it exclusively. This cuts down on
code complexity and lets us mature the implementation
further without having to maintain two disk-backing
options.

Change-Id: Ibe04b5676a392301cd79a5d290b77df4c7d9f79a
Jesse Pretorius 2018-10-09 20:42:07 +01:00
parent 06fca4a2f6
commit aee8b6d910
5 changed files with 68 additions and 242 deletions

View File

@@ -114,7 +114,6 @@ Set to instruct the preseed what the default network is expected to be:
 Set the VM disk size in gigabytes:
 ``VM_DISK_SIZE="${VM_DISK_SIZE:-252}"``
 Instruct the system do all of the required host setup:
 ``SETUP_HOST=${SETUP_HOST:-true}``
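
The settings in this hunk are environment variables read before ``build.sh``
runs. As a minimal sketch of overriding them (the values shown are
illustrative, not defaults from the patch):

.. code-block:: bash

   # Use a 300 GB VM disk and let the tooling perform the host setup.
   export VM_DISK_SIZE=300
   export SETUP_HOST=true
   ./build.sh
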
@@ -203,20 +202,9 @@ Instruct the system to use a customized iPXE script during boot of VMs:
 Re-kicking VM(s)
 ----------------
 
-Re-kicking a VM is as simple as stopping a VM, delete the logical volume, create
-a new logical volume, start the VM. The VM will come back online, pxe boot, and
-install the base OS.
-
-.. code-block:: bash
-
-   virsh destroy "${VM_NAME}"
-   lvremove "/dev/mapper/vg01--${VM_NAME}"
-   lvcreate -L 60G vg01 -n "${VM_NAME}"
-   virsh start "${VM_NAME}"
-
-To rekick all VMs, simply re-execute the ``deploy-vms.yml`` playbook and it will
-do it automatically.
+To re-kick all VMs, simply re-execute the ``deploy-vms.yml`` playbook and it
+will do it automatically. The ansible ``--limit`` parameter may be used to
+selectively re-kick a specific VM.
 
 .. code-block:: bash
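
The new text mentions the ansible ``--limit`` parameter without showing it in
use. A minimal sketch, assuming a hypothetical inventory host name of
``compute1``:

.. code-block:: bash

   # Re-kick a single VM by limiting the play to one inventory host.
   # "compute1" is illustrative; use any host defined in playbooks/inventory.
   ansible-playbook -i playbooks/inventory playbooks/deploy-vms.yml --limit compute1
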
@@ -267,47 +255,37 @@ command or the following bash loop to restore everything to a known point.
      virsh snapshot-revert --snapshotname $instance-kilo-snap --running $instance
    done
 
-Using a file-based backing store with thin-provisioned VM's
------------------------------------------------------------
+Saving VM images for re-use on another host
+-------------------------------------------
 
-If you wish to use a file-based backing store (instead of the default LVM-based
-backing store) for the VM's, then set the following option before executing
-``build.sh``.
-
-.. code-block:: bash
-
-   export MNAIO_ANSIBLE_PARAMETERS="-e default_vm_disk_mode=file"
-   ./build.sh
-
-If you wish to save the current file-based images in order to implement a
-thin-provisioned set of VM's which can be saved and re-used, then use the
-``save-vms.yml`` playbook. This will stop the VM's and rename the files to
-``*-base.img``. Re-executing the ``deploy-vms.yml`` playbook afterwards will
-rebuild the VMs from those images.
+If you wish to save the current images in order to implement a thin-provisioned
+set of VM's which can be saved and re-used, then use the ``save-vms.yml``
+playbook. This will stop the VM's and rename the files to ``*-base.img``.
+Re-executing the ``deploy-vms.yml`` playbook afterwards will rebuild the VMs
+from those images.
 
 .. code-block:: bash
 
    ansible-playbook -i playbooks/inventory playbooks/save-vms.yml
-   ansible-playbook -i playbooks/inventory -e default_vm_disk_mode=file playbooks/deploy-vms.yml
+   ansible-playbook -i playbooks/inventory playbooks/deploy-vms.yml
 
 To disable this default functionality when re-running ``build.sh`` set the
-build not to use the snapshots as follows.
+build not to use the images as follows.
 
 .. code-block:: bash
 
-   export MNAIO_ANSIBLE_PARAMETERS="-e default_vm_disk_mode=file -e vm_use_snapshot=no"
+   export MNAIO_ANSIBLE_PARAMETERS="-e vm_use_snapshot=no"
    ./build.sh
 
-If you have previously saved some file-backed images to remote storage then,
-if they are available via a URL, they can be downloaded and used on a fresh
-host as follows.
+If you have previously saved some images to remote storage then, if they are
+available via a URL, they can be downloaded and used on a fresh host as follows.
 
 .. code-block:: bash
 
    # First prepare the host and get the base services started
    ./bootstrap.sh
    source ansible-env.rc
-   export ANSIBLE_PARAMETERS="-i playbooks/inventory -e default_vm_disk_mode=file"
+   export ANSIBLE_PARAMETERS="-i playbooks/inventory"
    ansible-playbook ${ANSIBLE_PARAMETERS} playbooks/setup-host.yml
    ansible-playbook ${ANSIBLE_PARAMETERS} playbooks/deploy-acng.yml playbooks/deploy-pxe.yml playbooks/deploy-dhcp.yml
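
After running ``save-vms.yml`` as documented above, a quick way to confirm the
images were renamed is to list the pool path. A sketch assuming the default
``/data/images`` pool path used elsewhere in this change:

.. code-block:: bash

   # save-vms.yml stops the VMs and renames their disks to *-base.img;
   # these are the files deploy-vms.yml will reuse.
   ls -lh /data/images/*-base.img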

View File

@@ -41,15 +41,6 @@
       failed_when: false
       with_items: "{{ _virt_list.list_vms }}"
 
-    - name: Delete any LV's related to running VM's
-      lvol:
-        vg: "{{ default_vm_disk_vg }}"
-        lv: "{{ item }}"
-        state: absent
-        force: yes
-      failed_when: false
-      with_items: "{{ _virt_list.list_vms }}"
-
     - name: Delete any disk images related to running VM's
       file:
         path: "{{ _virt_pools.pools.default.path | default('/data/images') }}/{{ item }}.img"
@@ -63,10 +54,6 @@
       failed_when: false
       with_items: "{{ _virt_list.list_vms }}"
 
-    - name: Setup/clean-up file-based disk images
-      when:
-        - default_vm_disk_mode == "file"
-      block:
     - name: Find existing base image files
       find:
         paths: "{{ _virt_pools.pools.default.path | default('/data/images') }}"
@@ -93,17 +80,6 @@
   tags:
     - deploy-vms
   tasks:
-    - name: Create VM LV
-      lvol:
-        vg: "{{ default_vm_disk_vg }}"
-        lv: "{{ server_hostname }}"
-        size: "{{ default_vm_storage }}"
-      when:
-        - server_vm | default(false) | bool
-        - default_vm_disk_mode == "lvm"
-      delegate_to: "{{ item }}"
-      with_items: "{{ groups['vm_hosts'] }}"
-
     - name: Create VM Disk Image
       command: >-
         qemu-img create
@@ -115,7 +91,6 @@
         {{ default_vm_storage }}m
       when:
         - server_vm | default(false) | bool
-        - default_vm_disk_mode == "file"
       delegate_to: "{{ item }}"
       with_items: "{{ groups['vm_hosts'] }}"
@@ -134,7 +109,6 @@
     # ref: https://bugs.launchpad.net/ubuntu/+source/libguestfs/+bug/1615337.
     - name: Prepare file-based disk images
       when:
-        - default_vm_disk_mode == "file"
         - vm_use_snapshot | bool
       block:
         - name: Inject the host ssh key into the VM disk image
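
The "Inject the host ssh key into the VM disk image" task, together with the
libguestfs bug reference above, implies the images are modified offline with
libguestfs tooling. The patch does not show the exact mechanism, so purely as
an illustrative sketch using ``virt-customize`` (an assumption, not the
playbook's confirmed implementation):

.. code-block:: bash

   # Offline key injection into a base image; the image path and key file are
   # illustrative, and the use of virt-customize is an assumption.
   virt-customize -a /data/images/infra1-base.img \
       --ssh-inject root:file:/root/.ssh/id_rsa.pub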

View File

@@ -15,8 +15,6 @@ default_interface: "{{ default_network | default('eth0') }}"
 default_vm_image: "{{ default_image | default('ubuntu-16.04-amd64') }}"
 default_vm_storage: "{{ vm_disk_size | default(92160) }}"
 default_vm_root_disk_size: 8192
-default_vm_disk_mode: lvm
-default_vm_disk_vg: vg01
 default_acng_bind_address: 0.0.0.0
 default_os_families:
   ubuntu-16.04-amd64: debian
@@ -37,7 +35,7 @@ ipxe_kernel_base_url: "http://boot.ipxe.org"
 vm_ssh_timeout: 1500
 
 # Whether to use snapshots (if they are available) for file-backed VM's
-vm_use_snapshot: "{{ default_vm_disk_mode == 'file' }}"
+vm_use_snapshot: yes
 
 # IP address, or domain name of the TFTP server
 tftp_server: "{{ hostvars[groups['pxe_hosts'][0]]['ansible_host'] | default(ansible_host) }}"

View File

@@ -34,15 +34,9 @@
   </pm>
   <devices>
     <emulator>/usr/bin/kvm-spice</emulator>
-{% if default_vm_disk_mode == "lvm" %}
-    <disk type='block' device='disk'>
-      <driver name='qemu' type='raw' cache='none' io='native'/>
-      <source dev='/dev/{{ default_vm_disk_vg }}/{{ server_hostname }}'/>
-{% elif default_vm_disk_mode == "file" %}
     <disk type='file' device='disk'>
       <driver name='qemu' type='qcow2' discard='unmap' cache='none' io='native'/>
       <source file='{{ hostvars[item]['virt_pools'].pools.default.path | default('/data/images') }}/{{ server_hostname }}.img'/>
-{% endif %}
       <target dev='sda' bus='scsi'/>
       <alias name='scsi0-0-0-0'/>
       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
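
Once ``deploy-vms.yml`` has run, the rendered result of this template can be
checked on the host. A sketch, with ``infra1`` as an illustrative VM name:

.. code-block:: bash

   # Confirm the defined domain uses the file-backed qcow2 disk stanza.
   virsh dumpxml infra1 | grep -A 6 "<disk"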

View File

@@ -326,95 +326,6 @@
       when:
         - mnaio_data_disk is undefined
 
-    - name: Get info about existing virt storage pools
-      virt_pool:
-        command: info
-      register: _virt_pools
-
-    - name: If an existing virt pool does not match default_vm_disk_mode, remove it
-      when:
-        - _virt_pools.pools.default is defined
-        - (default_vm_disk_mode == "file" and _virt_pools.pools.default.format is defined) or
-          (default_vm_disk_mode == "lvm" and _virt_pools.pools.default.format is not defined)
-      block:
-        - name: Stop running VMs
-          virt:
-            name: "{{ item }}"
-            command: destroy
-          failed_when: false
-          with_items: "{{ _virt_pools.pools.default.volumes }}"
-
-        - name: Delete VM LVs
-          lvol:
-            vg: "{{ default_vm_disk_vg }}"
-            lv: "{{ item }}"
-            state: absent
-            force: yes
-          failed_when: false
-          with_items: "{{ _virt_pools.pools.default.volumes }}"
-
-        - name: Delete VM Disk Images
-          file:
-            path: "{{ _virt_pools.pools.default.path | default('/data/images') }}/{{ item }}.img"
-            state: absent
-          with_items: "{{ _virt_pools.pools.default.volumes }}"
-
-        - name: Undefine the VMs
-          virt:
-            name: "{{ item }}"
-            command: undefine
-          failed_when: false
-          with_items: "{{ _virt_pools.pools.default.volumes }}"
-
-        - name: Dismount the mount point if default_vm_disk_mode is 'lvm'
-          mount:
-            path: /data
-            state: unmounted
-          when:
-            - default_vm_disk_mode == "lvm"
-
-        - name: Stop the pool
-          virt_pool:
-            command: destroy
-            name: default
-
-        - name: Delete the pool, destroying its contents
-          virt_pool:
-            command: delete
-            name: default
-
-        - name: Undefine the pool
-          virt_pool:
-            command: undefine
-            name: default
-
-        - name: Remove the mount point if default_vm_disk_mode is 'lvm'
-          mount:
-            path: /data
-            state: absent
-          when:
-            - default_vm_disk_mode == "lvm"
-
-        - name: Reload systemd to remove generated unit files for mount
-          systemd:
-            daemon_reload: yes
-          when:
-            - default_vm_disk_mode == "lvm"
-
-        - name: Remove the volume group if default_vm_disk_mode is 'file'
-          lvg:
-            vg: vg01
-            state: absent
-          register: _remove_vg
-          when:
-            - default_vm_disk_mode == "file"
-
-    - name: Remove the existing disk partition
-      parted:
-        device: "/dev/{{ mnaio_data_disk | default(lsblk.stdout) }}"
-        number: 1
-        state: absent
-
     - name: Setup the data disk partition
       parted:
         device: "/dev/{{ mnaio_data_disk | default(lsblk.stdout) }}"
@@ -424,35 +335,6 @@
         state: present
       register: _add_partition
 
-    - name: Prepare the data disk for 'lvm' default_vm_disk_mode
-      when:
-        - default_vm_disk_mode == "lvm"
-      block:
-        - name: Create the volume group
-          lvg:
-            vg: vg01
-            pvs: "/dev/{{ mnaio_data_disk | default(lsblk.stdout) }}1"
-
-        - name: Define the default virt storage pool
-          virt_pool:
-            name: default
-            state: present
-            xml: |
-              <pool type='logical'>
-                <name>default</name>
-                <source>
-                  <name>vg01</name>
-                  <format type='lvm2'/>
-                </source>
-                <target>
-                  <path>/dev/vg01</path>
-                </target>
-              </pool>
-
-    - name: Prepare the data disk for 'file' default_vm_disk_mode
-      when:
-        - default_vm_disk_mode == "file"
-      block:
     - name: Prepare the data disk file system
       filesystem:
         fstype: ext4
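
What remains of the host data-disk preparation is a straightforward
partition-and-format sequence. A rough shell equivalent of those tasks,
assuming a hypothetical data disk of ``/dev/sdb`` (the playbook derives the
device from ``mnaio_data_disk`` or ``lsblk`` output) and the ``/data`` mount
point referenced elsewhere in this change:

.. code-block:: bash

   # Create a single partition, format it ext4, and mount it where the
   # default libvirt storage pool path (/data/images) lives.
   parted --script /dev/sdb mklabel gpt mkpart primary 0% 100%
   mkfs.ext4 /dev/sdb1
   mount /dev/sdb1 /data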