7003817b69
As a failsafe, the migration code can create a backup of the controllers to use in case the migration fails and leaves the environment in an unusable state. The revert plan has two stages:

1. Backup stage: included in the current ovn-migration.yml. It can be configured using the env variable CREATE_BACKUP (True by default). This stage runs the new ansible role, recovery-backup, which stores the backup in `/ctl_plane_backup` on the host that BACKUP_MIGRATION_IP belongs to (this can be changed by modifying the env var). To restore the controllers, boot them using the ISO created by ReaR (stored in /ctl_plane_backup) and perform `automatic recover`.

2. Revert stage: this stage has its own ansible playbook (revert.yml). This playbook cleans the environment of all the OVN resources that could have been created (breaking the data plane connectivity), leaving the environment in a state where an overcloud deploy with the OVS templates can be run.

Note: If the user creates new resources after running the backup stage and then performs the recovery of the controllers, those resources will be lost.

Change-Id: I7093f6a5f282b06fb2267cf2c88c533c1eae685d
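The two stages described above can be sketched as a shell session. This is a hedged illustration, not the change's own documentation: the IP address and the inventory name `hosts` are assumptions, and the playbook invocations are shown as comments only.

```shell
# Backup stage: a ReaR backup of the controllers is made when
# CREATE_BACKUP is True (the default, per the change description).
export CREATE_BACKUP=True
# Illustrative address: the host that will store the ISO under /ctl_plane_backup.
export BACKUP_MIGRATION_IP=192.168.24.1
echo "backup enabled: $CREATE_BACKUP (stored on $BACKUP_MIGRATION_IP)"

# Not executed here -- the actual runs would look like:
#   ansible-playbook ovn-migration.yml -i hosts   # migration, with backup stage
#   ... on failure: boot controllers from the ReaR ISO, run "automatic recover" ...
#   ansible-playbook revert.yml -i hosts          # clean up leftover OVN resources
```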
Files changed:
- infrared/tripleo-ovn-migration
- tripleo_environment
- hosts.sample
- migrate-to-ovn.yml
- README.rst
Migration from ML2/OVS to ML2/OVN
Proof-of-concept ansible script for migrating an OpenStack deployment that uses ML2/OVS to OVN.
If you have a tripleo ML2/OVS deployment, please see the tripleo_environment folder.
Prerequisites:
- Ansible 2.2 or greater.
- ML2/OVS must be using the OVS firewall driver.
To use:
Create an ansible inventory with the expected set of groups and variables as indicated by the hosts-sample file.
Run the playbook:
$ ansible-playbook migrate-to-ovn.yml -i hosts
Testing Status:
- Tested on an RDO cloud on CentOS 7.3 based on Ocata.
- The cloud had 3 controller nodes and 6 compute nodes.
- Observed network downtime was 10 seconds.
- The "--forks 10" option was used with ansible-playbook to ensure that commands could be run across the entire environment in parallel.
MTU:
- If migrating an ML2/OVS deployment using VXLAN tenant networks to an OVN deployment using Geneve for tenant networks, there is an unresolved issue around MTU. The VXLAN overhead is 30 bytes, while OVN with Geneve has an overhead of 38 bytes. The tenant networks' MTU must be adjusted for OVN, and all VMs must then receive the updated MTU value through DHCP before the migration can take place. For testing purposes, we simply hacked the Neutron code to report a VXLAN overhead of 38 bytes instead of 30, bypassing the issue at migration time.
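The arithmetic behind that caveat, using the overhead figures from the note above (the 1500-byte physical MTU is assumed for illustration; your underlay MTU may differ):

```shell
PHYSICAL_MTU=1500      # assumed underlay MTU, for illustration only
VXLAN_OVERHEAD=30      # bytes, per the note above
GENEVE_OVERHEAD=38     # bytes, per the note above

VXLAN_TENANT_MTU=$((PHYSICAL_MTU - VXLAN_OVERHEAD))
GENEVE_TENANT_MTU=$((PHYSICAL_MTU - GENEVE_OVERHEAD))
echo "VXLAN tenant MTU:  $VXLAN_TENANT_MTU"
echo "Geneve tenant MTU: $GENEVE_TENANT_MTU"
echo "VMs must shrink their MTU by $((VXLAN_TENANT_MTU - GENEVE_TENANT_MTU)) bytes"
```

The 8-byte difference is why every VM has to pick up the lower MTU over DHCP before cutover; until then, Geneve-encapsulated traffic from those VMs can exceed the physical MTU.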