Migrate volumes

The Havana release of OpenStack introduces the ability to migrate volumes between backends. Migrating a volume transparently moves its data from the volume's current backend to a new one. This is an administrator function, and can be used for storage evacuation (for maintenance or decommissioning) or for manual optimization (for example, of performance, reliability, or cost).

There are three possible flows for a migration:

1. If the storage can migrate the volume on its own, it is given the opportunity to do so. This allows the Block Storage driver to enable optimizations that the storage may be able to perform. If the backend is not able to perform the migration, the Block Storage service uses one of the two generic flows below.

2. If the volume is not attached, the Block Storage service creates a new volume and copies the data from the original volume to the new one. Note that while most backends support this function, not all do. See the driver documentation in the OpenStack Configuration Reference for more details.

3. If the volume is attached to a VM instance, the Block Storage service creates a new volume and calls Compute to copy the data from the original volume to the new one. Currently this is supported only by the Compute libvirt driver.

As an example, we will show a scenario with two LVM backends and migrate an attached volume from one to the other, using the third migration flow.

First, list the available backends:

$ cinder-manage host list
server1@lvmstorage-1 zone1
server2@lvmstorage-2 zone1

Next, as the admin user, check the current status of the volume (replace the example ID with your own):

$ cinder show 6088f80a-f116-4331-ad48-9afb0dfb196c
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                [...]                 |
|       availability_zone        |                zone1                 |
|            bootable            |                False                 |
|           created_at           |      2013-09-01T14:53:22.000000      |
|      display_description       |                 test                 |
|          display_name          |                 test                 |
|               id               | 6088f80a-f116-4331-ad48-9afb0dfb196c |
|            metadata            |                  {}                  |
|     os-vol-host-attr:host      |         server1@lvmstorage-1         |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id |                 None                 |
|  os-vol-tenant-attr:tenant_id  |   6bdd8f41203e4149b5d559769307365e   |
|              size              |                  2                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |                in-use                |
|          volume_type           |                 None                 |
+--------------------------------+--------------------------------------+

Of special note are the following attributes:

os-vol-host-attr:host - the volume's current backend.
os-vol-mig-status-attr:migstat - the status of this volume's migration (None means that no migration is currently in progress).
os-vol-mig-status-attr:name_id - the volume ID that this volume's name on the backend is based on. Before a volume is ever migrated, its name on the backend storage may be based on the volume's ID (see the volume_name_template configuration parameter). For example, if volume_name_template is kept at the default value (volume-%s), the first LVM backend has a logical volume named volume-6088f80a-f116-4331-ad48-9afb0dfb196c. When a migration creates a new volume and copies the data over, the volume keeps the original volume's ID but takes the new volume's name on the backend. This is exposed by the name_id attribute.

Now, migrate this volume to the second LVM backend:

$ cinder migrate 6088f80a-f116-4331-ad48-9afb0dfb196c server2@lvmstorage-2

Use the cinder show command to see the status of the migration. While migrating, the migstat attribute shows states such as migrating or completing. On error, migstat is set to None and the host attribute shows the original host.
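The status check can also be scripted while waiting for the migration to finish. The following is a minimal sketch, not part of the cinder client itself: the parse_migstat helper is an illustrative name, and it assumes the tabular `cinder show` output format shown above.

```shell
# Extract the migstat value from `cinder show` table output on stdin.
# Assumes the pipe-delimited table layout shown above.
parse_migstat() {
    awk -F'|' '/os-vol-mig-status-attr:migstat/ { gsub(/ /, "", $3); print $3 }'
}

# Example polling loop (requires the cinder client and admin credentials;
# the volume ID is the example ID from above):
# while true; do
#     migstat=$(cinder show 6088f80a-f116-4331-ad48-9afb0dfb196c | parse_migstat)
#     case "$migstat" in
#         migrating|completing) echo "still migrating ($migstat)"; sleep 10 ;;
#         *) echo "finished (migstat=$migstat)"; break ;;
#     esac
# done
```

A migstat of None after polling ends means the migration either succeeded or failed; compare the host attribute against the requested destination to tell which.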
On success, in our example, the output looks like:

+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
|          attachments           |                [...]                 |
|       availability_zone        |                zone1                 |
|            bootable            |                False                 |
|           created_at           |      2013-09-01T14:53:22.000000      |
|      display_description       |                 test                 |
|          display_name          |                 test                 |
|               id               | 6088f80a-f116-4331-ad48-9afb0dfb196c |
|            metadata            |                  {}                  |
|     os-vol-host-attr:host      |         server2@lvmstorage-2         |
| os-vol-mig-status-attr:migstat |                 None                 |
| os-vol-mig-status-attr:name_id | 133d1f56-9ffc-4f57-8798-d5217d851862 |
|  os-vol-tenant-attr:tenant_id  |   6bdd8f41203e4149b5d559769307365e   |
|              size              |                  2                   |
|          snapshot_id           |                 None                 |
|          source_volid          |                 None                 |
|             status             |                in-use                |
|          volume_type           |                 None                 |
+--------------------------------+--------------------------------------+

Note that migstat is None, host is the new host, and name_id holds the ID of the volume created by the migration. If you look at the second LVM backend, you will find the logical volume volume-133d1f56-9ffc-4f57-8798-d5217d851862.

The migration is not visible to non-admin users (for example, through the volume's status). However, some operations are not allowed while a migration is taking place, such as attaching or detaching a volume and deleting a volume. If a user performs such an action during a migration, an error is returned.

Migrating volumes that have snapshots is currently not allowed.
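The logical-volume name mentioned above can be derived without inspecting the backend: the name follows volume_name_template applied to name_id when it is set, and to the volume's own ID otherwise. A minimal sketch of that derivation, assuming the default template volume-%s; the backend_volume_name helper is an illustrative name, not a cinder command.

```shell
# Compute the expected volume name on the backend, following the default
# volume_name_template (volume-%s). Pass name_id if it is set, otherwise
# the volume's own ID.
backend_volume_name() {
    printf 'volume-%s\n' "$1"
}

# Before migration (name_id is None, so the name uses the volume's ID):
backend_volume_name 6088f80a-f116-4331-ad48-9afb0dfb196c
# -> volume-6088f80a-f116-4331-ad48-9afb0dfb196c

# After migration (the name is based on name_id):
backend_volume_name 133d1f56-9ffc-4f57-8798-d5217d851862
# -> volume-133d1f56-9ffc-4f57-8798-d5217d851862
```

A deployment that overrides volume_name_template would substitute its own pattern for the printf format string.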