1a8ea0eac4
When rebuilding a volume-backed instance, we preserve sparseness
while copying the new image to the existing volume.
This is problematic because the zero blocks of the new image are
not written to the volume, so data from the old image can persist
there, leading to a data leak scenario.
To prevent this, we use the `-S 0`[1][2] option with the
`qemu-img convert` command to write all the zero bytes into the volume.
In the testing done, this doesn't seem to be a problem with known 'raw'
images, but it is good to handle the case anyway.
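For illustration only (this is not the Cinder code path, and the paths
below are made up), the change amounts to invoking the conversion with
`-S 0`:

import subprocess

# Hypothetical paths; the real source image and target volume device
# come from Glance and the Cinder driver respectively.
src = "/tmp/new-image.qcow2"
dst = "/dev/stack-volumes-lvmdriver-1/volume-xxxx"

# Default behaviour: zero regions of the source are skipped (sparse
# copy), so whatever the old image left in those regions of the
# existing volume remains readable afterwards.
subprocess.check_call(["qemu-img", "convert", "-O", "raw", src, dst])

# With -S 0, qemu-img writes every zero byte as well, overwriting
# the old data on the existing volume.
subprocess.check_call(
    ["qemu-img", "convert", "-O", "raw", "-S", "0", src, dst])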
Following is the testing performed with 3 images:
1. CIRROS QCOW2 TO RAW
======================
Volume size: 1 GiB
Image size (raw): 112 MiB
CREATE VOLUME FROM IMAGE (without -S 0)
LVS (10.94% allocated)
volume-91ea43ef-684c-402f-896e-63e45e5f4fff stack-volumes-lvmdriver-1 Vwi-a-tz-- 1.00g stack-volumes-lvmdriver-1-pool 10.94
REBUILD (with -S 0)
LVS (10.94% allocated)
volume-91ea43ef-684c-402f-896e-63e45e5f4fff stack-volumes-lvmdriver-1 Vwi-aotz-- 1.00g stack-volumes-lvmdriver-1-pool 10.94
Conclusion:
Same space is consumed on the disk with and without preserving sparseness.
2. DEBIAN QCOW2 TO RAW
======================
Volume size: 3 GiB
Image size (raw): 2 GiB
CREATE VOLUME FROM IMAGE (without -S 0)
LVS (66.67% allocated)
volume-edc42b6a-df5d-420e-85d3-b3e52bcb735e stack-volumes-lvmdriver-1 Vwi-a-tz-- 3.00g stack-volumes-lvmdriver-1-pool 66.67
REBUILD (with -S 0)
LVS (66.67% allocated)
volume-edc42b6a-df5d-420e-85d3-b3e52bcb735e stack-volumes-lvmdriver-1 Vwi-aotz-- 3.00g stack-volumes-lvmdriver-1-pool 66.67
Conclusion:
Same space is consumed on the disk with and without preserving sparseness.
3. FEDORA QCOW2 TO RAW
======================
Volume size: 6 GiB
Image size (raw): 5 GiB
CREATE VOLUME FROM IMAGE (without -S 0)
LVS (83.33% allocated)
volume-efa1a227-a30d-4385-867a-db22a3e80ad7 stack-volumes-lvmdriver-1 Vwi-a-tz-- 6.00g stack-volumes-lvmdriver-1-pool 83.33
REBUILD (with -S 0)
LVS (83.33% allocated)
volume-efa1a227-a30d-4385-867a-db22a3e80ad7 stack-volumes-lvmdriver-1 Vwi-aotz-- 6.00g stack-volumes-lvmdriver-1-pool 83.33
Conclusion:
Same space is consumed on the disk with and without preserving sparseness.
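For reference, the allocation figures quoted above are the
data_percent of each thin LV; a check along these lines (the field
names are standard lvs report fields) can be used to reproduce them:

import subprocess

# Print the thin-LV allocation; data_percent is the "allocated"
# figure shown in the LVS lines above.
print(subprocess.check_output(
    ["lvs", "-o", "lv_name,vg_name,lv_attr,lv_size,pool_lv,data_percent"],
    text=True))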
Another test was done to check whether the `-S 0` option actually
works in an OpenStack setup.
Note that we are converting a qcow2 image to qcow2, which won't
happen in a real-world deployment and is done only for test purposes.
DEBIAN QCOW2 TO QCOW2
=====================
CREATE VOLUME FROM IMAGE (without -S 0)
LVS (52.61% allocated)
volume-de581f84-e722-4f4a-94fb-10f767069f50 stack-volumes-lvmdriver-1 Vwi-a-tz-- 3.00g stack-volumes-lvmdriver-1-pool 52.61
REBUILD (with -S 0)
LVS (66.68% allocated)
volume-de581f84-e722-4f4a-94fb-10f767069f50 stack-volumes-lvmdriver-1 Vwi-aotz-- 3.00g stack-volumes-lvmdriver-1-pool 66.68
Conclusion:
We can see that the space allocation increased, hence we are not
preserving sparseness when using the `-S 0` option.
[1] https://qemu-project.gitlab.io/qemu/tools/qemu-img.html#cmdoption-qemu-img-common-opts-S
[2] abf635ddfe/qemu-img.c (L182-L186)
Closes-Bug: #2045431
Change-Id: I5be7eaba68a5b8e1c43f0d95486b5c79c14e1b95
---
fixes:
  - |
    `Bug #2045431 <https://bugs.launchpad.net/cinder/+bug/2045431>`_: Fixed
    a data leak scenario where we preserve sparseness when reimaging the
    volume.

    We currently do a sparse copy when writing an image on the volume. This
    could be a potential data leak scenario where the zero blocks of the new
    image are not written on the existing volume and the data from the old
    image still exists on the volume. We fix the scenario by not doing sparse
    copy when reimaging the volume.