Support backup strategy API

Change-Id: I0ddd7214dae6e29ddfaf045fdb282f4980a8afff
Author: Lingxian Kong 2020-07-12 21:26:44 +12:00
parent f8ca333b43
commit 828e873846
34 changed files with 670 additions and 608 deletions


@@ -11,7 +11,6 @@
 jobs:
   - openstack-tox-cover:
       voting: false
-  - openstack-tox-pylint
   - trove-tox-bandit-baseline:
      voting: false
   - trove-tempest:
@@ -21,7 +20,6 @@
 gate:
   queue: trove
   jobs:
-  - openstack-tox-pylint
   - trove-tempest:
      voting: false
 experimental:


@ -0,0 +1,111 @@
.. -*- rst -*-
===============
Backup Strategy
===============
Backup strategy allows the user to customize the way of creating backups. Users
can create strategy either in the project scope or for a particular database
instance.
List backup strategies
~~~~~~~~~~~~~~~~~~~~~~
.. rest_method:: GET /v1.0/{project_id}/backup_strategies
List backup strategies for a project. You can filter the results by
using query string parameters. The following filters are supported:
- ``instance_id={instance_id}`` - Return the list of backup strategies for a
particular database instance.
- ``project_id={project_id}`` - Return the list of backup strategies for a
particular project, admin only.
Normal response codes: 200
Request
-------
.. rest_parameters:: parameters.yaml
- project_id: project_id
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- backup_strategies: backup_strategy_list
- project_id: project_id
- instance_id: instanceId1
- backend: backup_backend
- swift_container: swift_container_required
Response Example
----------------
.. literalinclude:: samples/backup-strategy-list-response.json
:language: javascript
Create backup strategy
~~~~~~~~~~~~~~~~~~~~~~
.. rest_method:: POST /v1.0/{project_id}/backup_strategies
Creates or updates backup strategy for the project or a database instance.
Normal response codes: 202
Request
-------
.. rest_parameters:: parameters.yaml
- project_id: project_id
- instance_id: instance_id_optional
- swift_container: swift_container_required
Request Example
---------------
.. literalinclude:: samples/backup-strategy-create-request.json
:language: javascript
Response Parameters
-------------------
.. rest_parameters:: parameters.yaml
- project_id: project_id
- instance_id: instanceId1
- backend: backup_backend
- swift_container: swift_container_required
Response Example
----------------
.. literalinclude:: samples/backup-strategy-create-response.json
:language: javascript
Delete database strategy
~~~~~~~~~~~~~~~~~~~~~~~~
.. rest_method:: DELETE /v1.0/{project_id}/backup_strategies
Deletes a database strategy for a project. If ``instance_id`` is specified in
the URL query parameters, delete the database strategy for that particular
database instance. Additionally, admin user is allowed to delete backup
strategy of other projects by specifying ``project_id`` in the URL query
parameters.
Normal response codes: 202
Request
-------
.. rest_parameters:: parameters.yaml
- project_id: project_id
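The list and delete calls above accept their filters as URL query parameters. A minimal sketch of how a client might build the request URL (the endpoint host, function name, and IDs are illustrative, not part of the API):

```python
from urllib.parse import urlencode


def backup_strategy_list_url(endpoint, project_id, instance_id=None):
    """Build the GET /v1.0/{project_id}/backup_strategies URL,
    optionally filtered by instance_id."""
    url = f"{endpoint}/v1.0/{project_id}/backup_strategies"
    if instance_id:
        url += "?" + urlencode({"instance_id": instance_id})
    return url


print(backup_strategy_list_url(
    "http://trove.example.com:8779",
    "922b47766bcb448f83a760358337f2b4",
    instance_id="0602db72-c63d-11ea-b87c-00224d6b7bc1"))
```

The same shape applies to the DELETE request, which takes the identical query parameters.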


@@ -70,8 +70,8 @@ Create database backup

 Creates a database backup for instance.

-In the Trove deployment with service tenant enabled, The backup data is
-stored as objects in OpenStack Swift service in the user's
-container(``database_backups`` by default)
+In the Trove deployment with service tenant enabled, the backup data is
+stored as objects in the OpenStack Swift service in the user's container. If
+not specified, the container name is defined by the cloud admin.

 Normal response codes: 202

@@ -86,6 +86,7 @@ Request

   - parent_id: backup_parentId
   - incremental: backup_incremental
   - description: backup_description
+  - swift_container: swift_container

 Request Example
 ---------------


@@ -13,6 +13,7 @@
 .. include:: instance-actions.inc
 .. include:: instance-logs.inc
 .. include:: backups.inc
+.. include:: backup-strategy.inc
 .. include:: configurations.inc
 .. include:: databases.inc
 .. include:: users.inc


@@ -105,6 +105,12 @@ availability_zone:
   in: body
   required: false
   type: string
+backup_backend:
+  description: |
+    The storage backend of instance backups. Currently only ``swift`` is
+    supported.
+  in: body
+  required: true
+  type: string
 backup_description:
   description: |
     An optional description for the backup.
@@ -173,6 +179,12 @@ backup_status:
   in: body
   required: true
   type: string
+backup_strategy_list:
+  description: |
+    A list of ``backup_strategy`` objects.
+  in: body
+  required: true
+  type: array
 characterSet:
   description: |
     A set of symbols and encodings. Default is
@@ -417,6 +429,12 @@ instance_hostname:
   in: body
   require: false
   type: string
+instance_id_optional:
+  description: |
+    The ID of the database instance.
+  in: body
+  required: false
+  type: string
 instance_ip_address:
   description: |
     The IP address of an instance(deprecated).
@@ -738,6 +756,16 @@ slave_of:
   in: body
   required: false
   type: string
+swift_container:
+  description: User defined Swift container name.
+  in: body
+  required: false
+  type: string
+swift_container_required:
+  description: User defined Swift container name.
+  in: body
+  required: true
+  type: string
 tenant_id:
   description: |
     The ID of a tenant.


@@ -0,0 +1,6 @@
{
"backup_strategy": {
"instance_id": "0602db72-c63d-11ea-b87c-00224d6b7bc1",
"swift_container": "my_trove_backups"
}
}


@@ -0,0 +1,8 @@
{
"backup_strategy": {
"project_id": "922b47766bcb448f83a760358337f2b4",
"instance_id": "0602db72-c63d-11ea-b87c-00224d6b7bc1",
"backend": "swift",
"swift_container": "my_trove_backups"
}
}


@@ -0,0 +1,10 @@
{
"backup_strategies": [
{
"backend": "swift",
"instance_id": "0602db72-c63d-11ea-b87c-00224d6b7bc1",
"project_id": "922b47766bcb448f83a760358337f2b4",
"swift_container": "my_trove_backups"
}
]
}


@@ -92,7 +92,8 @@ def stream_backup_to_storage(runner_cls, storage):
     with runner_cls(filename=CONF.backup_id, **parent_metadata) as bkup:
         checksum, location = storage.save(
             bkup,
-            metadata=CONF.swift_extra_metadata
+            metadata=CONF.swift_extra_metadata,
+            container=CONF.swift_container
         )
         LOG.info('Backup successfully, checksum: %s, location: %s',
                  checksum, location)


@@ -1,130 +0,0 @@
=======================
Use incremental backups
=======================
Incremental backups let you chain together a series of backups. You
start with a regular backup. Then, when you want to create a subsequent
incremental backup, you specify the parent backup.
Restoring a database instance from an incremental backup is the same as
creating a database instance from a regular backup—the Database service
handles the complexities of applying the chain of incremental backups.
The artifacts created by backup are stored in OpenStack Swift, by default in a
container named 'database_backups'. As the end user, you are able to access all
the objects but make sure not to delete those objects manually. When a backup
is deleted in Trove, the related objects are automatically removed from Swift.
.. caution::
If the objects in 'database_backups' container are deleted manually, the
database can't be properly restored.
This example shows you how to use incremental backups with a MySQL
database.
**Assumptions.** Assume that you have created a regular
backup for the following database instance:
- Instance name: ``guest1``
- ID of the instance (``INSTANCE_ID``):
``792a6a56-278f-4a01-9997-d997fa126370``
- ID of the regular backup artifact (``BACKUP_ID``):
``6dc3a9b7-1f3e-4954-8582-3f2e4942cddd``
Create and use incremental backups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#. **Create your first incremental backup**
Use the :command:`openstack database backup create` command and specify:
- The ``INSTANCE_ID`` of the database instance you are doing the
incremental backup for (in this example,
``792a6a56-278f-4a01-9997-d997fa126370``)
- The name of the incremental backup you are creating: ``backup1.1``
- The ``BACKUP_ID`` of the parent backup. In this case, the parent
is the regular backup, with an ID of
``6dc3a9b7-1f3e-4954-8582-3f2e4942cddd``
.. code-block:: console
$ openstack database backup create INSTANCE_ID backup1.1 --parent BACKUP_ID
+-------------+--------------------------------------+
| Property | Value |
+-------------+--------------------------------------+
| created | 2014-03-19T14:09:13 |
| description | None |
| id | 1d474981-a006-4f62-b25f-43d7b8a7097e |
| instance_id | 792a6a56-278f-4a01-9997-d997fa126370 |
| locationRef | None |
| name | backup1.1 |
| parent_id | 6dc3a9b7-1f3e-4954-8582-3f2e4942cddd |
| size | None |
| status | NEW |
| updated | 2014-03-19T14:09:13 |
+-------------+--------------------------------------+
Note that this command returns both the ID of the database instance
you are incrementally backing up (``instance_id``) and a new ID for
the new incremental backup artifact you just created (``id``).
#. **Create your second incremental backup**
The name of your second incremental backup is ``backup1.2``. This
time, when you specify the parent, pass in the ID of the incremental
backup you just created in the previous step (``backup1.1``). In this
example, it is ``1d474981-a006-4f62-b25f-43d7b8a7097e``.
.. code-block:: console
$ openstack database backup create INSTANCE_ID backup1.2 --parent BACKUP_ID
+-------------+--------------------------------------+
| Property | Value |
+-------------+--------------------------------------+
| created | 2014-03-19T14:09:13 |
| description | None |
| id | bb84a240-668e-49b5-861e-6a98b67e7a1f |
| instance_id | 792a6a56-278f-4a01-9997-d997fa126370 |
| locationRef | None |
| name | backup1.2 |
| parent_id | 1d474981-a006-4f62-b25f-43d7b8a7097e |
| size | None |
| status | NEW |
| updated | 2014-03-19T14:09:13 |
+-------------+--------------------------------------+
#. **Restore using incremental backups**
Now assume that your ``guest1`` database instance is damaged and you
need to restore it from your incremental backups. In this example,
you use the :command:`openstack database instance create` command to create a new database
instance called ``guest2``.
To incorporate your incremental backups, you simply use the
``--backup`` parameter to pass in the ``BACKUP_ID`` of your most
recent incremental backup. The Database service handles the
complexities of applying the chain of all previous incremental
backups.
.. code-block:: console
$ openstack database instance create guest2 10 --size 1 --nic net-id=$network_id --backup BACKUP_ID
+-------------------+-----------------------------------------------------------+
| Property | Value |
+-------------------+-----------------------------------------------------------+
| created | 2014-03-19T14:10:56 |
| datastore | {u'version': u'mysql-5.5', u'type': u'mysql'} |
| datastore_version | mysql-5.5 |
| flavor | {u'id': u'10'} |
| id | a3680953-eea9-4cf2-918b-5b8e49d7e1b3 |
| name | guest2 |
| status | BUILD |
| updated | 2014-03-19T14:10:56 |
| volume | {u'size': 1} |
+-------------------+-----------------------------------------------------------+


@@ -3,64 +3,67 @@ Backup and restore a database
 =============================

 You can use Database services to backup a database and store the backup
-artifact in the Object Storage service. Later on, if the original
-database is damaged, you can use the backup artifact to restore the
-database. The restore process creates a database instance.
+artifact in the Object Storage service. Later on, if the original database is
+damaged, you can use the backup artifact to restore the database. The restore
+process creates a new database instance.

-The artifacts created by backup are stored in OpenStack Swift, by default in a
-container named 'database_backups'. As the end user, you are able to access all
-the objects but make sure not to delete those objects manually. When a backup
-is deleted in Trove, the related objects are automatically removed from Swift.
+The backup data is stored in OpenStack Swift. The user is able to customize
+which container stores the data. The following ways are described in order of
+precedence, from greatest to least:
+
+1. The container name can be specified when creating backups; this overrides
+   either the backup strategy setting or the default setting in the Trove
+   configuration.
+2. Users can create a backup strategy, either for the project scope or for a
+   particular instance.
+3. If not configured by the end user, the default value in the Trove
+   configuration is used.
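The precedence rules above reduce to a simple fallback chain. A minimal sketch (the function, argument names, and the default container name are illustrative, not Trove's API):

```python
def resolve_backup_container(request_container=None,
                             strategy_container=None,
                             conf_default="database_backups"):
    """Pick the Swift container for a backup: the value passed with the
    backup-create request wins, then the backup strategy, then the
    default from the Trove configuration."""
    return request_container or strategy_container or conf_default


# A container given at backup-creation time overrides the strategy.
print(resolve_backup_container("adhoc", "my-trove-backups"))
```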
 .. caution::

-    If the objects in 'database_backups' container are deleted manually, the
+    If the objects in the backup container are manually deleted, the
     database can't be properly restored.

-This example shows you how to back up and restore a MySQL database.
+This example shows you how to create a backup strategy, create a backup, and
+restore an instance from the backup.
+#. **Before creating the backup**
+
+   1. Make sure you have created an instance. In this example, we use the
+      following instance:
+
+      .. code-block:: console
+
+         $ openstack database instance list
+         +--------------------------------------+--------+-----------+-------------------+--------+-----------+------+
+         | id                                   | name   | datastore | datastore_version | status | flavor_id | size |
+         +--------------------------------------+--------+-----------+-------------------+--------+-----------+------+
+         | 97b4b853-80f6-414f-ba6f-c6f455a79ae6 | guest1 | mysql     | mysql-5.5         | ACTIVE | 10        | 2    |
+         +--------------------------------------+--------+-----------+-------------------+--------+-----------+------+
+
+   2. Optionally, create a backup strategy for the instance. You can also
+      specify a different Swift container name (``--swift-container``) when
+      creating the backup.
+
+      .. code-block:: console
+
+         $ openstack database backup strategy create --instance-id 97b4b853-80f6-414f-ba6f-c6f455a79ae6 --swift-container my-trove-backups
+         +-----------------+--------------------------------------+
+         | Field           | Value                                |
+         +-----------------+--------------------------------------+
+         | backend         | swift                                |
+         | instance_id     | 97b4b853-80f6-414f-ba6f-c6f455a79ae6 |
+         | project_id      | 922b47766bcb448f83a760358337f2b4     |
+         | swift_container | my-trove-backups                     |
+         +-----------------+--------------------------------------+
 #. **Backup the database instance**

-   As background, assume that you have created a database
-   instance with the following characteristics:
-
-   - Name of the database instance: ``guest1``
-   - Flavor ID: ``10``
-   - Root volume size: ``2``
-   - Databases: ``db1`` and ``db2``
-   - Users: The ``user1`` user with the ``password`` password
-
-   First, get the ID of the ``guest1`` database instance by using the
-   :command:`openstack database instance list` command:
-
-   .. code-block:: console
-
-      $ openstack database instance list
-      +--------------------------------------+--------+-----------+-------------------+--------+-----------+------+
-      | id                                   | name   | datastore | datastore_version | status | flavor_id | size |
-      +--------------------------------------+--------+-----------+-------------------+--------+-----------+------+
-      | 97b4b853-80f6-414f-ba6f-c6f455a79ae6 | guest1 | mysql     | mysql-5.5         | ACTIVE | 10        | 2    |
-      +--------------------------------------+--------+-----------+-------------------+--------+-----------+------+
-
    Back up the database instance by using the :command:`openstack database backup create`
-   command. In this example, the backup is called ``backup1``. In this
-   example, replace ``INSTANCE_ID`` with
-   ``97b4b853-80f6-414f-ba6f-c6f455a79ae6``:
-
-   .. note::
-
-      This command syntax pertains only to python-troveclient version
-      1.0.6 and later. Earlier versions require you to pass in the backup
-      name as the first argument.
+   command. In this example, the backup is called ``backup1``.

    .. code-block:: console

-      $ openstack database backup create INSTANCE_ID backup1
+      $ openstack database backup create 97b4b853-80f6-414f-ba6f-c6f455a79ae6 backup1
       +-------------+--------------------------------------+
       | Property    | Value                                |
       +-------------+--------------------------------------+
@@ -76,11 +79,9 @@ This example shows you how to back up and restore a MySQL database.
       | updated     | 2014-03-18T17:09:07                  |
       +-------------+--------------------------------------+

-   Note that the command returns both the ID of the original instance
-   (``instance_id``) and the ID of the backup artifact (``id``).
-
-   Later on, use the :command:`openstack database backup list` command to get this
-   information:
+   Later on, use either the :command:`openstack database backup list` command or
+   the :command:`openstack database backup show` command to check the backup
+   status:

    .. code-block:: console

@@ -90,14 +91,7 @@ This example shows you how to back up and restore a MySQL database.
       +--------------------------------------+--------------------------------------+---------+-----------+-----------+---------------------+
       | 8af30763-61fd-4aab-8fe8-57d528911138 | 97b4b853-80f6-414f-ba6f-c6f455a79ae6 | backup1 | COMPLETED | None      | 2014-03-18T17:09:11 |
       +--------------------------------------+--------------------------------------+---------+-----------+-----------+---------------------+
-
-   You can get additional information about the backup by using the
-   :command:`openstack database backup show` command and passing in the ``BACKUP_ID``,
-   which is ``8af30763-61fd-4aab-8fe8-57d528911138``.
-
-   .. code-block:: console
-
-      $ openstack database backup show BACKUP_ID
+
+      $ openstack database backup show 8af30763-61fd-4aab-8fe8-57d528911138
       +-------------+----------------------------------------------------+
       | Property    | Value                                              |
       +-------------+----------------------------------------------------+
+#. **Check the backup data in Swift**
+
+   Check that the container is created and the backup data is saved as
+   objects inside the container.
+
+   .. code-block:: console
+
+      $ openstack container list
+      +------------------+
+      | Name             |
+      +------------------+
+      | my-trove-backups |
+      +------------------+
+
+      $ openstack object list my-trove-backups
+      +--------------------------------------------------+
+      | Name                                             |
+      +--------------------------------------------------+
+      | 8af30763-61fd-4aab-8fe8-57d528911138.xbstream.gz |
+      +--------------------------------------------------+
 #. **Restore a database instance**

-   Now assume that your ``guest1`` database instance is damaged and you
+   Now assume that the ``guest1`` database instance is damaged and you
    need to restore it. In this example, you use the :command:`openstack database instance create`
    command to create a new database instance called ``guest2``.

-   - You specify that the new ``guest2`` instance has the same flavor
+   - Specify that the new ``guest2`` instance has the same flavor
      (``10``) and the same root volume size (``2``) as the original
      ``guest1`` instance.

-   - You use the ``--backup`` argument to indicate that this new
+   - Use the ``--backup`` argument to indicate that this new
      instance is based on the backup artifact identified by
      ``BACKUP_ID``. In this example, replace ``BACKUP_ID`` with
      ``8af30763-61fd-4aab-8fe8-57d528911138``.
@@ -233,3 +246,33 @@ This example shows you how to back up and restore a MySQL database.

       $ openstack database instance delete INSTANCE_ID
+
+Create incremental backups
+--------------------------
+
+Incremental backups let you chain together a series of backups. You start with
+a regular backup. Then, when you want to create a subsequent incremental
+backup, you specify the parent backup.
+
+Restoring a database instance from an incremental backup is the same as
+creating a database instance from a regular backup. The Database service
+handles the process of applying the chain of incremental backups.
+
+Create an incremental backup based on a parent backup:
+
+.. code-block:: console
+
+   $ openstack database backup create INSTANCE_ID backup1.1 --parent BACKUP_ID
+   +-------------+--------------------------------------+
+   | Property    | Value                                |
+   +-------------+--------------------------------------+
+   | created     | 2014-03-19T14:09:13                  |
+   | description | None                                 |
+   | id          | 1d474981-a006-4f62-b25f-43d7b8a7097e |
+   | instance_id | 792a6a56-278f-4a01-9997-d997fa126370 |
+   | locationRef | None                                 |
+   | name        | backup1.1                            |
+   | parent_id   | 6dc3a9b7-1f3e-4954-8582-3f2e4942cddd |
+   | size        | None                                 |
+   | status      | NEW                                  |
+   | updated     | 2014-03-19T14:09:13                  |
+   +-------------+--------------------------------------+
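The restore side of incremental backups amounts to walking the ``parent_id`` links from the chosen backup back to the full backup and applying them oldest-first. A toy illustration of that idea (not Trove code; the dict shape is an assumption):

```python
def backup_chain(backups, leaf_id):
    """Follow parent_id links from the newest incremental backup back to
    the full backup, returning IDs in the order they must be applied."""
    by_id = {b["id"]: b for b in backups}
    chain = []
    current = by_id.get(leaf_id)
    while current:
        chain.append(current["id"])
        current = by_id.get(current.get("parent_id"))
    return list(reversed(chain))


backups = [
    {"id": "full", "parent_id": None},
    {"id": "incr1", "parent_id": "full"},
    {"id": "incr2", "parent_id": "incr1"},
]
print(backup_chain(backups, "incr2"))  # ['full', 'incr1', 'incr2']
```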


@@ -14,7 +14,6 @@ handling complex administrative tasks.

    create-db.rst
    manage-db-and-users.rst
    backup-db.rst
-   backup-db-incremental.rst
    manage-db-config.rst
    set-up-replication.rst
    upgrade-datastore.rst


@@ -0,0 +1,6 @@
---
features:
  - The user can create a backup strategy to define the configuration for
    creating backups, e.g. the Swift container in which to store the backup
    data. Users can also specify the container name when creating backups,
    which takes precedence over the backup strategy configuration.


@ -1,5 +1,5 @@
[tox] [tox]
envlist = py37,pep8,cover,api-ref,releasenotes,bandit,fakemodetests,pylint envlist = py37,pep8,cover,api-ref,releasenotes,bandit,fakemodetests
minversion = 2.0 minversion = 2.0
skipsdist = True skipsdist = True


@@ -23,8 +23,8 @@ from trove.backup.state import BackupState
 from trove.common import cfg
 from trove.common import clients
 from trove.common import exception
-from trove.common.i18n import _
 from trove.common import utils
+from trove.common.i18n import _
 from trove.datastore import models as datastore_models
 from trove.db.models import DatabaseModelBase
 from trove.quota.quota import run_with_quotas
@@ -49,7 +49,7 @@ class Backup(object):
     @classmethod
     def create(cls, context, instance, name, description=None,
-               parent_id=None, incremental=False):
+               parent_id=None, incremental=False, swift_container=None):
         """
         create db record for Backup
         :param cls:
@@ -60,6 +60,7 @@ class Backup(object):
         :param parent_id:
         :param incremental: flag to indicate incremental backup
                             based on previous backup
+        :param swift_container: Swift container name.
         :return:
         """
@@ -73,7 +74,9 @@ class Backup(object):
             instance_model.validate_can_perform_action()
             cls.validate_can_perform_action(
                 instance_model, 'backup_create')
+
             cls.verify_swift_auth_token(context)
+
             if instance_model.cluster_id is not None:
                 raise exception.ClusterInstanceOperationNotSupported()
@@ -121,6 +124,7 @@ class Backup(object):
                 'parent': parent,
                 'datastore': ds.name,
                 'datastore_version': ds_version.name,
+                'swift_container': swift_container
             }
             api.API(context).create_backup(backup_info, instance_id)
             return db_info
@@ -295,8 +299,55 @@ class Backup(object):
             raise exception.SwiftConnectionError()


+class BackupStrategy(object):
+    @classmethod
+    def create(cls, context, instance_id, swift_container):
+        try:
+            existing = DBBackupStrategy.find_by(tenant_id=context.project_id,
+                                                instance_id=instance_id)
+            existing.swift_container = swift_container
+            existing.save()
+            return existing
+        except exception.NotFound:
+            return DBBackupStrategy.create(
+                tenant_id=context.project_id,
+                instance_id=instance_id,
+                backend='swift',
+                swift_container=swift_container,
+            )
+
+    @classmethod
+    def list(cls, context, tenant_id, instance_id=None):
+        kwargs = {'tenant_id': tenant_id}
+        if instance_id:
+            kwargs['instance_id'] = instance_id
+        result = DBBackupStrategy.find_by_filter(**kwargs)
+        return result
+
+    @classmethod
+    def get(cls, context, instance_id):
+        try:
+            return DBBackupStrategy.find_by(tenant_id=context.project_id,
+                                            instance_id=instance_id)
+        except exception.NotFound:
+            try:
+                return DBBackupStrategy.find_by(tenant_id=context.project_id,
+                                                instance_id='')
+            except exception.NotFound:
+                return None
+
+    @classmethod
+    def delete(cls, context, tenant_id, instance_id):
+        try:
+            existing = DBBackupStrategy.find_by(tenant_id=tenant_id,
+                                                instance_id=instance_id)
+            existing.delete()
+        except exception.NotFound:
+            pass
+
+
 def persisted_models():
-    return {'backups': DBBackup}
+    return {'backups': DBBackup, 'backup_strategy': DBBackupStrategy}


 class DBBackup(DatabaseModelBase):
@@ -331,6 +382,13 @@ class DBBackup(DatabaseModelBase):
         else:
             return None

+    @property
+    def container_name(self):
+        if self.location:
+            return self.location.split('/')[-2]
+        else:
+            return None
+
     @property
     def datastore(self):
         if self.datastore_version_id:
@@ -366,3 +424,9 @@ class DBBackup(DatabaseModelBase):
                 return False
             else:
                 raise exception.SwiftAuthError(tenant_id=context.project_id)
+
+
+class DBBackupStrategy(DatabaseModelBase):
+    """A table for backup strategy records."""
+    _data_fields = ['tenant_id', 'instance_id', 'backend', 'swift_container']
+    _table_name = 'backup_strategy'
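The new ``container_name`` property derives the Swift container from the stored backup location by taking the second-to-last path segment. A standalone sketch of that parsing (the URL below is illustrative):

```python
def container_name(location):
    """Mirror DBBackup.container_name: the Swift container is the
    second-to-last segment of the backup location URL."""
    return location.split('/')[-2] if location else None


print(container_name(
    "http://swift/v1/AUTH_project/my_trove_backups/backup.xbstream.gz"))
# my_trove_backups
```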


@@ -16,14 +16,17 @@
 from oslo_log import log as logging
 from oslo_utils import strutils

-from trove.backup.models import Backup
 from trove.backup import views
+from trove.backup.models import Backup
+from trove.backup.models import BackupStrategy
 from trove.common import apischema
+from trove.common import exception
 from trove.common import notification
-from trove.common.notification import StartNotification
 from trove.common import pagination
 from trove.common import policy
+from trove.common import utils
 from trove.common import wsgi
+from trove.common.notification import StartNotification

 LOG = logging.getLogger(__name__)
@@ -80,12 +83,23 @@ class BackupController(wsgi.Controller):
         desc = data.get('description')
         parent = data.get('parent_id')
         incremental = data.get('incremental')
+        swift_container = data.get('swift_container')

         context.notification = notification.DBaaSBackupCreate(context,
                                                               request=req)
+
+        if not swift_container:
+            instance_id = utils.get_id_from_href(instance)
+            backup_strategy = BackupStrategy.get(context, instance_id)
+            if backup_strategy:
+                swift_container = backup_strategy.swift_container
+
         with StartNotification(context, name=name, instance_id=instance,
                                description=desc, parent_id=parent):
             backup = Backup.create(context, instance, name, desc,
-                                   parent_id=parent, incremental=incremental)
+                                   parent_id=parent, incremental=incremental,
+                                   swift_container=swift_container)

         return wsgi.Result(views.BackupView(backup).data(), 202)

     def delete(self, req, tenant_id, id):
@@ -101,3 +115,56 @@ class BackupController(wsgi.Controller):
         with StartNotification(context, backup_id=id):
             Backup.delete(context, id)
         return wsgi.Result(None, 202)
+
+
+class BackupStrategyController(wsgi.Controller):
+    schemas = apischema.backup_strategy
+
+    def create(self, req, body, tenant_id):
+        LOG.info("Creating or updating a backup strategy for tenant %s, "
+                 "body: %s", tenant_id, body)
+        context = req.environ[wsgi.CONTEXT_KEY]
+        policy.authorize_on_tenant(context, 'backup_strategy:create')
+
+        data = body['backup_strategy']
+        instance_id = data.get('instance_id', '')
+        swift_container = data.get('swift_container')
+
+        backup_strategy = BackupStrategy.create(context, instance_id,
+                                                swift_container)
+        return wsgi.Result(
+            views.BackupStrategyView(backup_strategy).data(), 202)
+
+    def index(self, req, tenant_id):
+        context = req.environ[wsgi.CONTEXT_KEY]
+        instance_id = req.GET.get('instance_id')
+        tenant_id = req.GET.get('project_id', context.project_id)
+        LOG.info("Listing backup strategies for tenant %s", tenant_id)
+
+        if tenant_id != context.project_id and not context.is_admin:
+            raise exception.TroveOperationAuthError(
+                tenant_id=context.project_id
+            )
+        policy.authorize_on_tenant(context, 'backup_strategy:index')
+
+        result = BackupStrategy.list(context, tenant_id,
+                                     instance_id=instance_id)
+        view = views.BackupStrategiesView(result)
+        return wsgi.Result(view.data(), 200)
+
+    def delete(self, req, tenant_id):
+        context = req.environ[wsgi.CONTEXT_KEY]
+        instance_id = req.GET.get('instance_id', '')
+        tenant_id = req.GET.get('project_id', context.project_id)
+        LOG.info('Deleting backup strategies for tenant %s, instance_id=%s',
+                 tenant_id, instance_id)
+
+        if tenant_id != context.project_id and not context.is_admin:
+            raise exception.TroveOperationAuthError(
+                tenant_id=context.project_id
+            )
+        policy.authorize_on_tenant(context, 'backup_strategy:delete')
+
+        BackupStrategy.delete(context, tenant_id, instance_id)
+        return wsgi.Result(None, 202)


@@ -54,3 +54,31 @@ class BackupViews(object):
for b in self.backups:
backups.append(BackupView(b).data()["backup"])
return {"backups": backups}
class BackupStrategyView(object):
def __init__(self, backup_strategy):
self.backup_strategy = backup_strategy
def data(self):
result = {
"backup_strategy": {
"project_id": self.backup_strategy.tenant_id,
"instance_id": self.backup_strategy.instance_id,
'backend': self.backup_strategy.backend,
"swift_container": self.backup_strategy.swift_container,
}
}
return result
class BackupStrategiesView(object):
def __init__(self, backup_strategies):
self.backup_strategies = backup_strategies
def data(self):
backup_strategies = []
for item in self.backup_strategies:
backup_strategies.append(
BackupStrategyView(item).data()["backup_strategy"])
return {"backup_strategies": backup_strategies}


@@ -15,6 +15,7 @@
import routes
from trove.backup.service import BackupController
from trove.backup.service import BackupStrategyController
from trove.cluster.service import ClusterController
from trove.common import wsgi
from trove.configuration.service import ConfigurationsController
@@ -37,6 +38,7 @@ class API(wsgi.Router):
self._versions_router(mapper)
self._limits_router(mapper)
self._backups_router(mapper)
self._backup_strategy_router(mapper)
self._configurations_router(mapper)
self._modules_router(mapper)
@@ -192,6 +194,21 @@ class API(wsgi.Router):
action="delete",
conditions={'method': ['DELETE']})
def _backup_strategy_router(self, mapper):
backup_strategy_resource = BackupStrategyController().create_resource()
mapper.connect("/{tenant_id}/backup_strategies",
controller=backup_strategy_resource,
action="create",
conditions={'method': ['POST']})
mapper.connect("/{tenant_id}/backup_strategies",
controller=backup_strategy_resource,
action="index",
conditions={'method': ['GET']})
mapper.connect("/{tenant_id}/backup_strategies",
controller=backup_strategy_resource,
action="delete",
conditions={'method': ['DELETE']})
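The three `mapper.connect()` calls above bind one path to three actions keyed only on the HTTP method. A minimal standalone sketch of that method-keyed dispatch (names are illustrative, not the real trove wsgi plumbing):

```python
# Map (HTTP method, path) pairs to controller action names, mirroring
# the mapper.connect() calls for /{tenant_id}/backup_strategies.
ROUTES = {
    ("POST", "/{tenant_id}/backup_strategies"): "create",
    ("GET", "/{tenant_id}/backup_strategies"): "index",
    ("DELETE", "/{tenant_id}/backup_strategies"): "delete",
}

def dispatch(method, path):
    """Return the action name for a request, or raise if unrouted."""
    try:
        return ROUTES[(method, path)]
    except KeyError:
        raise LookupError(f"no route for {method} {path}")

print(dispatch("POST", "/{tenant_id}/backup_strategies"))  # create
```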
def _modules_router(self, mapper):
modules_resource = ModuleController().create_resource()


@@ -601,13 +601,33 @@ backup = {
"instance": uuid,
"name": non_empty_string,
"parent_id": uuid,
"incremental": boolean_string,
"swift_container": non_empty_string
}
}
}
}
}
backup_strategy = {
"create": {
"name": "backup_strategy:create",
"type": "object",
"required": ["backup_strategy"],
"properties": {
"backup_strategy": {
"type": "object",
"additionalProperties": False,
"required": ["swift_container"],
"properties": {
"instance_id": uuid,
"swift_container": non_empty_string
}
}
},
}
}
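A hand-rolled check mirroring the constraints the `backup_strategy:create` schema above encodes: `swift_container` required and non-empty, `instance_id` optional, and no extra keys. This is a sketch, not trove's actual jsonschema-driven validation:

```python
# Validate a request body against the backup_strategy:create constraints.
def validate(body):
    strategy = body.get("backup_strategy")
    if not isinstance(strategy, dict):
        raise ValueError("backup_strategy is required")
    # additionalProperties: False
    extra = set(strategy) - {"instance_id", "swift_container"}
    if extra:
        raise ValueError(f"unexpected properties: {sorted(extra)}")
    # swift_container is required and must be a non-empty string
    container = strategy.get("swift_container")
    if not isinstance(container, str) or not container:
        raise ValueError("swift_container must be a non-empty string")

validate({"backup_strategy": {"swift_container": "my-backups"}})  # passes
```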
guest_log = {
"action": {
"name": "guest_log:action",


@@ -471,7 +471,7 @@ common_opts = [
help='Key (OpenSSL aes_cbc) for instance RPC encryption.'),
cfg.StrOpt('database_service_uid', default='1001',
help='The UID(GID) of database service user.'),
cfg.StrOpt('backup_docker_image', default='openstacktrove/db-backup:1.0.1',
help='The docker image used for backup and restore.'),
cfg.ListOpt('reserved_network_cidrs', default=[],
help='Network CIDRs reserved for Trove guest instance '


@@ -457,6 +457,12 @@ class BackupDatastoreMismatchError(TroveError):
" datastore of %(datastore2)s.")
class BackupTooLarge(TroveError):
message = _("Backup is too large for given flavor or volume. "
"Backup size: %(backup_size)s GBs. "
"Available size: %(disk_size)s GBs.")
class ReplicaCreateWithUsersDatabasesError(TroveError):
message = _("Cannot create a replica with users or databases.")
@@ -688,12 +694,6 @@ class ClusterDatastoreNotSupported(TroveError):
"%(datastore)s-%(datastore_version)s.")
class BackupTooLarge(TroveError):
message = _("Backup is too large for given flavor or volume. "
"Backup size: %(backup_size)s GBs. "
"Available size: %(disk_size)s GBs.")
class ImageNotFound(NotFound):
message = _("Image %(uuid)s cannot be found.")


@@ -12,7 +12,7 @@
from oslo_policy import policy
from trove.common.policies import base
rules = [
policy.DocumentedRuleDefault(
@@ -21,7 +21,7 @@ rules = [
description='Create a backup of a database instance.',
operations=[
{
'path': base.PATH_BACKUPS,
'method': 'POST'
}
]),
@@ -31,7 +31,7 @@ rules = [
description='Delete a backup of a database instance.',
operations=[
{
'path': base.PATH_BACKUP,
'method': 'DELETE'
}
]),
@@ -41,7 +41,7 @@ rules = [
description='List all backups.',
operations=[
{
'path': base.PATH_BACKUPS,
'method': 'GET'
}
]),
@@ -51,7 +51,7 @@ rules = [
description='List backups for all the projects.',
operations=[
{
'path': base.PATH_BACKUPS,
'method': 'GET'
}
]),
@@ -61,10 +61,40 @@ rules = [
description='Get information about a backup.',
operations=[
{
'path': base.PATH_BACKUP,
'method': 'GET'
}
]),
policy.DocumentedRuleDefault(
name='backup_strategy:create',
check_str='rule:admin_or_owner',
description='Create a backup strategy.',
operations=[
{
'path': base.PATH_BACKUP_STRATEGIES,
'method': 'POST'
}
]),
policy.DocumentedRuleDefault(
name='backup_strategy:index',
check_str='rule:admin_or_owner',
description='List all backup strategies.',
operations=[
{
'path': base.PATH_BACKUP_STRATEGIES,
'method': 'GET'
}
]),
policy.DocumentedRuleDefault(
name='backup_strategy:delete',
check_str='rule:admin_or_owner',
description='Delete backup strategies.',
operations=[
{
'path': base.PATH_BACKUP_STRATEGIES,
'method': 'DELETE'
}
]),
]


@@ -33,6 +33,8 @@ PATH_CLUSTER_INSTANCE = PATH_CLUSTER_INSTANCES + '/{instance}'
PATH_BACKUPS = PATH_BASE + '/backups'
PATH_BACKUP = PATH_BACKUPS + '/{backup}'
PATH_BACKUP_STRATEGIES = PATH_BASE + '/backup_strategies'
PATH_CONFIGS = PATH_BASE + '/configurations'
PATH_CONFIG = PATH_CONFIGS + '/{config}'


@@ -1,25 +0,0 @@
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
from oslo_log import log as logging
from trove.common.strategies.strategy import Strategy
LOG = logging.getLogger(__name__)
def get_storage_strategy(storage_driver, ns=__name__):
return Strategy.get_strategy(storage_driver, ns)


@@ -1,44 +0,0 @@
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import abc
from trove.common.strategies.strategy import Strategy
class Storage(Strategy):
"""Base class for Storage Strategy implementation."""
__strategy_type__ = 'storage'
__strategy_ns__ = 'trove.common.strategies.storage'
def __init__(self, context):
self.context = context
super(Storage, self).__init__()
@abc.abstractmethod
def save(self, filename, stream, metadata=None):
"""Persist information from the stream."""
@abc.abstractmethod
def load(self, location, backup_checksum):
"""Load a stream from a persisted storage location."""
@abc.abstractmethod
def load_metadata(self, location, backup_checksum):
"""Load metadata for a persisted object."""
@abc.abstractmethod
def save_metadata(self, location, metadata={}):
"""Save metadata for a persisted object."""


@@ -1,302 +0,0 @@
# Copyright 2013 Hewlett-Packard Development Company, L.P.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import hashlib
import json
from oslo_log import log as logging
import six
from trove.common import cfg
from trove.common.clients import create_swift_client
from trove.common.i18n import _
from trove.common.strategies.storage import base
LOG = logging.getLogger(__name__)
CONF = cfg.CONF
CHUNK_SIZE = CONF.backup_chunk_size
MAX_FILE_SIZE = CONF.backup_segment_max_size
BACKUP_CONTAINER = CONF.backup_swift_container
class DownloadError(Exception):
"""Error running the Swift Download Command."""
class SwiftDownloadIntegrityError(Exception):
"""Integrity error while running the Swift Download Command."""
class StreamReader(object):
"""Wrap the stream from the backup process and chunk it into segments."""
def __init__(self, stream, filename, max_file_size=MAX_FILE_SIZE):
self.stream = stream
self.filename = filename
self.container = BACKUP_CONTAINER
self.max_file_size = max_file_size
self.segment_length = 0
self.process = None
self.file_number = 0
self.end_of_file = False
self.end_of_segment = False
self.segment_checksum = hashlib.md5()
@property
def base_filename(self):
"""Filename with extensions removed."""
return self.filename.split('.')[0]
@property
def segment(self):
return '%s_%08d' % (self.base_filename, self.file_number)
@property
def first_segment(self):
return '%s_%08d' % (self.base_filename, 0)
@property
def segment_path(self):
return '%s/%s' % (self.container, self.segment)
def read(self, chunk_size=CHUNK_SIZE):
if self.end_of_segment:
self.segment_length = 0
self.segment_checksum = hashlib.md5()
self.end_of_segment = False
# Upload to a new file if we are starting or too large
if self.segment_length > (self.max_file_size - chunk_size):
self.file_number += 1
self.end_of_segment = True
return ''
chunk = self.stream.read(chunk_size)
if not chunk:
self.end_of_file = True
return ''
self.segment_checksum.update(chunk)
self.segment_length += len(chunk)
return chunk
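The `read()` method above cuts the backup stream into bounded Swift segments: once a segment would exceed `max_file_size`, it returns an empty string to end the current object and starts a new one on the next call. A self-contained toy version of that segmentation logic:

```python
import io

def segments(stream, max_file_size, chunk_size):
    """Yield successive byte segments of at most max_file_size bytes,
    reading the stream chunk_size bytes at a time (illustrative sketch,
    not the trove StreamReader)."""
    current = b""
    while True:
        # Start a new segment if the current one is close to the cap.
        if len(current) > max_file_size - chunk_size:
            yield current
            current = b""
        chunk = stream.read(chunk_size)
        if not chunk:  # end of stream: flush whatever remains
            if current:
                yield current
            return
        current += chunk

data = io.BytesIO(b"x" * 10)
print([len(s) for s in segments(data, max_file_size=4, chunk_size=2)])
# [4, 4, 2]
```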
class SwiftStorage(base.Storage):
"""Implementation of Storage Strategy for Swift."""
__strategy_name__ = 'swift'
def __init__(self, *args, **kwargs):
super(SwiftStorage, self).__init__(*args, **kwargs)
self.connection = create_swift_client(self.context)
def save(self, filename, stream, metadata=None):
"""Persist information from the stream to swift.
The file is saved to the location <BACKUP_CONTAINER>/<filename>.
It will be a Swift Static Large Object (SLO).
The filename is defined on the backup runner manifest property
which is typically in the format '<backup_id>.<ext>.gz'
"""
LOG.info('Saving %(filename)s to %(container)s in swift.',
{'filename': filename, 'container': BACKUP_CONTAINER})
# Create the container if it doesn't already exist
LOG.debug('Creating container %s.', BACKUP_CONTAINER)
self.connection.put_container(BACKUP_CONTAINER)
# Swift Checksum is the checksum of the concatenated segment checksums
swift_checksum = hashlib.md5()
# Wrap the output of the backup process to segment it for swift
stream_reader = StreamReader(stream, filename, MAX_FILE_SIZE)
LOG.debug('Using segment size %s', stream_reader.max_file_size)
url = self.connection.url
# Full location where the backup manifest is stored
location = "%s/%s/%s" % (url, BACKUP_CONTAINER, filename)
# Information about each segment upload job
segment_results = []
# Read from the stream and write to the container in swift
while not stream_reader.end_of_file:
LOG.debug('Saving segment %s.', stream_reader.segment)
path = stream_reader.segment_path
etag = self.connection.put_object(BACKUP_CONTAINER,
stream_reader.segment,
stream_reader)
segment_checksum = stream_reader.segment_checksum.hexdigest()
# Check each segment MD5 hash against swift etag
# Raise an error and mark backup as failed
if etag != segment_checksum:
LOG.error("Error saving data segment to swift. "
"ETAG: %(tag)s Segment MD5: %(checksum)s.",
{'tag': etag, 'checksum': segment_checksum})
return False, "Error saving data to Swift!", None, location
segment_results.append({
'path': path,
'etag': etag,
'size_bytes': stream_reader.segment_length
})
if six.PY3:
swift_checksum.update(segment_checksum.encode())
else:
swift_checksum.update(segment_checksum)
# All segments uploaded.
num_segments = len(segment_results)
LOG.debug('File uploaded in %s segments.', num_segments)
# An SLO will be generated if the backup was more than one segment in
# length.
large_object = num_segments > 1
# Meta data is stored as headers
if metadata is None:
metadata = {}
metadata.update(stream.metadata())
headers = {}
for key, value in metadata.items():
headers[self._set_attr(key)] = value
LOG.debug('Metadata headers: %s', str(headers))
if large_object:
LOG.info('Creating the manifest file.')
manifest_data = json.dumps(segment_results)
LOG.debug('Manifest contents: %s', manifest_data)
# The etag returned from the manifest PUT is the checksum of the
# manifest object (which is empty); this is not the checksum we
# want.
self.connection.put_object(BACKUP_CONTAINER,
filename,
manifest_data,
query_string='multipart-manifest=put')
# Validation checksum is the Swift Checksum
final_swift_checksum = swift_checksum.hexdigest()
else:
LOG.info('Backup fits in a single segment. Moving segment '
'%(segment)s to %(filename)s.',
{'segment': stream_reader.first_segment,
'filename': filename})
segment_result = segment_results[0]
# Just rename it via a special put copy.
headers['X-Copy-From'] = segment_result['path']
self.connection.put_object(BACKUP_CONTAINER,
filename, '',
headers=headers)
# Delete the old segment file that was copied
LOG.debug('Deleting the old segment file %s.',
stream_reader.first_segment)
self.connection.delete_object(BACKUP_CONTAINER,
stream_reader.first_segment)
final_swift_checksum = segment_result['etag']
# Validate the object by comparing checksums
# Get the checksum according to Swift
resp = self.connection.head_object(BACKUP_CONTAINER, filename)
# swift returns etag in double quotes
# e.g. '"dc3b0827f276d8d78312992cc60c2c3f"'
etag = resp['etag'].strip('"')
# Raise an error and mark backup as failed
if etag != final_swift_checksum:
LOG.error(
("Error saving data to swift. Manifest "
"ETAG: %(tag)s Swift MD5: %(checksum)s"),
{'tag': etag, 'checksum': final_swift_checksum})
return False, "Error saving data to Swift!", None, location
return (True, "Successfully saved data to Swift!",
final_swift_checksum, location)
def _explodeLocation(self, location):
storage_url = "/".join(location.split('/')[:-2])
container = location.split('/')[-2]
filename = location.split('/')[-1]
return storage_url, container, filename
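`_explodeLocation` splits the stored backup location back into its Swift components, relying on the `<storage_url>/<container>/<filename>` layout produced by `save()`. A standalone re-implementation with a made-up URL:

```python
def explode_location(location):
    """Split a backup location URL into (storage_url, container, filename)."""
    storage_url = "/".join(location.split("/")[:-2])
    container = location.split("/")[-2]
    filename = location.split("/")[-1]
    return storage_url, container, filename

url, container, filename = explode_location(
    "http://swift.example.com/v1/AUTH_abc/database_backups/b1.xbstream.gz")
print(container, filename)  # database_backups b1.xbstream.gz
```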
def _verify_checksum(self, etag, checksum):
etag_checksum = etag.strip('"')
if etag_checksum != checksum:
log_fmt = ("Original checksum: %(original)s does not match"
" the current checksum: %(current)s")
exc_fmt = _("Original checksum: %(original)s does not match"
" the current checksum: %(current)s")
msg_content = {
'original': etag_checksum,
'current': checksum}
LOG.error(log_fmt, msg_content)
raise SwiftDownloadIntegrityError(exc_fmt % msg_content)
return True
def load(self, location, backup_checksum):
"""Restore a backup from the input stream to the restore_location."""
storage_url, container, filename = self._explodeLocation(location)
headers, info = self.connection.get_object(container, filename,
resp_chunk_size=CHUNK_SIZE)
if CONF.verify_swift_checksum_on_restore:
self._verify_checksum(headers.get('etag', ''), backup_checksum)
return info
def _get_attr(self, original):
"""Get a friendly name from an object header key."""
key = original.replace('-', '_')
key = key.replace('x_object_meta_', '')
return key
def _set_attr(self, original):
"""Return a swift friendly header key."""
key = original.replace('_', '-')
return 'X-Object-Meta-%s' % key
def load_metadata(self, location, backup_checksum):
"""Load metadata from swift."""
storage_url, container, filename = self._explodeLocation(location)
headers = self.connection.head_object(container, filename)
if CONF.verify_swift_checksum_on_restore:
self._verify_checksum(headers.get('etag', ''), backup_checksum)
_meta = {}
for key, value in headers.items():
if key.startswith('x-object-meta'):
_meta[self._get_attr(key)] = value
return _meta
def save_metadata(self, location, metadata={}):
"""Save metadata to a swift object."""
storage_url, container, filename = self._explodeLocation(location)
headers = {}
for key, value in metadata.items():
headers[self._set_attr(key)] = value
LOG.info("Writing metadata: %s", str(headers))
self.connection.post_object(container, filename, headers=headers)


@@ -54,6 +54,8 @@ def map(engine, models):
Table('reservations', meta, autoload=True))
orm.mapper(models['backups'],
Table('backups', meta, autoload=True))
orm.mapper(models['backup_strategy'],
Table('backup_strategy', meta, autoload=True))
orm.mapper(models['security_groups'],
Table('security_groups', meta, autoload=True))
orm.mapper(models['security_group_rules'],


@@ -0,0 +1,46 @@
# Copyright 2020 Catalyst Cloud
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from sqlalchemy.schema import Column
from sqlalchemy.schema import Index
from sqlalchemy.schema import MetaData
from sqlalchemy.schema import UniqueConstraint
from trove.db.sqlalchemy.migrate_repo.schema import create_tables
from trove.db.sqlalchemy.migrate_repo.schema import DateTime
from trove.db.sqlalchemy.migrate_repo.schema import String
from trove.db.sqlalchemy.migrate_repo.schema import Table
meta = MetaData()
backup_strategy = Table(
'backup_strategy',
meta,
Column('id', String(36), primary_key=True, nullable=False),
Column('tenant_id', String(36), nullable=False),
Column('instance_id', String(36), nullable=False, default=''),
Column('backend', String(255), nullable=False),
Column('swift_container', String(255), nullable=True),
Column('created', DateTime()),
UniqueConstraint(
'tenant_id', 'instance_id',
name='UQ_backup_strategy_tenant_id_instance_id'),
Index("backup_strategy_tenant_id_instance_id", "tenant_id", "instance_id"),
)
def upgrade(migrate_engine):
meta.bind = migrate_engine
create_tables([backup_strategy])


@@ -741,8 +741,19 @@ class BaseMySqlApp(object):
user_token = context.auth_token
auth_url = CONF.service_credentials.auth_url
user_tenant = context.project_id
swift_metadata = (
f'datastore:{backup_info["datastore"]},'
f'datastore_version:{backup_info["datastore_version"]}'
)
swift_params = f'--swift-extra-metadata={swift_metadata}'
swift_container = backup_info.get('swift_container',
CONF.backup_swift_container)
if backup_info.get('swift_container'):
swift_params = (
f'{swift_params} '
f'--swift-container {swift_container}'
)
command = (
f'/usr/bin/python3 main.py --backup --backup-id={backup_id} '
@@ -751,7 +762,7 @@ class BaseMySqlApp(object):
f'--db-host=127.0.0.1 '
f'--os-token={user_token} --os-auth-url={auth_url} '
f'--os-tenant-id={user_tenant} '
f'{swift_params} '
f'{incremental}'
)
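For a concrete sense of what the hunk above produces, here is the `swift_params` construction run standalone with made-up values (`backup_info` and `default_container` stand in for the real request dict and `CONF.backup_swift_container`):

```python
backup_info = {"datastore": "mysql", "datastore_version": "5.7",
               "swift_container": "my-backups"}
default_container = "database_backups"  # stand-in for CONF.backup_swift_container

# Extra metadata is always passed to the backup container command.
swift_metadata = (
    f'datastore:{backup_info["datastore"]},'
    f'datastore_version:{backup_info["datastore_version"]}'
)
swift_params = f'--swift-extra-metadata={swift_metadata}'
# --swift-container is appended only when the request names a container
# explicitly; otherwise the configured default applies on the other end.
swift_container = backup_info.get('swift_container', default_container)
if backup_info.get('swift_container'):
    swift_params = f'{swift_params} --swift-container {swift_container}'

print(swift_params)
```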
@@ -792,6 +803,7 @@ class BaseMySqlApp(object):
'state': BackupState.COMPLETED,
})
else:
LOG.error(f'Cannot parse backup output: {result}')
backup_state.update({
'success': False,
'state': BackupState.FAILED,


@@ -1371,8 +1371,8 @@ class BackupTasks(object):
return container, prefix
@classmethod
def delete_files_from_swift(cls, context, container, filename):
container = container or CONF.backup_swift_container
client = clients.create_swift_client(context)
obj = client.head_object(container, filename)
if 'x-static-large-object' in obj:
@@ -1404,7 +1404,9 @@ class BackupTasks(object):
try:
filename = backup.filename
if filename:
BackupTasks.delete_files_from_swift(context,
backup.container_name,
filename)
except ValueError:
_delete(backup)
except ClientException as e:


@@ -267,14 +267,14 @@ class RebootTestBase(ActionTestBase):
poll_until(is_finished_rebooting, time_out=TIME_OUT_TIME)
def wait_for_status(self, status, timeout=60, sleep_time=5):
def is_status():
instance = self.instance
if instance.status in status:
return True
return False
poll_until(is_status, time_out=timeout, sleep_time=sleep_time)
@test(groups=[tests.DBAAS_API_INSTANCE_ACTIONS],
@@ -323,7 +323,9 @@ class StopTests(RebootTestBase):
def test_stop_mysql(self):
"""Stops MySQL by admin."""
instance_info.dbaas_admin.management.stop(self.instance_id)
# The instance status will only be updated by guest agent.
self.wait_for_status(['SHUTDOWN'], timeout=90, sleep_time=10)
@test(depends_on=[test_stop_mysql])
def test_volume_info_while_mysql_is_down(self): def test_volume_info_while_mysql_is_down(self):


@ -741,8 +741,9 @@ class GuestLogRunner(TestRunner):
self.admin_client, self.admin_client,
log_name, log_name,
expected_type=guest_log.LogType.SYS.name, expected_type=guest_log.LogType.SYS.name,
expected_status=guest_log.LogStatus.Partial.name, expected_status=[guest_log.LogStatus.Partial.name,
expected_published=1, expected_pending=1, guest_log.LogStatus.Published.name],
expected_published=1, expected_pending=None,
is_admin=True) is_admin=True)
def run_test_log_publish_again_sys(self):
@@ -751,9 +752,10 @@ class GuestLogRunner(TestRunner):
self.admin_client,
log_name,
expected_type=guest_log.LogType.SYS.name,
expected_status=[guest_log.LogStatus.Partial.name,
guest_log.LogStatus.Published.name],
expected_published=self._get_last_log_published(log_name) + 1,
expected_pending=None,
is_admin=True)
def run_test_log_generator_sys(self):


@@ -557,3 +557,48 @@ class OrderingTests(trove_testtools.TestCase):
actual = [b.name for b in backups]
expected = [u'one', u'two', u'three', u'four']
self.assertEqual(expected, actual)
class TestBackupStrategy(trove_testtools.TestCase):
def setUp(self):
super(TestBackupStrategy, self).setUp()
util.init_db()
self.context, self.instance_id = _prep_conf(timeutils.utcnow())
def test_create(self):
db_backstg = models.BackupStrategy.create(self.context,
self.instance_id,
'test-container')
self.addCleanup(models.BackupStrategy.delete, self.context,
self.context.project_id, self.instance_id)
self.assertEqual('test-container', db_backstg.swift_container)
def test_list(self):
models.BackupStrategy.create(self.context, self.instance_id,
'test_list')
self.addCleanup(models.BackupStrategy.delete, self.context,
self.context.project_id, self.instance_id)
db_backstgs = models.BackupStrategy.list(self.context,
self.context.project_id,
self.instance_id).all()
self.assertEqual(1, len(db_backstgs))
self.assertEqual('test_list', db_backstgs[0].swift_container)
def test_delete(self):
models.BackupStrategy.create(self.context, self.instance_id,
'test_delete')
db_backstgs = models.BackupStrategy.list(self.context,
self.context.project_id,
self.instance_id).all()
self.assertEqual(1, len(db_backstgs))
models.BackupStrategy.delete(self.context, self.context.project_id,
self.instance_id)
db_backstgs = models.BackupStrategy.list(self.context,
self.context.project_id,
self.instance_id).all()
self.assertEqual(0, len(db_backstgs))


@@ -14,15 +14,15 @@
import os
from tempfile import NamedTemporaryFile
from unittest import mock
from cinderclient import exceptions as cinder_exceptions
from cinderclient.v2 import volumes as cinderclient_volumes
import cinderclient.v2.client as cinderclient
from unittest.mock import call
from unittest.mock import MagicMock
from unittest.mock import Mock
from unittest.mock import patch
from unittest.mock import PropertyMock
from cinderclient import exceptions as cinder_exceptions
from cinderclient.v2 import volumes as cinderclient_volumes
import cinderclient.v2.client as cinderclient
import neutronclient.v2_0.client as neutronclient
from novaclient import exceptions as nova_exceptions
import novaclient.v2.flavors
@@ -999,7 +999,7 @@ class BackupTasksTest(trove_testtools.TestCase):
self.backup.id = 'backup_id'
self.backup.name = 'backup_test',
self.backup.description = 'test desc'
self.backup.location = 'http://xxx/z_CLOUD/container/12e48.xbstream.gz'
self.backup.instance_id = 'instance id'
self.backup.created = 'yesterday'
self.backup.updated = 'today'
@@ -1049,6 +1049,18 @@ class BackupTasksTest(trove_testtools.TestCase):
"backup should be in DELETE_FAILED status"
)
@patch('trove.common.clients.create_swift_client')
def test_delete_backup_delete_swift(self, mock_swift_client):
client_mock = MagicMock()
mock_swift_client.return_value = client_mock
taskmanager_models.BackupTasks.delete_backup(mock.ANY, self.backup.id)
client_mock.head_object.assert_called_once_with('container',
'12e48.xbstream.gz')
client_mock.delete_object.assert_called_once_with('container',
'12e48.xbstream.gz')
def test_parse_manifest(self):
manifest = 'container/prefix'
cont, prefix = taskmanager_models.BackupTasks._parse_manifest(manifest)