When resizing the volume for an instance which is the primary of a
replication cluster, Trove also resizes the volumes for all the
replicas automatically.
Change-Id: I2e719772fe7abc719255ea2a705d9ec342aced2a
Trove now supports resizing volumes without downtime. To use this
feature, the versions of Nova and Cinder need to be at least Pike, and
the config option ``cinder_service_type`` needs to be set to
``volumev3``. The cloud admin can disable this feature by setting
``online_volume_resize=False``; it is enabled by default.
Change-Id: I000a4e90800454972dd39f2f82d286571bc0b96c
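The gating logic above can be sketched as a small helper. This is an
illustrative sketch, not Trove's actual implementation; `conf` is a plain
dict standing in for the oslo.config options named in the note.

```python
# Hypothetical sketch: decide whether online volume resize can be used,
# mirroring the documented conditions (Cinder v3 service type plus the
# ``online_volume_resize`` option, which defaults to True).
def online_resize_enabled(conf):
    return (conf.get("online_volume_resize", True)
            and conf.get("cinder_service_type") == "volumev3")
```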
Support ``subnet_id`` and ``ip_address`` for creating instances. When
creating an instance, Trove checks for network conflicts between the
user's network and the management network; additionally, the cloud
admin is able to define other reserved networks by configuring
``reserved_network_cidrs``.
Change-Id: Icc4eece2f265cb5a5c48c4f1024a9189d11b4687
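The conflict check described above amounts to a CIDR overlap test. The
sketch below (the function name is hypothetical, not Trove's) shows the
idea with the stdlib ``ipaddress`` module:

```python
import ipaddress

# Illustrative sketch: reject a user subnet that overlaps the management
# network or any admin-defined ``reserved_network_cidrs`` entry.
def conflicts_with_reserved(user_cidr, management_cidr, reserved_cidrs=()):
    user = ipaddress.ip_network(user_cidr)
    for cidr in [management_cidr, *reserved_cidrs]:
        if user.overlaps(ipaddress.ip_network(cidr)):
            return True
    return False
```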
Include the address type in the instance API response.
* Deprecate the config option network_label_regex; as we no longer rely
  on Nova to get addresses, network names don't make any sense.
* Add 'addresses' to the instance API response; keep 'ip' as is but
  mark it deprecated in the API doc, so python-troveclient won't break.
Story: 2007562
Task: 39445
Change-Id: Ia0458b5ddae8959ce29c17e444e1a51a026283cd
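A sketch of how the two fields can coexist. The helper name and the exact
response shape are assumptions for illustration; only the field names
'addresses' and 'ip' come from the note.

```python
# Hypothetical shape of the new response fields: 'addresses' carries the
# address type, while the legacy 'ip' list is kept for compatibility.
def build_address_fields(addresses):
    # addresses: list of {"address": ..., "type": "private" | "public"}
    return {
        "addresses": addresses,
        "ip": [a["address"] for a in addresses],  # deprecated, kept as-is
    }
```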
* Hard delete the datastore_configuration_parameters table record.
* Make 'datastore_version_id' nullable for 'instances' table.
* Check if the datastore version is still being used before removal.
Story: 2007563
Task: 39451
Change-Id: I84e4a31f14f9327cc01ff2d699167d91112e1565
A new field named ``service_status_updated`` is added to the instance
API response, which could be used, for example, to validate whether the
instance's 'HEALTHY' status is stale or not.
Change-Id: Iabcfad81343a71304b843b3a7778486253220d20
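One way a client might use the new field, as a sketch. The 120-second
threshold and the function name are arbitrary examples, not Trove
defaults.

```python
from datetime import datetime, timedelta

# Sketch: treat a 'HEALTHY' status as stale when the
# ``service_status_updated`` timestamp is older than a threshold.
def is_status_stale(service_status_updated, now, max_age_seconds=120):
    return now - service_status_updated > timedelta(seconds=max_age_seconds)
```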
- 'HEALTHY' means the db service is responsive; 'ACTIVE' means the db
  service is alive.
- Remove the fakemodetests CI job; a similar testing task will be added
  in the future.
- Fix the periodic CI job.
- Remove MongoDB and related jobs.
Change-Id: I5abe9091ba203297dc87db5fba139179166321f7
Allow the cloud admin to control the security groups on the management
port of a Trove instance; a new config option
`management_security_groups` is introduced for that purpose.
Change-Id: I4b22b87d37792be700d4ec7f78a7ea479ddb5814
Story: 2006466
Task: 36395
This patch adds a new field 'instance_ids' to the payloads of two
cluster events:
- DBaaSClusterShrink (during start and end notification),
- DBaaSClusterGrow (during end notification).
Moreover, additional end notifications after growing and shrinking a
cluster have been added.
The purpose of this change is to enable better integration with tools
for monitoring resource usage.
Change-Id: I2c39b2c3bff65f88e46944eda22209bdc92803bc
Signed-off-by: Kasper Hasior <k.hasior@samsung.com>
Co-Authored-By: Kasper Hasior <k.hasior@samsung.com>
Story: #2005520
Task: #30639
Currently Trove doesn't support specifying a keypair when creating a db
instance; the SSH key is injected into the guest agent image at build
time, which makes it very hard to manage.
This patch adds a config option `nova_keypair` that is used as the
keypair name when creating db instances. The old way of building the
image will be changed in subsequent patches.
Change-Id: I41d4e41fc4bc413cdd48b8d761429b0204481932
Story: #2005429
Task: #30462
Use `management_networks` instead. `management_networks` will be used
as admin networks which will be attached to Trove instances
automatically.
Change-Id: I5c6004b568c3a428bc0f0a8b0e36665d3c5b3087
This adds a basic framework for trove-status upgrade check commands.
For now, only a "check_placeholder" check is implemented.
Real checks can be added to this tool in the future.
Change-Id: Idfeab4c06cba6f841c17ab6e255a29e8707bfa55
Story: 2003657
Task: 26162
Currently, listing instances only returns basic information about the
entities. To get details, one needs to query the instance "show"
endpoint for each instance separately, which is inefficient and exposes
the API to a heavier load.
There are use cases in which we want to obtain detailed information
about all instances, in particular in services integrating with Trove.
For example, the Vitrage project requires this information to build
vertices and edges in the resource graph for RCA analysis.
Change-Id: I33252cce41c27cc7302c860dde1f6448ecdf3991
Signed-off-by: Bartosz Zurkowski <b.zurkowski@samsung.com>
Currently we are not able to specify the endpoint_type for the Neutron,
Nova and Cinder clients with a single tenant. publicURL is configured
by default, but it would be nice to have the possibility to choose
something else.
Change-Id: Ibb791cacc0e08de2d87b4348f84c9e573849ec51
Closes-Bug: #1776229
Currently, when creating a MongoDB cluster, mongos and configsvr use
the volume_size of the replica-set nodes. But mongos and configsvr are
not data nodes, so they don't need as much volume space as data nodes
do. This patch helps the user specify the number, the volume size and
the volume type of mongos/configsvr with the extended_properties[1]
argument when creating a MongoDB cluster. Currently, the supported
parameters are num_configsvr, num_mongos, configsvr_volume_size,
configsvr_volume_type, mongos_volume_size and mongos_volume_type.
[1] https://review.openstack.org/#/c/206931/
Closes-Bug: #1734907
Signed-off-by: zhanggang <zhanggang@cmss.chinamobile.com>
Change-Id: Ie48f3961b21f926f983c6713a76b0492952cf4c7
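The supported keys listed above can be validated with a simple
allow-list check. This is an illustrative sketch; Trove's real request
schema handling is more involved than this.

```python
# The six extended_properties keys documented in the note above.
ALLOWED_EXTENDED_PROPERTIES = {
    "num_configsvr", "num_mongos",
    "configsvr_volume_size", "configsvr_volume_type",
    "mongos_volume_size", "mongos_volume_type",
}

def validate_extended_properties(props):
    # Reject any key outside the documented set.
    unknown = set(props) - ALLOWED_EXTENDED_PROPERTIES
    if unknown:
        raise ValueError("unsupported extended_properties: %s"
                         % ", ".join(sorted(unknown)))
    return props
```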
When promoting one slave to be the new master in a replication group,
previously the old master was attached to the new one right after the
new master came up. For MariaDB, when attaching the old master to the
new one, new GTIDs may be created on the old master and may also be
synced to some of the other replicas, as they are still connecting to
the old master. These GTIDs do not exist on the new master, making
those slaves diverge from it. After that, when a diverged slave
connects to the new master, 'START SLAVE' fails with logs like:
[ERROR] Error reading packet from server: Error: connecting slave
requested to start from GTID X-XXXXXXXXXX-XX, which is not in the
master's binlog. Since the master's binlog contains GTIDs with
higher sequence numbers, it probably means that the slave has
diverged due to executing extra erroneous transactions
(server_errno=1236)
And these slaves will be left orphaned and errored after
promote_to_replica_source finishes.
Attaching the other replicas to the new master before dealing with the
old master fixes this problem, as well as the failure of the
trove-scenario-mariadb-multi Zuul job.
Closes-Bug: #1754539
Change-Id: Ib9c01b07c832f117f712fd613ae55c7de3561116
Signed-off-by: Zhao Chao <zhaochao1984@gmail.com>
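The corrected ordering can be sketched as a pure function. This is a
toy model of the sequencing only (the function name is made up); the
real promotion logic lives in the MariaDB replication strategy.

```python
# Simplified sketch of the fixed promotion order: attach the other
# replicas to the new master first, and handle the old master last, so
# stray GTIDs from the old master cannot reach the replicas.
def promotion_order(new_master, old_master, replicas):
    others = [r for r in replicas if r not in (new_master, old_master)]
    return others + [old_master]
```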
As no content is returned to the client when a root-disable request
succeeds, an HTTP 204 (No Content) response is more appropriate.
The Redis root-disable scenario test fails because it returns HTTP 204
while all API-related tests expect HTTP 200. Although changing the
Redis root-disable API would be a much simpler way to resolve the
problem, migrating from HTTP 200 to HTTP 204 is the better solution.
Related tests and documents are also updated accordingly.
APIImpact
Change-Id: If732a578009fd35436e810fb7ceceefd1ada3778
Signed-off-by: Zhao Chao <zhaochao1984@gmail.com>
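The semantics boil down to: success carries no body, so 204 fits. A
minimal sketch, assuming a made-up handler name and a 404 for the
failure path (the failure status is an assumption, not from the note):

```python
# Minimal sketch of the new behavior: a successful root-disable returns
# HTTP 204 with an empty body. These are not Trove's real handler names.
def root_disable_response(succeeded):
    if succeeded:
        return 204, b""  # No Content: nothing to return to the client
    return 404, b"root not enabled"  # assumed failure status
```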
Current Nova server volume support is broken. Nova has also declared
that 'os-volumes_boot' will be deprecated in the future. As creating
volumes with cinderclient has been supported for a long time, we can
simply drop support for Nova server volumes.
This patch also migrates to the new block_device_mapping_v2 parameter
of the Nova server creation API.
Closes-Bug: #1673408
Change-Id: I74d86241a5a0d0b1804b959313432168f68faf89
Signed-off-by: Zhao Chao <zhaochao1984@gmail.com>
In trove/common/strategies/cluster/experimental/galera_common/api.py,
the "shrink" method of the GaleraCommonCluster class should call
DBInstance.find_all with the argument deleted=False; otherwise it may
fail to raise a ClusterShrinkMustNotLeaveClusterEmpty exception.
The same problem exists in galera_common/taskmanager.py: the
"shrink_cluster" method of GaleraCommonClusterTasks should call
DBInstance.find_all() with deleted=False to exclude deleted nodes and
avoid a NotFound error.
Change-Id: Ibb377630b830da06485fc17a1a723dc1055d9b01
Closes-Bug: 1699953
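A toy illustration of why filtering on deleted=False matters: if
soft-deleted rows are counted, a shrink can empty the cluster without
tripping the emptiness check. The helper names below are made up for
the sketch.

```python
# Keep only rows that are not soft-deleted (the effect of deleted=False).
def live_nodes(db_instances):
    return [i for i in db_instances if not i.get("deleted")]

# Refuse a shrink that would leave no live nodes behind.
def check_shrink_leaves_nodes(db_instances, removed_ids):
    remaining = [i for i in live_nodes(db_instances)
                 if i["id"] not in removed_ids]
    if not remaining:
        raise ValueError("ClusterShrinkMustNotLeaveClusterEmpty")
    return remaining
```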
Redis configuration validation for the 'repl-backlog-size' parameter
uses a wrong MIN value of '0'. When the parameter is set to a value
less than 16384, that value appears in redis.conf[1], but 'config get
*' reports 16384[2], because the minimum value in Redis is 16384[3].
So the MIN value is changed to 16384.
[1]: repl-backlog-size 0
[2]: 59) "repl-backlog-size"
60) "16384"
[3]:58f79e2ff4/src/server.h (L110)
Closes-Bug: #1697596
Change-Id: I81cb1c02943edf0af3d7bf67ff2f083a4c07d518
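The corrected bound can be expressed as a one-line validation. A sketch
with a hypothetical function name, reflecting the 16384-byte floor the
note describes:

```python
# Redis silently floors repl-backlog-size at 16384 bytes, so validation
# should use that as the minimum rather than 0.
REPL_BACKLOG_SIZE_MIN = 16384

def validate_repl_backlog_size(value):
    if value < REPL_BACKLOG_SIZE_MIN:
        raise ValueError("repl-backlog-size must be >= %d"
                         % REPL_BACKLOG_SIZE_MIN)
    return value
```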
Server-side support for the new 'reapply' command, which reapplies a
given module to all instances that it had previously been applied to.
Originally, a module designated live-update would automatically be
re-applied whenever it was updated. Adding a specific command, however,
gives operators/users more control over how the new payload is
distributed. Old 'modules' can be left in place if desired, or updated
with the new command.
Scenario tests were updated to test the new command.
DocImpact: update documentation to reflect module-reapply command
Change-Id: I4aea674ebe873a96ed22b5714263d0eea532a4ca
Depends-On: Ic4cc9e9085cb40f1afbec05caeb04886137027a4
Closes-Bug: #1554903
Fixed the module-instances command to return a paginated list of
instances. Also added a --count_only flag to the command to return a
summary of the applied instances based on the MD5 of the module (this
is most useful for live_update modules, to see which instances haven't
been updated).
Also cleaned up the code a bit, moving some methods into files where
they make more sense (and cause fewer potential collisions during
import).
Change-Id: I963e0f03875a1b93e2e1214bcb6580c507fa45fe
Closes-Bug: #1554900
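The --count_only summary described above is essentially a group-by on
the module MD5. A sketch under that assumption; the function name and
record shape are made up for illustration:

```python
from collections import Counter

# Group applied instances by the MD5 of the module they carry, so stale
# live_update applications stand out as separate buckets.
def count_by_md5(applied):
    # applied: list of {"instance_id": ..., "md5": ...}
    return dict(Counter(a["md5"] for a in applied))
```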