Subcloud Name Reconfiguration
This change adds the capability to rename a subcloud after bootstrap or during a subcloud rehome operation. A field was added to the database to separate the region name from the subcloud name. The region name determines the subcloud reference in the OpenStack core, through which the endpoints of a given subcloud can be accessed. Since the region name cannot be changed, this commit maintains a unique region name in UUID format and allows the subcloud to be renamed when necessary without any endpoint impact. The region name is randomly generated when the subcloud is created and applies only to future subclouds. For systems with existing subclouds, the region keeps its day-0 value, that is, the region keeps the same name as the subcloud, but those subclouds can still be renamed. This topic involves changes to dcmanager, dcmanager-client and the GUI. To preserve the region name reference needed by the cert-monitor, a mechanism was added to determine whether a request comes from the cert-monitor.

Usage for subcloud rename:
dcmanager subcloud update <subcloud-name> --name <new-name>

Usage for subcloud rehoming:
dcmanager subcloud add --name <subcloud-name> --migrate ...

Note: The upgrade test from StarlingX 8 -> 9 for this commit is deferred until upgrade functionality in master is restored.
Any issue found during upgrade test will be addressed in a separate commit Test Plan: PASS: Run dcmanager subcloud passing subcommands: - add/delete/migrate/list/show/show --detail - errors/manage/unmanage/reinstall/reconfig - update/deploy PASS: Run dcmanager subcloud add supplying --name parameter and validate the operation is not allowed PASS: Run dcmanager supplying subcommands: - kube/patch/prestage strategies PASS: Run dcmanager to apply patch and remove it PASS: Run dcmanager subcloud-backup: - create/delete/restore/show/upload PASS: Run subcloud-group: - add/delete/list/list-subclouds/show/update PASS: Run dcmanager subcloud strategy for: - patch/kubernetes/firmware PASS: Run dcmanager subcloud update command passing --name parameter supplying the following values: - current subcloud name (not changed) - different existing subcloud name PASS: Run dcmanager to migrate a subcloud passing --name parameter supplying a new subcloud name PASS: Run dcmanager to migrate a subcloud without --name parameter PASS: Run dcmanager to migrate a subcloud passing --name parameter supplying a new subcloud name and different subcloud name in bootstrap file PASS: Test dcmanager API response using cURL command line to validate new region name field PASS: Run full DC sanity and regression Story: 2010788 Task: 48217 Signed-off-by: Cristian Mondo <cristian.mondo@windriver.com> Change-Id: Id04f42504b8e325d9ec3880c240fe4a06e3a20b7
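The response examples in this change show region names such as "bbadb3e8e2ab473792c80ef09c5a12a4", i.e. 32 hexadecimal characters with no dashes. A minimal sketch of generating such a UUID-format region name follows; `generate_region_name` is a hypothetical helper, and the actual dcmanager implementation may differ in where and how it stores the value.

```python
import uuid


def generate_region_name() -> str:
    """Generate a random, unique region identifier.

    Assumption: the region name is a UUID4 rendered as 32 hex
    characters without dashes, matching the API examples in this
    change (e.g. "bbadb3e8e2ab473792c80ef09c5a12a4").
    """
    return uuid.uuid4().hex


region = generate_region_name()
print(region)  # 32-character lowercase hex string
```

Because the region is generated once at subcloud creation and never changed, the user-visible subcloud name can be renamed freely without touching the OpenStack endpoint references keyed on the region.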
Commit: a6a6b84258 (parent: 9d1c9ccd23)
@@ -78,6 +78,7 @@ Response
 - deploy-status: deploy_status
 - backup-status: backup_status
 - backup-datetime: backup_datetime
+- region-name: region_name
 - openstack-installed: openstack_installed
 - management-state: management_state
 - systemcontroller-gateway-ip: systemcontroller_gateway_ip

@@ -173,6 +174,7 @@ Request Example
 - management-gateway-ip: management_gateway_ip
 - management-start-ip: management_start_ip
 - management-end-ip: management_end_ip
+- region-name: region_name
 
 Response Example
 ----------------

@@ -283,6 +285,7 @@ This operation does not accept a request body.
 - deploy-status: deploy_status
 - backup-status: backup_status
 - backup-datetime: backup_datetime
+- region-name: region_name
 - openstack-installed: openstack_installed
 - management-state: management_state
 - systemcontroller-gateway-ip: systemcontroller_gateway_ip

@@ -314,6 +317,8 @@ Modifies a specific subcloud
 
 The attributes of a subcloud which are modifiable:
 
+- name
+
 - description
 
 - location

@@ -349,6 +354,7 @@ serviceUnavailable (503)
 .. rest_parameters:: parameters.yaml
 
 - subcloud: subcloud_uri
+- name: subcloud_name
 - description: subcloud_description
 - location: subcloud_location
 - management-state: subcloud_management_state

@@ -382,6 +388,7 @@ Request Example
 - deploy-status: deploy_status
 - backup-status: backup_status
 - backup-datetime: backup_datetime
+- region-name: region_name
 - openstack-installed: openstack_installed
 - management-state: management_state
 - systemcontroller-gateway-ip: systemcontroller_gateway_ip

@@ -526,6 +533,7 @@ Request Example
 - deploy-status: deploy_status
 - backup-status: backup_status
 - backup-datetime: backup_datetime
+- region-name: region_name
 - openstack-installed: openstack_installed
 - management-state: management_state
 - systemcontroller-gateway-ip: systemcontroller_gateway_ip

@@ -857,6 +865,7 @@ This operation does not accept a request body.
 - deploy-status: deploy_status
 - backup-status: backup_status
 - backup-datetime: backup_datetime
+- region-name: region_name
 - openstack-installed: openstack_installed
 - management-state: management_state
 - systemcontroller-gateway-ip: systemcontroller_gateway_ip

@@ -1025,6 +1034,7 @@ Request Example
 - deploy-status: deploy_status
 - backup-status: backup_status
 - backup-datetime: backup_datetime
+- region-name: region_name
 - openstack-installed: openstack_installed
 - management-state: management_state
 - systemcontroller-gateway-ip: systemcontroller_gateway_ip

@@ -1136,6 +1146,7 @@ Request Example
 - deploy-status: deploy_status
 - backup-status: backup_status
 - backup-datetime: backup_datetime
+- region-name: region_name
 - openstack-installed: openstack_installed
 - management-state: management_state
 - systemcontroller-gateway-ip: systemcontroller_gateway_ip

@@ -1830,6 +1841,7 @@ Request Example
 - backup-status: backup_status
 - backup-datetime: backup_datetime
 - error-description: error_description
+- region-name: region_name
 - management-subnet: management_subnet
 - management-start-ip: management_start_ip
 - management-end-ip: management_end_ip

@@ -1897,6 +1909,7 @@ Request Example
 - backup-status: backup_status
 - backup-datetime: backup_datetime
 - error-description: error_description
+- region-name: region_name
 - management-subnet: management_subnet
 - management-start-ip: management_start_ip
 - management-end-ip: management_end_ip

@@ -1963,6 +1976,7 @@ Request Example
 - deploy-status: deploy_status
 - backup-status: backup_status
 - backup-datetime: backup_datetime
+- region-name: region_name
 - openstack-installed: openstack_installed
 - management-state: management_state
 - systemcontroller-gateway-ip: systemcontroller_gateway_ip

@@ -2036,6 +2050,7 @@ Request Example
 - deploy-status: deploy_status
 - backup-status: backup_status
 - backup-datetime: backup_datetime
+- region-name: region_name
 - openstack-installed: openstack_installed
 - management-state: management_state
 - systemcontroller-gateway-ip: systemcontroller_gateway_ip

@@ -2103,6 +2118,7 @@ Request Example
 - deploy-status: deploy_status
 - backup-status: backup_status
 - backup-datetime: backup_datetime
+- region-name: region_name
 - openstack-installed: openstack_installed
 - management-state: management_state
 - systemcontroller-gateway-ip: systemcontroller_gateway_ip

@@ -2170,6 +2186,7 @@ Request Example
 - deploy-status: deploy_status
 - backup-status: backup_status
 - backup-datetime: backup_datetime
+- region-name: region_name
 - openstack-installed: openstack_installed
 - management-state: management_state
 - systemcontroller-gateway-ip: systemcontroller_gateway_ip

@@ -2246,6 +2263,7 @@ Request Example
 - backup-status: backup_status
 - backup-datetime: backup_datetime
 - error-description: error_description
+- region-name: region_name
 - management-subnet: management_subnet
 - management-start-ip: management_start_ip
 - management-end-ip: management_end_ip
@@ -9,6 +9,7 @@
     "deploy-status": "aborting-install",
     "backup-status": null,
     "backup-datetime": null,
+    "region-name": "bbadb3e8e2ab473792c80ef09c5a12a4",
     "description": "Ottawa Site",
     "group_id": 1,
     "location": "YOW",

@@ -9,6 +9,7 @@
     "deploy-status": "complete",
     "backup-status": "complete",
     "backup-datetime": "2023-05-02 11:23:58.132134",
+    "region-name": "bbadb3e8e2ab473792c80ef09c5a12a4",
     "description": "Ottawa Site",
     "group_id": 1,
     "location": "YOW",

@@ -9,6 +9,7 @@
     "deploy-status": "complete",
     "backup-status": "complete",
     "backup-datetime": "2023-05-02 11:23:58.132134",
+    "region-name": "bbadb3e8e2ab473792c80ef09c5a12a4",
     "description": "Ottawa Site",
     "group_id": 1,
     "location": "YOW",

@@ -10,6 +10,7 @@
     "backup-status": null,
     "backup-datetime": null,
     "error-description": "No errors present",
+    "region-name": "bbadb3e8e2ab473792c80ef09c5a12a4",
     "management-subnet": "192.168.102.0/24",
     "management-start-ip": "192.168.102.2",
     "management-end-ip": "192.168.102.50",

@@ -9,6 +9,7 @@
     "deploy-status": "pre-install",
     "backup-status": null,
     "backup-datetime": null,
+    "region-name": "bbadb3e8e2ab473792c80ef09c5a12a4",
     "description": "Ottawa Site",
     "group_id": 1,
     "location": "YOW",

@@ -10,6 +10,7 @@
     "backup-status": null,
     "backup-datetime": null,
     "error-description": "No errors present",
+    "region-name": "bbadb3e8e2ab473792c80ef09c5a12a4",
     "management-subnet": "192.168.102.0/24",
     "management-start-ip": "192.168.102.2",
     "management-end-ip": "192.168.102.50",

@@ -14,6 +14,7 @@
     "error-description": "",
     "backup-status": "complete",
     "backup-datetime": "2022-07-08 11:23:58.132134",
+    "region-name": "bbadb3e8e2ab473792c80ef09c5a12a4",
     "description": "Ottawa Site",
     "group_id": 1,
     "location": "YOW",

@@ -14,6 +14,7 @@
     "error-description": "",
     "backup-status": "complete",
     "backup-datetime": "2022-07-08 11:23:58.132134",
+    "region-name": "bbadb3e8e2ab473792c80ef09c5a12a4",
     "description": "Ottawa Site",
     "group_id": 1,
     "location": "YOW",

@@ -12,6 +12,7 @@
     "deploy-status": "complete",
     "backup-status": "complete",
     "backup-datetime": "2022-07-08 11:23:58.132134",
+    "region-name": "bbadb3e8e2ab473792c80ef09c5a12a4",
     "openstack-installed": false,
     "management-state": "managed",
     "systemcontroller-gateway-ip": "192.168.204.101",

@@ -21,7 +21,8 @@
     "data_install": null,
     "data_upgrade": null,
     "oam_floating_ip": "192.168.101.2",
-    "deploy_config_sync_status": "Deployment: configurations up-to-date"
+    "deploy_config_sync_status": "Deployment: configurations up-to-date",
+    "region-name": "bbadb3e8e2ab473792c80ef09c5a12a4",
     "endpoint_sync_status": [
         {
             "sync_status": "in-sync",

@@ -11,6 +11,7 @@
     "deploy-status": "complete",
     "backup-status": "complete",
     "backup-datetime": "2022-07-08 11:23:58.132134",
+    "region-name": "bbadb3e8e2ab473792c80ef09c5a12a4",
     "description": "Ottawa Site",
     "group_id": 1,
     "location": "YOW",

@@ -9,6 +9,7 @@
     "deploy-status": "complete",
     "backup-status": "complete",
     "backup-datetime": "2022-07-08 11:23:58.132134",
+    "region-name": "bbadb3e8e2ab473792c80ef09c5a12a4",
     "description": "Ottawa Site",
     "group_id": 1,
     "location": "YOW",

@@ -12,6 +12,7 @@
     "deploy-status": "complete",
     "backup-status": "complete",
     "backup-datetime": "2022-07-08 11:23:58.132134",
+    "region-name": "bbadb3e8e2ab473792c80ef09c5a12a4",
     "openstack-installed": false,
     "management-state": "managed",
     "systemcontroller-gateway-ip": "192.168.204.101",

@@ -13,5 +13,6 @@
     "management-gateway-ip": "192.168.205.1",
     "management-end-ip": "192.168.205.160",
     "id": 4,
-    "name": "subcloud7"
+    "name": "subcloud7",
+    "region-name": "b098933127ed408e9ad7f6e81c587edb"
 }
@@ -171,6 +171,8 @@ class PhasedSubcloudDeployController(object):
 
         payload = get_create_payload(request)
 
+        psd_common.subcloud_region_create(payload, context)
+
         psd_common.pre_deploy_create(payload, context, request)
 
         try:

@@ -201,7 +201,7 @@ class SubcloudBackupController(object):
                     and request_entity.type == 'subcloud'):
                 # Check the system health only if the command was issued
                 # to a single subcloud to avoid huge delays.
-                if not utils.is_subcloud_healthy(subcloud.name):
+                if not utils.is_subcloud_healthy(subcloud.region_name):
                     msg = _('Subcloud %s must be in good health for '
                             'subcloud-backup create.' % subcloud.name)
                     pecan.abort(400, msg)
@@ -404,6 +404,10 @@ class SubcloudsController(object):
             first_time = False
 
         for s in subcloud_list:
+            # This is to reduce changes on cert-mon
+            # Overwrites the name value with region
+            if utils.is_req_from_cert_mon_agent(request):
+                s['name'] = s['region-name']
             result['subclouds'].append(s)
 
         return result
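The hunk above swaps the subcloud name for its region name whenever the list request comes from the cert-monitor agent. The commit does not show how `is_req_from_cert_mon_agent` is implemented; the sketch below assumes it inspects a request header carrying an agent marker, which is a hypothetical mechanism chosen only to illustrate the name-overwrite flow.

```python
# Assumption: cert-mon identifies itself via a User-Agent header value;
# the real dcmanager helper may use a different marker entirely.
CERT_MON_USER_AGENT = "cert-mon-agent"  # hypothetical value


def is_req_from_cert_mon_agent(request) -> bool:
    """Return True if the request appears to come from cert-monitor."""
    return request.headers.get("User-Agent") == CERT_MON_USER_AGENT


class FakeRequest:
    """Stand-in for a pecan request object (headers only)."""

    def __init__(self, headers):
        self.headers = headers


subclouds = [
    {"name": "subcloud1", "region-name": "bbadb3e8e2ab473792c80ef09c5a12a4"},
]
req = FakeRequest({"User-Agent": CERT_MON_USER_AGENT})
for s in subclouds:
    # cert-mon keeps addressing subclouds by region, so the name field
    # is overwritten with the region name for its requests only
    if is_req_from_cert_mon_agent(req):
        s["name"] = s["region-name"]
```

This keeps cert-monitor unchanged: it continues to receive the immutable region identifier in the `name` field, while every other client sees the renameable subcloud name.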
@@ -421,11 +425,19 @@ class SubcloudsController(object):
         except exceptions.SubcloudNotFound:
             pecan.abort(404, _('Subcloud not found'))
         else:
-            # Look up subcloud by name
             try:
-                subcloud = db_api.subcloud_get_by_name(context,
-                                                       subcloud_ref)
-            except exceptions.SubcloudNameNotFound:
+                # This method replaces subcloud_get_by_name, since it
+                # allows to lookup the subcloud either by region name
+                # or subcloud name.
+                # When the request comes from the cert-monitor, it is
+                # based on the region name (which is UUID format).
+                # Whereas, if the request comes from a client other
+                # than cert-monitor, it will do the lookup based on
+                # the subcloud name.
+                subcloud = db_api.subcloud_get_by_name_or_region_name(
+                    context,
+                    subcloud_ref)
+            except exceptions.SubcloudNameOrRegionNameNotFound:
                 pecan.abort(404, _('Subcloud not found'))
 
             subcloud_id = subcloud.id
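The diff comments explain that `subcloud_get_by_name_or_region_name` accepts either the user-visible subcloud name or the UUID-format region name that cert-monitor uses. A minimal sketch of that dual lookup follows; the real implementation queries the dcmanager database, whereas this stand-in searches an in-memory list, and the exception class here is a simplified placeholder.

```python
class SubcloudNameOrRegionNameNotFound(Exception):
    """Simplified placeholder for the dcmanager exception of the same name."""


def subcloud_get_by_name_or_region_name(subclouds, ref):
    """Look up a subcloud by either its name or its region name.

    cert-monitor passes the UUID-format region name; other clients pass
    the user-visible subcloud name -- accept either form.
    """
    for sc in subclouds:
        if ref in (sc["name"], sc["region_name"]):
            return sc
    raise SubcloudNameOrRegionNameNotFound(ref)


db = [{"name": "subcloud1", "region_name": "bbadb3e8e2ab473792c80ef09c5a12a4"}]
by_name = subcloud_get_by_name_or_region_name(db, "subcloud1")
by_region = subcloud_get_by_name_or_region_name(db, "bbadb3e8e2ab473792c80ef09c5a12a4")
```

Both lookups resolve to the same record, which is why the rename feature needs no special casing in cert-monitor: its region-based references stay valid regardless of the current subcloud name.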
@@ -448,6 +460,8 @@ class SubcloudsController(object):
 
         self._append_static_err_content(subcloud_dict)
 
+        subcloud_region = subcloud.region_name
+        subcloud_dict.pop('region-name')
         if detail is not None:
             oam_floating_ip = "unavailable"
             deploy_config_sync_status = "unknown"

@@ -455,19 +469,20 @@ class SubcloudsController(object):
 
             # Get the keystone client that will be used
             # for _get_deploy_config_sync_status and _get_oam_addresses
-            sc_ks_client = psd_common.get_ks_client(subcloud.name)
+            sc_ks_client = psd_common.get_ks_client(subcloud_region)
             oam_addresses = self._get_oam_addresses(context,
-                                                    subcloud.name, sc_ks_client)
+                                                    subcloud_region, sc_ks_client)
             if oam_addresses is not None:
                 oam_floating_ip = oam_addresses.oam_floating_ip
 
             deploy_config_state = self._get_deploy_config_sync_status(
-                context, subcloud.name, sc_ks_client)
+                context, subcloud_region, sc_ks_client)
             if deploy_config_state is not None:
                 deploy_config_sync_status = deploy_config_state
 
             extra_details = {"oam_floating_ip": oam_floating_ip,
-                             "deploy_config_sync_status": deploy_config_sync_status}
+                             "deploy_config_sync_status": deploy_config_sync_status,
+                             "region_name": subcloud_region}
 
             subcloud_dict.update(extra_details)
         return subcloud_dict

@@ -481,6 +496,8 @@ class SubcloudsController(object):
             restcomm.extract_credentials_for_policy())
         context = restcomm.extract_context_from_environ()
 
+        bootstrap_sc_name = psd_common.get_bootstrap_subcloud_name(request)
+
         payload = psd_common.get_request_data(request, None,
                                               SUBCLOUD_ADD_GET_FILE_CONTENTS)
 

@@ -488,10 +505,19 @@ class SubcloudsController(object):
 
         psd_common.validate_secondary_parameter(payload, request)
 
+        # Compares to match both supplied and bootstrap name param
+        # of the subcloud if migrate is on
+        if payload.get('migrate') == 'true' and bootstrap_sc_name is not None:
+            if bootstrap_sc_name != payload.get('name'):
+                pecan.abort(400, _('subcloud name does not match the '
+                                   'name defined in bootstrap file'))
+
         # No need sysadmin_password when add a secondary subcloud
         if 'secondary' not in payload:
             psd_common.validate_sysadmin_password(payload)
 
+        psd_common.subcloud_region_create(payload, context)
+
         psd_common.pre_deploy_create(payload, context, request)
 
         try:

@@ -537,12 +563,21 @@ class SubcloudsController(object):
         except exceptions.SubcloudNotFound:
             pecan.abort(404, _('Subcloud not found'))
         else:
-            # Look up subcloud by name
             try:
-                subcloud = db_api.subcloud_get_by_name(context,
-                                                       subcloud_ref)
-            except exceptions.SubcloudNameNotFound:
+                # This method replaces subcloud_get_by_name, since it
+                # allows to lookup the subcloud either by region name
+                # or subcloud name.
+                # When the request comes from the cert-monitor, it is
+                # based on the region name (which is UUID format).
+                # Whereas, if the request comes from a client other
+                # than cert-monitor, it will do the lookup based on
+                # the subcloud name.
+                subcloud = db_api.subcloud_get_by_name_or_region_name(
+                    context,
+                    subcloud_ref)
+            except exceptions.SubcloudNameOrRegionNameNotFound:
                 pecan.abort(404, _('Subcloud not found'))
 
             subcloud_id = subcloud.id
 
             if verb is None:
@@ -551,6 +586,43 @@ class SubcloudsController(object):
             if not payload:
                 pecan.abort(400, _('Body required'))
 
+            # Rename the subcloud
+            new_subcloud_name = payload.get('name')
+            if new_subcloud_name is not None:
+                # To be renamed the subcloud must be in unmanaged and valid deploy state
+                if subcloud.management_state != dccommon_consts.MANAGEMENT_UNMANAGED \
+                        or subcloud.deploy_status not in consts.STATES_FOR_SUBCLOUD_RENAME:
+                    msg = ('Subcloud %s must be unmanaged and in a valid deploy state '
+                           'for the subcloud rename operation.' % subcloud.name)
+                    pecan.abort(400, msg)
+
+                # Validates new name
+                if not utils.is_subcloud_name_format_valid(new_subcloud_name):
+                    pecan.abort(400, _("new name must contain alphabetic characters"))
+
+                # Checks if new subcloud name is the same as the current subcloud
+                if new_subcloud_name == subcloud.name:
+                    pecan.abort(400, _('Provided subcloud name %s is the same as the '
+                                       'current subcloud %s. A different name is '
+                                       'required to rename the subcloud' %
+                                       (new_subcloud_name, subcloud.name)))
+
+                error_msg = ('Unable to rename subcloud %s with their region %s to %s' %
+                             (subcloud.name, subcloud.region_name, new_subcloud_name))
+                try:
+                    LOG.info("Renaming subcloud %s to: %s\n" % (subcloud.name,
+                                                                new_subcloud_name))
+                    sc = self.dcmanager_rpc_client.rename_subcloud(context,
+                                                                   subcloud_id,
+                                                                   subcloud.name,
+                                                                   new_subcloud_name)
+                    subcloud.name = sc['name']
+                except RemoteError as e:
+                    LOG.error(error_msg)
+                    pecan.abort(422, e.value)
+                except Exception:
+                    LOG.error(error_msg)
+                    pecan.abort(500, _('Unable to rename subcloud'))
+
             # Check if exist any network reconfiguration parameters
             reconfigure_network = any(payload.get(value) is not None for value in (
                 SUBCLOUD_MANDATORY_NETWORK_PARAMS))
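The rename path enforces three preconditions: the subcloud must be unmanaged and in an allowed deploy state, the new name must be valid, and it must differ from the current name. A hedged sketch of those checks follows; the constant values and the name-format rule here are simplified assumptions, not the exact dcmanager definitions of `STATES_FOR_SUBCLOUD_RENAME` or `is_subcloud_name_format_valid`.

```python
# Assumed values -- the real constants live in dcmanager's consts modules.
MANAGEMENT_UNMANAGED = "unmanaged"
STATES_FOR_SUBCLOUD_RENAME = ("complete", "pre-install")  # assumed subset


def is_subcloud_name_format_valid(name: str) -> bool:
    # Assumption: must contain at least one alphabetic character,
    # inferred from the API error "new name must contain alphabetic
    # characters"; the real check may be stricter.
    return any(c.isalpha() for c in name)


def validate_rename(subcloud: dict, new_name: str) -> None:
    """Raise ValueError if the rename preconditions are not met."""
    if (subcloud["management_state"] != MANAGEMENT_UNMANAGED
            or subcloud["deploy_status"] not in STATES_FOR_SUBCLOUD_RENAME):
        raise ValueError("subcloud must be unmanaged and in a valid "
                         "deploy state for the rename operation")
    if not is_subcloud_name_format_valid(new_name):
        raise ValueError("new name must contain alphabetic characters")
    if new_name == subcloud["name"]:
        raise ValueError("a different name is required to rename the subcloud")


sc = {"name": "subcloud1",
      "management_state": "unmanaged",
      "deploy_status": "complete"}
validate_rename(sc, "edge-site-2")  # passes all three checks
```

Only after these checks pass does the controller issue the `rename_subcloud` RPC, so a managed or mid-deploy subcloud can never be renamed out from under an in-flight operation.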
@ -562,6 +634,7 @@ class SubcloudsController(object):
|
|||||||
system_controller_mgmt_pool = psd_common.get_network_address_pool()
|
system_controller_mgmt_pool = psd_common.get_network_address_pool()
|
||||||
# Required parameters
|
# Required parameters
|
||||||
payload['name'] = subcloud.name
|
payload['name'] = subcloud.name
|
||||||
|
payload['region_name'] = subcloud.region_name
|
||||||
payload['system_controller_network'] = (
|
payload['system_controller_network'] = (
|
||||||
system_controller_mgmt_pool.network)
|
system_controller_mgmt_pool.network)
|
||||||
payload['system_controller_network_prefix'] = (
|
payload['system_controller_network_prefix'] = (
|
||||||
@ -715,7 +788,7 @@ class SubcloudsController(object):
|
|||||||
'Please use /v1.0/subclouds/{subcloud}/redeploy'))
|
'Please use /v1.0/subclouds/{subcloud}/redeploy'))
|
||||||
|
|
||||||
elif verb == 'update_status':
|
elif verb == 'update_status':
|
||||||
res = self.updatestatus(subcloud.name)
|
res = self.updatestatus(subcloud.name, subcloud.region_name)
|
||||||
return res
|
return res
|
||||||
elif verb == 'prestage':
|
elif verb == 'prestage':
|
||||||
if utils.subcloud_is_secondary_state(subcloud.deploy_status):
|
if utils.subcloud_is_secondary_state(subcloud.deploy_status):
|
||||||
@ -816,10 +889,11 @@ class SubcloudsController(object):
|
|||||||
LOG.exception(e)
|
LOG.exception(e)
|
||||||
pecan.abort(500, _('Unable to delete subcloud'))
|
pecan.abort(500, _('Unable to delete subcloud'))
|
||||||
|
|
||||||
def updatestatus(self, subcloud_name):
|
def updatestatus(self, subcloud_name, subcloud_region):
|
||||||
"""Update subcloud sync status
|
"""Update subcloud sync status
|
||||||
|
|
||||||
:param subcloud_name: name of the subcloud
|
:param subcloud_name: name of the subcloud
|
||||||
|
:param subcloud_region: name of the subcloud region
|
||||||
:return: json result object for the operation on success
|
:return: json result object for the operation on success
|
||||||
"""
|
"""
|
||||||
|
|
||||||
@ -848,7 +922,7 @@ class SubcloudsController(object):
|
|||||||
LOG.info('update %s set %s=%s' % (subcloud_name, endpoint, status))
|
LOG.info('update %s set %s=%s' % (subcloud_name, endpoint, status))
|
||||||
context = restcomm.extract_context_from_environ()
|
context = restcomm.extract_context_from_environ()
|
||||||
self.dcmanager_state_rpc_client.update_subcloud_endpoint_status(
|
self.dcmanager_state_rpc_client.update_subcloud_endpoint_status(
|
||||||
context, subcloud_name, endpoint, status)
|
context, subcloud_name, subcloud_region, endpoint, status)
|
||||||
|
|
||||||
result = {'result': 'OK'}
|
result = {'result': 'OK'}
|
||||||
return result
|
return result
|
||||||
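The controller change above routes state updates by the immutable region name while keeping the (now renamable) subcloud name for logging. A minimal standalone sketch of that flow — the RPC client and endpoint/status values here are stand-ins, not the real dcmanager objects:

```python
# Sketch only: StubStateRpcClient stands in for dcmanager's state RPC
# client; the endpoint and status strings are illustrative values.
class StubStateRpcClient:
    def __init__(self):
        self.calls = []

    def update_subcloud_endpoint_status(self, context, subcloud_name,
                                        subcloud_region, endpoint_type,
                                        sync_status):
        # The region, not the mutable name, identifies the subcloud.
        self.calls.append((subcloud_name, subcloud_region,
                           endpoint_type, sync_status))


def updatestatus(rpc_client, subcloud_name, subcloud_region,
                 endpoint='dc-cert', status='in-sync'):
    """Mirror of the new signature: name for logging, region for routing."""
    rpc_client.update_subcloud_endpoint_status(
        None, subcloud_name, subcloud_region, endpoint, status)
    return {'result': 'OK'}


client = StubStateRpcClient()
result = updatestatus(client, 'subcloud1',
                      'ac62f555-9386-42f1-b3a1-51ecb709409d')
```

Because existing systems keep region == name, callers that stored the day-0 name keep working; only newly created subclouds get a UUID-format region.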
@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2021-2022 Wind River Systems, Inc.
+# Copyright (c) 2021-2023 Wind River Systems, Inc.
 #
 # SPDX-License-Identifier: Apache-2.0
 #
@@ -20,21 +20,22 @@ class Auditor(object):
         self.state_rpc_client = dcmanager_state_rpc_client
         self.endpoint_type = endpoint_type
 
-    def _set_subcloud_sync_status(self, sc_name, sc_sync_status):
+    def _set_subcloud_sync_status(self, sc_name, sc_region, sc_sync_status):
         """Update the sync status for endpoint."""
         self.state_rpc_client.update_subcloud_endpoint_status(
             self.context,
             subcloud_name=sc_name,
+            subcloud_region=sc_region,
             endpoint_type=self.endpoint_type,
             sync_status=sc_sync_status)
 
-    def set_subcloud_endpoint_in_sync(self, sc_name):
+    def set_subcloud_endpoint_in_sync(self, sc_name, sc_region):
         """Set the endpoint sync status of this subcloud to be in sync"""
-        self._set_subcloud_sync_status(sc_name, dccommon_consts.SYNC_STATUS_IN_SYNC)
+        self._set_subcloud_sync_status(sc_name, sc_region, dccommon_consts.SYNC_STATUS_IN_SYNC)
 
-    def set_subcloud_endpoint_out_of_sync(self, sc_name):
+    def set_subcloud_endpoint_out_of_sync(self, sc_name, sc_region):
         """Set the endpoint sync status of this subcloud to be out of sync"""
-        self._set_subcloud_sync_status(sc_name,
+        self._set_subcloud_sync_status(sc_name, sc_region,
                                        dccommon_consts.SYNC_STATUS_OUT_OF_SYNC)
 
     @abc.abstractmethod
@@ -1,5 +1,5 @@
 # Copyright 2017 Ericsson AB.
-# Copyright (c) 2017-2022 Wind River Systems, Inc.
+# Copyright (c) 2017-2023 Wind River Systems, Inc.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -75,11 +75,12 @@ class FirmwareAudit(object):
         self.state_rpc_client = dcmanager_state_rpc_client
         self.audit_count = 0
 
-    def _update_subcloud_sync_status(self, sc_name, sc_endpoint_type,
+    def _update_subcloud_sync_status(self, sc_name, sc_region, sc_endpoint_type,
                                      sc_status):
         self.state_rpc_client.update_subcloud_endpoint_status(
             self.context,
             subcloud_name=sc_name,
+            subcloud_region=sc_region,
             endpoint_type=sc_endpoint_type,
             sync_status=sc_status)
 
@@ -225,19 +226,20 @@ class FirmwareAudit(object):
             return False
         return True
 
-    def subcloud_firmware_audit(self, subcloud_name, audit_data):
+    def subcloud_firmware_audit(self, subcloud_name, subcloud_region, audit_data):
         LOG.info('Triggered firmware audit for: %s.' % subcloud_name)
         if not audit_data:
             self._update_subcloud_sync_status(
-                subcloud_name, dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
+                subcloud_name,
+                subcloud_region, dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
                 dccommon_consts.SYNC_STATUS_IN_SYNC)
             LOG.debug('No images to audit, exiting firmware audit')
             return
         try:
-            sc_os_client = OpenStackDriver(region_name=subcloud_name,
+            sc_os_client = OpenStackDriver(region_name=subcloud_region,
                                            region_clients=None).keystone_client
             endpoint = sc_os_client.endpoint_cache.get_endpoint('sysinv')
-            sysinv_client = SysinvClient(subcloud_name, sc_os_client.session,
+            sysinv_client = SysinvClient(subcloud_region, sc_os_client.session,
                                          endpoint=endpoint)
         except (keystone_exceptions.EndpointNotFound,
                 keystone_exceptions.ConnectFailure,
@@ -267,7 +269,8 @@ class FirmwareAudit(object):
             LOG.info("No enabled devices on the subcloud %s,"
                      "exiting firmware audit" % subcloud_name)
             self._update_subcloud_sync_status(
-                subcloud_name, dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
+                subcloud_name,
+                subcloud_region, dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
                 dccommon_consts.SYNC_STATUS_IN_SYNC)
             return
 
@@ -312,10 +315,12 @@ class FirmwareAudit(object):
 
         if out_of_sync:
             self._update_subcloud_sync_status(
-                subcloud_name, dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
+                subcloud_name,
+                subcloud_region, dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
                 dccommon_consts.SYNC_STATUS_OUT_OF_SYNC)
         else:
             self._update_subcloud_sync_status(
-                subcloud_name, dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
+                subcloud_name,
+                subcloud_region, dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
                 dccommon_consts.SYNC_STATUS_IN_SYNC)
         LOG.info('Firmware audit completed for: %s.' % subcloud_name)
@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2021-2022 Wind River Systems, Inc.
+# Copyright (c) 2021-2023 Wind River Systems, Inc.
 #
 # SPDX-License-Identifier: Apache-2.0
 #
@@ -46,20 +46,21 @@ class KubeRootcaUpdateAudit(Auditor):
         """
         return []
 
-    def subcloud_audit(self, subcloud_name, region_one_audit_data):
+    def subcloud_audit(self, subcloud_name, subcloud_region, region_one_audit_data):
         """Perform an audit of kube root CA update info in a subcloud.
 
         :param subcloud_name: the name of the subcloud
+        :param subcloud_region: the region of the subcloud
         :param region_one_audit_data: ignored. Always an empty list
         """
         LOG.info("Triggered %s audit for: %s" % (self.audit_type,
                                                  subcloud_name))
         # check for a particular alarm in the subcloud
         try:
-            sc_os_client = OpenStackDriver(region_name=subcloud_name,
+            sc_os_client = OpenStackDriver(region_name=subcloud_region,
                                            region_clients=None)
             session = sc_os_client.keystone_client.session
-            fm_client = FmClient(subcloud_name, session)
+            fm_client = FmClient(subcloud_region, session)
         except (keystone_exceptions.EndpointNotFound,
                 keystone_exceptions.ConnectFailure,
                 keystone_exceptions.ConnectTimeout,
@@ -75,8 +76,8 @@ class KubeRootcaUpdateAudit(Auditor):
                 out_of_sync = True
                 break
         if out_of_sync:
-            self.set_subcloud_endpoint_out_of_sync(subcloud_name)
+            self.set_subcloud_endpoint_out_of_sync(subcloud_name, subcloud_region)
         else:
-            self.set_subcloud_endpoint_in_sync(subcloud_name)
+            self.set_subcloud_endpoint_in_sync(subcloud_name, subcloud_region)
         LOG.info("%s audit completed for: %s" % (self.audit_type,
                                                  subcloud_name))
@@ -1,5 +1,5 @@
 # Copyright 2017 Ericsson AB.
-# Copyright (c) 2017-2022 Wind River Systems, Inc.
+# Copyright (c) 2017-2023 Wind River Systems, Inc.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -55,11 +55,12 @@ class KubernetesAudit(object):
         self.state_rpc_client = dcmanager_state_rpc_client
         self.audit_count = 0
 
-    def _update_subcloud_sync_status(self, sc_name, sc_endpoint_type,
+    def _update_subcloud_sync_status(self, sc_name, sc_region, sc_endpoint_type,
                                      sc_status):
         self.state_rpc_client.update_subcloud_endpoint_status(
             self.context,
             subcloud_name=sc_name,
+            subcloud_region=sc_region,
             endpoint_type=sc_endpoint_type,
             sync_status=sc_status)
 
@@ -90,19 +91,20 @@ class KubernetesAudit(object):
         LOG.debug("RegionOne kubernetes versions: %s" % region_one_data)
         return region_one_data
 
-    def subcloud_kubernetes_audit(self, subcloud_name, audit_data):
+    def subcloud_kubernetes_audit(self, subcloud_name, subcloud_region, audit_data):
         LOG.info('Triggered kubernetes audit for: %s' % subcloud_name)
         if not audit_data:
             self._update_subcloud_sync_status(
-                subcloud_name, dccommon_consts.ENDPOINT_TYPE_KUBERNETES,
+                subcloud_name,
+                subcloud_region, dccommon_consts.ENDPOINT_TYPE_KUBERNETES,
                 dccommon_consts.SYNC_STATUS_IN_SYNC)
             LOG.debug('No region one audit data, exiting kubernetes audit')
             return
         try:
-            sc_os_client = OpenStackDriver(region_name=subcloud_name,
+            sc_os_client = OpenStackDriver(region_name=subcloud_region,
                                            region_clients=None).keystone_client
             endpoint = sc_os_client.endpoint_cache.get_endpoint('sysinv')
-            sysinv_client = SysinvClient(subcloud_name, sc_os_client.session,
+            sysinv_client = SysinvClient(subcloud_region, sc_os_client.session,
                                          endpoint=endpoint)
         except (keystone_exceptions.EndpointNotFound,
                 keystone_exceptions.ConnectFailure,
@@ -152,10 +154,12 @@ class KubernetesAudit(object):
 
         if out_of_sync:
             self._update_subcloud_sync_status(
-                subcloud_name, dccommon_consts.ENDPOINT_TYPE_KUBERNETES,
+                subcloud_name,
+                subcloud_region, dccommon_consts.ENDPOINT_TYPE_KUBERNETES,
                 dccommon_consts.SYNC_STATUS_OUT_OF_SYNC)
         else:
             self._update_subcloud_sync_status(
-                subcloud_name, dccommon_consts.ENDPOINT_TYPE_KUBERNETES,
+                subcloud_name,
+                subcloud_region, dccommon_consts.ENDPOINT_TYPE_KUBERNETES,
                 dccommon_consts.SYNC_STATUS_IN_SYNC)
         LOG.info('Kubernetes audit completed for: %s' % subcloud_name)
@@ -1,5 +1,5 @@
 # Copyright 2017 Ericsson AB.
-# Copyright (c) 2017-2022 Wind River Systems, Inc.
+# Copyright (c) 2017-2023 Wind River Systems, Inc.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -62,11 +62,12 @@ class PatchAudit(object):
         self.state_rpc_client = dcmanager_state_rpc_client
         self.audit_count = 0
 
-    def _update_subcloud_sync_status(self, sc_name, sc_endpoint_type,
+    def _update_subcloud_sync_status(self, sc_name, sc_region, sc_endpoint_type,
                                      sc_status):
         self.state_rpc_client.update_subcloud_endpoint_status(
             self.context,
             subcloud_name=sc_name,
+            subcloud_region=sc_region,
             endpoint_type=sc_endpoint_type,
             sync_status=sc_status)
 
@@ -132,19 +133,19 @@ class PatchAudit(object):
         return PatchAuditData(regionone_patches, applied_patch_ids,
                               committed_patch_ids, regionone_software_version)
 
-    def subcloud_patch_audit(self, subcloud_name, audit_data, do_load_audit):
+    def subcloud_patch_audit(self, subcloud_name, subcloud_region, audit_data, do_load_audit):
         LOG.info('Triggered patch audit for: %s.' % subcloud_name)
         try:
-            sc_os_client = OpenStackDriver(region_name=subcloud_name,
+            sc_os_client = OpenStackDriver(region_name=subcloud_region,
                                            region_clients=None).keystone_client
             session = sc_os_client.session
             patching_endpoint = sc_os_client.endpoint_cache.get_endpoint('patching')
             sysinv_endpoint = sc_os_client.endpoint_cache.get_endpoint('sysinv')
             patching_client = PatchingClient(
-                subcloud_name, session,
+                subcloud_region, session,
                 endpoint=patching_endpoint)
             sysinv_client = SysinvClient(
-                subcloud_name, session,
+                subcloud_region, session,
                 endpoint=sysinv_endpoint)
         except (keystone_exceptions.EndpointNotFound,
                 keystone_exceptions.ConnectFailure,
@@ -227,11 +228,13 @@ class PatchAudit(object):
 
         if out_of_sync:
             self._update_subcloud_sync_status(
-                subcloud_name, dccommon_consts.ENDPOINT_TYPE_PATCHING,
+                subcloud_name,
+                subcloud_region, dccommon_consts.ENDPOINT_TYPE_PATCHING,
                 dccommon_consts.SYNC_STATUS_OUT_OF_SYNC)
         else:
             self._update_subcloud_sync_status(
-                subcloud_name, dccommon_consts.ENDPOINT_TYPE_PATCHING,
+                subcloud_name,
+                subcloud_region, dccommon_consts.ENDPOINT_TYPE_PATCHING,
                 dccommon_consts.SYNC_STATUS_IN_SYNC)
 
         # Check subcloud software version every other audit cycle
@@ -251,16 +254,19 @@ class PatchAudit(object):
 
             if subcloud_software_version == audit_data.software_version:
                 self._update_subcloud_sync_status(
-                    subcloud_name, dccommon_consts.ENDPOINT_TYPE_LOAD,
+                    subcloud_name,
+                    subcloud_region, dccommon_consts.ENDPOINT_TYPE_LOAD,
                     dccommon_consts.SYNC_STATUS_IN_SYNC)
             else:
                 self._update_subcloud_sync_status(
-                    subcloud_name, dccommon_consts.ENDPOINT_TYPE_LOAD,
+                    subcloud_name,
+                    subcloud_region, dccommon_consts.ENDPOINT_TYPE_LOAD,
                     dccommon_consts.SYNC_STATUS_OUT_OF_SYNC)
         else:
             # As upgrade is still in progress, set the subcloud load
             # status as out-of-sync.
             self._update_subcloud_sync_status(
-                subcloud_name, dccommon_consts.ENDPOINT_TYPE_LOAD,
+                subcloud_name,
+                subcloud_region, dccommon_consts.ENDPOINT_TYPE_LOAD,
                 dccommon_consts.SYNC_STATUS_OUT_OF_SYNC)
         LOG.info('Patch audit completed for: %s.' % subcloud_name)
@@ -153,7 +153,7 @@ class SubcloudAuditWorkerManager(manager.Manager):
             # Create a new greenthread for each subcloud to allow the audits
             # to be done in parallel. If there are not enough greenthreads
             # in the pool, this will block until one becomes available.
-            self.subcloud_workers[subcloud.name] = \
+            self.subcloud_workers[subcloud.region_name] = \
                 self.thread_group_manager.start(self._do_audit_subcloud,
                                                 subcloud,
                                                 update_subcloud_state,
@@ -204,12 +204,13 @@ class SubcloudAuditWorkerManager(manager.Manager):
                       'audit_fail_count for subcloud: %s' % subcloud.name)
 
     def _update_subcloud_availability(self, subcloud_name,
+                                      subcloud_region,
                                       availability_status=None,
                                       update_state_only=False,
                                       audit_fail_count=None):
         try:
             self.state_rpc_client.update_subcloud_availability(
-                self.context, subcloud_name, availability_status,
+                self.context, subcloud_name, subcloud_region, availability_status,
                 update_state_only, audit_fail_count)
             LOG.info('Notifying dcmanager-state, subcloud:%s, availability:%s' %
                      (subcloud_name,
@@ -339,7 +340,7 @@ class SubcloudAuditWorkerManager(manager.Manager):
         db_api.subcloud_audits_end_audit(self.context,
                                          subcloud.id, audits_done)
         # Remove the worker for this subcloud
-        self.subcloud_workers.pop(subcloud.name, None)
+        self.subcloud_workers.pop(subcloud.region_name, None)
         LOG.debug("PID: %s, done auditing subcloud: %s." %
                   (self.pid, subcloud.name))
 
@@ -361,6 +362,7 @@ class SubcloudAuditWorkerManager(manager.Manager):
         avail_status_current = subcloud.availability_status
         audit_fail_count = subcloud.audit_fail_count
         subcloud_name = subcloud.name
+        subcloud_region = subcloud.region_name
         audits_done = list()
         failures = list()
 
@@ -371,7 +373,7 @@ class SubcloudAuditWorkerManager(manager.Manager):
         fm_client = None
         avail_to_set = dccommon_consts.AVAILABILITY_OFFLINE
         try:
-            os_client = OpenStackDriver(region_name=subcloud_name,
+            os_client = OpenStackDriver(region_name=subcloud_region,
                                         thread_name='subcloud-audit',
                                         region_clients=['fm', 'sysinv'])
             sysinv_client = os_client.sysinv_client
@@ -452,6 +454,7 @@ class SubcloudAuditWorkerManager(manager.Manager):
                      (avail_to_set, subcloud_name))
            self._update_subcloud_availability(
                subcloud_name,
+               subcloud_region,
                availability_status=avail_to_set,
                audit_fail_count=audit_fail_count)
 
@@ -470,6 +473,7 @@ class SubcloudAuditWorkerManager(manager.Manager):
                      % subcloud_name)
            self._update_subcloud_availability(
                subcloud_name,
+               subcloud_region,
                availability_status=avail_status_current,
                update_state_only=True)
 
@@ -488,6 +492,7 @@ class SubcloudAuditWorkerManager(manager.Manager):
         if do_patch_audit and patch_audit_data:
             try:
                 self.patch_audit.subcloud_patch_audit(subcloud_name,
+                                                      subcloud_region,
                                                       patch_audit_data,
                                                       do_load_audit)
                 audits_done.append('patch')
@@ -504,6 +509,7 @@ class SubcloudAuditWorkerManager(manager.Manager):
         if do_firmware_audit:
             try:
                 self.firmware_audit.subcloud_firmware_audit(subcloud_name,
+                                                            subcloud_region,
                                                             firmware_audit_data)
                 audits_done.append('firmware')
             except Exception:
@@ -514,6 +520,7 @@ class SubcloudAuditWorkerManager(manager.Manager):
             try:
                 self.kubernetes_audit.subcloud_kubernetes_audit(
                     subcloud_name,
+                    subcloud_region,
                     kubernetes_audit_data)
                 audits_done.append('kubernetes')
             except Exception:
@@ -524,6 +531,7 @@ class SubcloudAuditWorkerManager(manager.Manager):
             try:
                 self.kube_rootca_update_audit.subcloud_audit(
                     subcloud_name,
+                    subcloud_region,
                     kube_rootca_update_audit_data)
                 audits_done.append('kube-rootca-update')
             except Exception:
@@ -536,7 +544,7 @@ class SubcloudAuditWorkerManager(manager.Manager):
         # audits_done to be empty:
         try:
             self._audit_subcloud_openstack_app(
-                subcloud_name, sysinv_client, subcloud.openstack_installed)
+                subcloud_region, sysinv_client, subcloud.openstack_installed)
         except Exception:
             LOG.exception(failmsg % (subcloud.name, 'openstack'))
             failures.append('openstack')
@@ -418,3 +418,9 @@ ALTERNATE_DEPLOY_PLAYBOOK_DIR = ALTERNATE_DEPLOY_FILES_DIR + '/playbooks'
 DEPLOY_PLAYBOOK_POSTFIX = 'deployment-manager.yaml'
 
 SUPPORTED_UPGRADES_METADATA_FILE_PATH = '/usr/rootdirs/opt/upgrades/metadata.xml'
+
+# Required for subcloud name configuration
+CERT_MON_HTTP_AGENT = 'cert-mon/1.0'
+OS_REGION_NAME = "OS_REGION_NAME"
+STATES_FOR_SUBCLOUD_RENAME = [DEPLOY_STATE_DONE,
+                              PRESTAGE_STATE_COMPLETE]
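The new `STATES_FOR_SUBCLOUD_RENAME` constant implies that a subcloud can only be renamed once deployment or prestaging has settled. A sketch of that gating — the state strings below are stand-ins, not necessarily the exact dcmanager constant values:

```python
# Stand-in state values; the real constants live in dcmanager's consts.py.
DEPLOY_STATE_DONE = 'complete'
PRESTAGE_STATE_COMPLETE = 'prestage-complete'
STATES_FOR_SUBCLOUD_RENAME = [DEPLOY_STATE_DONE,
                              PRESTAGE_STATE_COMPLETE]


def rename_allowed(deploy_status):
    """A subcloud may only be renamed from a settled deploy state."""
    return deploy_status in STATES_FOR_SUBCLOUD_RENAME


allowed = rename_allowed(DEPLOY_STATE_DONE)   # settled -> rename permitted
blocked = rename_allowed('bootstrapping')     # in progress -> rejected
```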
@@ -115,6 +115,18 @@ class SubcloudNameNotFound(NotFound):
     message = _("Subcloud with name %(name)s doesn't exist.")
 
 
+class SubcloudRegionNameNotFound(NotFound):
+    message = _("Subcloud with region name %(region_name)s doesn't exist.")
+
+
+class SubcloudNameOrRegionNameNotFound(NotFound):
+    message = _("Subcloud with name or region name %(name)s doesn't exist.")
+
+
+class SubcloudOrRegionNameAlreadyExists(Conflict):
+    message = _("Subcloud with name or region name %(name)s already exist.")
+
+
 class SubcloudNotOnline(DCManagerException):
     message = _("Subcloud is not online.")
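The new exception classes follow dcmanager's existing pattern: a `message` template interpolated with keyword arguments at construction time. A self-contained sketch, with the base classes and the `_` translation helper simplified away:

```python
# Simplified stand-ins for dcmanager's exception hierarchy; the real bases
# carry extra behavior (kwargs storage, logging) not reproduced here.
class DCManagerException(Exception):
    message = "An unknown exception occurred."

    def __init__(self, **kwargs):
        super().__init__(self.message % kwargs)


class NotFound(DCManagerException):
    pass


class Conflict(DCManagerException):
    pass


class SubcloudRegionNameNotFound(NotFound):
    message = "Subcloud with region name %(region_name)s doesn't exist."


class SubcloudOrRegionNameAlreadyExists(Conflict):
    message = "Subcloud with name or region name %(name)s already exist."


err = SubcloudRegionNameNotFound(region_name='subcloud9-region')
```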
@@ -11,6 +11,7 @@ import typing
 
 import netaddr
 from oslo_log import log as logging
+from oslo_utils import uuidutils
 import pecan
 import tsconfig.tsconfig as tsc
 import yaml
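The new `oslo_utils.uuidutils` import supports generating the UUID-format region name described in the commit message. `uuidutils.generate_uuid()` is a thin wrapper over the standard library, so an equivalent sketch without oslo is:

```python
import uuid


def generate_unique_region():
    # Equivalent of oslo's uuidutils.generate_uuid(): a random dashed
    # UUID4 string, used as the subcloud's immutable region identifier.
    return str(uuid.uuid4())


region = generate_unique_region()
```

Because the region is random rather than derived from the name, the name can change later without disturbing any Keystone endpoint registered under the region.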
@@ -730,7 +731,18 @@ def upload_deploy_config_file(request, payload):
         pecan.abort(400, _("No %s file uploaded" % consts.DEPLOY_CONFIG))
 
     file_item.file.seek(0, os.SEEK_SET)
-    contents = file_item.file.read()
+    file_lines = file_item.file.readlines()
+
+    # Updates the OS_REGION_NAME param which is required for deployment
+    contents = ""
+    for line in file_lines:
+        dec_line = line.decode('utf8')
+        if consts.OS_REGION_NAME in dec_line:
+            os_reg_item = dec_line.split(":")
+            dec_line = os_reg_item[0] + ": " + payload['region_name'] + "\n"
+        contents = contents + dec_line
+    contents = contents.encode()
+
     # the deploy config needs to upload to the override location
     fn = get_config_file_path(payload['name'], consts.DEPLOY_CONFIG)
     upload_config_file(contents, fn, consts.DEPLOY_CONFIG)
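The line-rewriting loop added to `upload_deploy_config_file` can be exercised standalone. This sketch reproduces its logic over an in-memory deploy config, with file handling and the `consts` module stubbed out:

```python
OS_REGION_NAME = "OS_REGION_NAME"  # mirrors the new consts.OS_REGION_NAME


def patch_region_name(raw, region_name):
    """Rewrite any OS_REGION_NAME line to carry the generated region."""
    contents = ""
    for line in raw.decode('utf8').splitlines(keepends=True):
        if OS_REGION_NAME in line:
            # Keep only the key, drop the old value (same split as the diff).
            key = line.split(":")[0]
            line = key + ": " + region_name + "\n"
        contents = contents + line
    return contents.encode()


patched = patch_region_name(
    b"namespace: default\nOS_REGION_NAME: subcloud1\n",
    "0f90ce9f-33e9-44f6-9b9f-2a1b54ae2b15")
```

Note the diff splits on every `:` but only keeps the first field, so values containing colons are simply replaced along with the rest of the line.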
@@ -844,6 +856,9 @@ def add_subcloud_to_database(context, payload):
     if 'install_values' in payload:
         data_install = json.dumps(payload['install_values'])
 
+    LOG.info("Creating subcloud %s with region: %s", payload.get('name'),
+             payload.get('region_name'))
+
     subcloud = db_api.subcloud_create(
         context,
         payload['name'],
@@ -857,6 +872,7 @@ def add_subcloud_to_database(context, payload):
         payload['systemcontroller_gateway_address'],
         consts.DEPLOY_STATE_NONE,
         consts.ERROR_DESC_EMPTY,
+        payload['region_name'],
         False,
         group_id,
         data_install=data_install)
@@ -1023,3 +1039,118 @@ def pre_deploy_bootstrap(context: RequestContext, payload: dict,
     # again:
     validate_system_controller_patch_status("bootstrap")
     validate_k8s_version(payload)
+
+
+def get_bootstrap_subcloud_name(request: pecan.Request):
+    bootstrap_values = request.POST.get(consts.BOOTSTRAP_VALUES)
+    bootstrap_sc_name = None
+    if bootstrap_values is not None:
+        bootstrap_values.file.seek(0, os.SEEK_SET)
+        contents = bootstrap_values.file.read()
+        data = yaml.safe_load(contents.decode('utf8'))
+        bootstrap_sc_name = data.get('name')
+
+    return bootstrap_sc_name
+
+
+def get_region_value_from_subcloud(payload: dict):
+    subcloud_region = None
+    # It connects to the subcloud via the bootstrap-address IP and tries
+    # to get the region from it
+    if payload['bootstrap-address'] is not None:
+        try:
+            subcloud_region = utils.\
+                get_region_from_subcloud_address(payload)
+            LOG.info("Retrieved region value from subcloud to be migrated: %s"
+                     % subcloud_region)
+            if subcloud_region is None:
+                msg = ("Cannot find subcloud's region name from address: %s"
+                       % payload['bootstrap-address'])
+                LOG.error(msg)
+                raise exceptions.InvalidParameterValue(err=msg)
+        except Exception:
+            LOG.error("Unable to retrieve the region value from subcloud "
+                      "address %s" % payload['bootstrap-address'])
+            raise
+    return subcloud_region
+
+
+def is_migrate_scenario(payload: dict):
+    migrate = False
+    migrate_str = payload.get('migrate')
+
+    if migrate_str is not None:
+        if migrate_str == "true":
+            migrate = True
+    return migrate
+
+
+def generate_subcloud_unique_region(context: RequestContext, payload: dict):
+    LOG.debug("Begin generate subcloud unique region for subcloud %s"
+              % payload['name'])
+
+    is_migrate = is_migrate_scenario(payload)
+    migrate_sc_region = None
+
+    # If migration flag is present, tries to connect to subcloud to
+    # get the region value
+    if is_migrate:
+        LOG.debug("The scenario matches that of the subcloud migration, "
|
||||||
|
"therefore it will try to obtain the value of the "
|
||||||
|
"region from subcloud %s..." % payload['name'])
|
||||||
|
migrate_sc_region = get_region_value_from_subcloud(payload)
|
||||||
|
else:
|
||||||
|
LOG.debug("The scenario matches that of creating a new subcloud, "
|
||||||
|
"so a region will be generated randomly for "
|
||||||
|
"subcloud %s..." % payload['name'])
|
||||||
|
while True:
|
||||||
|
# If migrate flag is not present, creates a random region value
|
||||||
|
if not is_migrate:
|
||||||
|
subcloud_region = uuidutils.generate_uuid().replace("-", "")
|
||||||
|
else:
|
||||||
|
# In the migration/rehome scenario uses the region value
|
||||||
|
# returned by queried subcloud
|
||||||
|
subcloud_region = migrate_sc_region
|
||||||
|
# Lookup region to check if exists
|
||||||
|
try:
|
||||||
|
db_api.subcloud_get_by_region_name(context,
|
||||||
|
subcloud_region)
|
||||||
|
LOG.info("Subcloud region: %s already exists. "
|
||||||
|
"Generating new one..." % (subcloud_region))
|
||||||
|
# In the migration scenario, it is intended to use the
|
||||||
|
# same region that the current subcloud has, therefore
|
||||||
|
# another region value cannot be generated.
|
||||||
|
if is_migrate:
|
||||||
|
LOG.error("Subcloud region to migrate: %s already exists "
|
||||||
|
"and it is not allowed to generate a new region "
|
||||||
|
"for a subcloud migration" % (subcloud_region))
|
||||||
|
raise exceptions.SubcloudAlreadyExists(
|
||||||
|
region_name=subcloud_region)
|
||||||
|
except exceptions.SubcloudRegionNameNotFound:
|
||||||
|
break
|
||||||
|
except Exception:
|
||||||
|
message = "Unable to generate subcloud region"
|
||||||
|
LOG.error(message)
|
||||||
|
raise
|
||||||
|
if not is_migrate:
|
||||||
|
LOG.info("Generated region for new subcloud %s: %s"
|
||||||
|
% (payload.get('name'), subcloud_region))
|
||||||
|
else:
|
||||||
|
LOG.info("Region for subcloud %s to be migrated: %s"
|
||||||
|
% (payload.get('name'), subcloud_region))
|
||||||
|
return subcloud_region
|
||||||
|
|
||||||
|
|
||||||
|
def subcloud_region_create(payload: dict, context: RequestContext):
|
||||||
|
try:
|
||||||
|
# Generates a unique region value
|
||||||
|
payload['region_name'] = generate_subcloud_unique_region(context,
|
||||||
|
payload)
|
||||||
|
except Exception:
|
||||||
|
# For logging purpose only
|
||||||
|
msg = "Unable to generate or retrieve region value"
|
||||||
|
if not is_migrate_scenario(payload):
|
||||||
|
msg = "Unable to generate region value to update deploy \
|
||||||
|
config for subcloud %s" % payload.get('name')
|
||||||
|
LOG.exception(msg)
|
||||||
|
pecan.abort(400, _(msg))
|
||||||
|
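The collision-check loop in `generate_subcloud_unique_region` above reduces to a simple draw-and-retry pattern. A minimal standalone sketch, where an in-memory set stands in for the `db_api.subcloud_get_by_region_name` lookup (that substitution is an assumption for illustration only):

```python
import uuid


def generate_unique_region(existing_regions):
    """Draw random 32-char hex region names until one is unused.

    existing_regions stands in for the database lookup; in the real
    code a successful lookup means "taken" and a
    SubcloudRegionNameNotFound exception means "free".
    """
    while True:
        # Same construction as the diff: a UUID4 with dashes stripped.
        candidate = uuid.uuid4().hex
        if candidate not in existing_regions:
            return candidate


region = generate_unique_region({"RegionOne", "SystemController"})
assert len(region) == 32 and "-" not in region
```

In the migrate/rehome branch the loop runs at most once, since reusing the subcloud's existing region is the whole point and a collision is a hard error rather than a retry.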
@@ -188,7 +188,7 @@ def validate_prestage(subcloud, payload):
     initial_subcloud_validate(subcloud, installed_loads, software_version)

     subcloud_type, system_health, oam_floating_ip = \
-        _get_prestage_subcloud_info(subcloud.name)
+        _get_prestage_subcloud_info(subcloud)

     if subcloud_type != consts.SYSTEM_MODE_SIMPLEX:
         raise exceptions.PrestagePreCheckFailedException(
@@ -287,18 +287,18 @@ def _prestage_standalone_thread(context, subcloud, payload):
         raise


-def _get_prestage_subcloud_info(subcloud_name):
+def _get_prestage_subcloud_info(subcloud):
     """Retrieve prestage data from the subcloud.

     Pull all required data here in order to minimize keystone/sysinv client
     interactions.
     """
     try:
-        os_client = OpenStackDriver(region_name=subcloud_name,
+        os_client = OpenStackDriver(region_name=subcloud.region_name,
                                     region_clients=None)
         keystone_client = os_client.keystone_client
         endpoint = keystone_client.endpoint_cache.get_endpoint('sysinv')
-        sysinv_client = SysinvClient(subcloud_name,
+        sysinv_client = SysinvClient(subcloud.region_name,
                                      keystone_client.session,
                                      endpoint=endpoint)
         mode = sysinv_client.get_system().system_mode
@@ -309,7 +309,7 @@ def _get_prestage_subcloud_info(subcloud_name):
     except Exception as e:
         LOG.exception(e)
         raise exceptions.PrestagePreCheckFailedException(
-            subcloud=subcloud_name,
+            subcloud=subcloud.name,
             details="Failed to retrieve subcloud system mode and system health.")

@@ -55,6 +55,7 @@ DC_MANAGER_GRPNAME = "root"

 # Max lines output msg from logs
 MAX_LINES_MSG = 10
+REGION_VALUE_CMD = "grep " + consts.OS_REGION_NAME + " /etc/platform/openrc"

 ABORT_UPDATE_STATUS = {
     consts.DEPLOY_STATE_INSTALLING: consts.DEPLOY_STATE_ABORTING_INSTALL,
@@ -552,23 +553,23 @@ def subcloud_db_list_to_dict(subclouds):
             for subcloud in subclouds]}


-def get_oam_addresses(subcloud_name, sc_ks_client):
+def get_oam_addresses(subcloud, sc_ks_client):
     """Get the subclouds oam addresses"""

     # First need to retrieve the Subcloud's Keystone session
     try:
         endpoint = sc_ks_client.endpoint_cache.get_endpoint('sysinv')
-        sysinv_client = SysinvClient(subcloud_name,
+        sysinv_client = SysinvClient(subcloud.region_name,
                                      sc_ks_client.session,
                                      endpoint=endpoint)
         return sysinv_client.get_oam_addresses()
     except (keystone_exceptions.EndpointNotFound, IndexError) as e:
         message = ("Identity endpoint for subcloud: %s not found. %s" %
-                   (subcloud_name, e))
+                   (subcloud.name, e))
         LOG.error(message)
     except dccommon_exceptions.OAMAddressesNotFound:
         message = ("OAM addresses for subcloud: %s not found." %
-                   subcloud_name)
+                   subcloud.name)
         LOG.error(message)
     return None

@@ -596,6 +597,65 @@ def pre_check_management_affected_alarm(system_health):
     return True


+def is_subcloud_name_format_valid(name):
+    """Validates subcloud name format
+
+    Regex based on RFC 1123 subdomain validation
+
+    param: name = Subcloud name
+    returns True if name is valid, otherwise it returns false.
+    """
+    rex = r"[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*"
+
+    pat = re.compile(rex)
+    if re.fullmatch(pat, name):
+        return True
+    return False
+
+
+def get_region_from_subcloud_address(payload):
+    """Retrieves the current region from the subcloud being migrated
+
+    param: payload = Subcloud payload
+    returns the OS_REGION_NAME value from subcloud
+    """
+    cmd = [
+        "sshpass",
+        "-p",
+        str(payload['sysadmin_password']),
+        "ssh",
+        "-q",
+        "sysadmin@" + str(payload['bootstrap-address']),
+        REGION_VALUE_CMD,
+    ]
+
+    try:
+        LOG.info("Getting region value from subcloud %s" % payload['name'])
+        task = subprocess.check_output(
+            cmd,
+            stderr=subprocess.STDOUT).decode('utf-8')
+        if len(task) < 1:
+            return None
+        subcloud_region = str(task.split("=")[1]).strip()
+    except Exception:
+        LOG.error("Unable to get region value from subcloud %s"
+                  % payload['name'])
+        raise
+
+    system_regions = [dccommon_consts.DEFAULT_REGION_NAME,
+                      dccommon_consts.SYSTEM_CONTROLLER_NAME]
+
+    if subcloud_region in system_regions:
+        LOG.error("Invalid region value: %s" % subcloud_region)
+        raise exceptions.InvalidParameterValue(
+            err="Invalid region value: %s" % subcloud_region)
+
+    # Returns the region value from result:
+    # Current systems: export OS_REGION_NAME=subcloudX
+    # New systems: export OS_REGION_NAME=abcdefghhijlkmnopqrstuvqxyz12342
+    return subcloud_region
+
+
 def find_ansible_error_msg(subcloud_name, log_file, stage=None):
     """Find errors into ansible logs.

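The name validation added above can be exercised on its own. A minimal sketch using the exact regex from the diff; `re.fullmatch` is what rejects partial matches (uppercase, underscores, leading hyphens):

```python
import re

# Regex copied from the diff above: RFC 1123 subdomain format,
# i.e. dot-separated labels of lowercase alphanumerics and hyphens,
# each label starting and ending with an alphanumeric.
RFC1123_RE = re.compile(
    r"[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*")


def is_subcloud_name_format_valid(name):
    # fullmatch anchors the pattern to the whole string, so a name
    # that merely *contains* a valid substring still fails.
    return re.fullmatch(RFC1123_RE, name) is not None


print(is_subcloud_name_format_valid("subcloud-1"))  # True
print(is_subcloud_name_format_valid("Sub_Cloud"))   # False
```

This is the same rule Kubernetes applies to DNS subdomain names, which keeps renamed subclouds safe to use in hostnames and cert names.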
@@ -817,15 +877,15 @@ def get_matching_iso(software_version=None):
         return None, str(e)


-def is_subcloud_healthy(subcloud_name):
+def is_subcloud_healthy(subcloud_region):

     system_health = ""
     try:
-        os_client = OpenStackDriver(region_name=subcloud_name,
+        os_client = OpenStackDriver(region_name=subcloud_region,
                                     region_clients=None)
         keystone_client = os_client.keystone_client
         endpoint = keystone_client.endpoint_cache.get_endpoint('sysinv')
-        sysinv_client = SysinvClient(subcloud_name,
+        sysinv_client = SysinvClient(subcloud_region,
                                      keystone_client.session,
                                      endpoint=endpoint)
         system_health = sysinv_client.get_system_health()
@@ -1083,19 +1143,19 @@ def decode_and_normalize_passwd(input_passwd):
     return passwd


-def get_failure_msg(subcloud_name):
+def get_failure_msg(subcloud_region):
     try:
-        os_client = OpenStackDriver(region_name=subcloud_name,
+        os_client = OpenStackDriver(region_name=subcloud_region,
                                     region_clients=None)
         keystone_client = os_client.keystone_client
         endpoint = keystone_client.endpoint_cache.get_endpoint('sysinv')
-        sysinv_client = SysinvClient(subcloud_name,
+        sysinv_client = SysinvClient(subcloud_region,
                                      keystone_client.session,
                                      endpoint=endpoint)
         msg = sysinv_client.get_error_msg()
         return msg
     except Exception as e:
-        LOG.exception("{}: {}".format(subcloud_name, e))
+        LOG.exception("{}: {}".format(subcloud_region, e))
         return consts.ERROR_DESC_FAILED

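`get_region_from_subcloud_address` above greps `OS_REGION_NAME` out of `/etc/platform/openrc` over SSH and then splits the line on `=`. The parsing step alone, sketched without the SSH transport (the sample openrc line is illustrative):

```python
def parse_region_from_openrc_line(line):
    """Extract the region from a line like
    'export OS_REGION_NAME=subcloud1'.

    Mirrors the diff's str(task.split("=")[1]).strip(); empty command
    output yields None, as in the real helper.
    """
    if len(line) < 1:
        return None
    return str(line.split("=")[1]).strip()


print(parse_region_from_openrc_line("export OS_REGION_NAME=subcloud1\n"))
# subcloud1
```

On day-0 systems the value equals the subcloud name; on newer systems it is the 32-character generated region, which is why the caller then rejects the reserved `RegionOne`/`SystemController` values.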
@@ -1181,3 +1241,19 @@ def get_current_supported_upgrade_versions():
             supported_versions.append(version.strip())

     return supported_versions
+
+
+# Feature: Subcloud Name Reconfiguration
+# This method is useful to determine the origin of the request
+# towards the api. The goal was to avoid any code changes in
+# the cert-monitor module, since it only needs the region reference.
+# When this method is called, the condition is applied to replace the
+# value of the "name" field with the value of the "region_name" field
+# in the response. In this way, the cert-monitor does not lose the
+# region reference in subcloud rename operation.
+def is_req_from_cert_mon_agent(request):
+    ua = request.headers.get("User-Agent")
+    if ua == consts.CERT_MON_HTTP_AGENT:
+        return True
+    else:
+        return False
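The cert-monitor detection above keys purely off the `User-Agent` header. The field swap it enables can be sketched as follows; the agent string value and the dict-shaped response are assumptions for illustration, not the real constants:

```python
# Illustrative value only; the real constant lives in dcmanager consts.
CERT_MON_HTTP_AGENT = "cert-mon/1.0"


def is_req_from_cert_mon_agent(headers):
    return headers.get("User-Agent") == CERT_MON_HTTP_AGENT


def subcloud_response(subcloud, headers):
    """Build the API response dict; for cert-monitor requests the
    'name' field carries the region, so a subcloud rename does not
    break cert-monitor's region reference."""
    resp = dict(subcloud)  # copy so the stored record is untouched
    if is_req_from_cert_mon_agent(headers):
        resp["name"] = resp["region-name"]
    return resp


sc = {"name": "subcloud1", "region-name": "5fa5b59a0f7f4f1fa8a5d2f1e7f0a1b2"}
print(subcloud_response(sc, {"User-Agent": CERT_MON_HTTP_AGENT})["name"])
```

This keeps cert-monitor entirely unmodified: it keeps reading "name", and only the dcmanager API decides which value that field carries.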
@@ -112,6 +112,7 @@ def subcloud_db_model_to_dict(subcloud):
         "backup-status": subcloud.backup_status,
         "backup-datetime": subcloud.backup_datetime,
         "error-description": subcloud.error_description,
+        'region-name': subcloud.region_name,
         "management-subnet": subcloud.management_subnet,
         "management-start-ip": subcloud.management_start_ip,
         "management-end-ip": subcloud.management_end_ip,
@@ -132,14 +133,14 @@ def subcloud_create(context, name, description, location, software_version,
                     management_subnet, management_gateway_ip,
                     management_start_ip, management_end_ip,
                     systemcontroller_gateway_ip, deploy_status, error_description,
-                    openstack_installed, group_id, data_install=None):
+                    region_name, openstack_installed, group_id, data_install=None):
     """Create a subcloud."""
     return IMPL.subcloud_create(context, name, description, location,
                                 software_version,
                                 management_subnet, management_gateway_ip,
                                 management_start_ip, management_end_ip,
                                 systemcontroller_gateway_ip, deploy_status,
-                                error_description, openstack_installed, group_id,
+                                error_description, region_name, openstack_installed, group_id,
                                 data_install)


@@ -158,6 +159,16 @@ def subcloud_get_by_name(context, name) -> models.Subcloud:
     return IMPL.subcloud_get_by_name(context, name)


+def subcloud_get_by_region_name(context, region_name):
+    """Retrieve a subcloud by region name or raise if it does not exist."""
+    return IMPL.subcloud_get_by_region_name(context, region_name)
+
+
+def subcloud_get_by_name_or_region_name(context, name):
+    """Retrieve a subcloud by name or region name or raise if it does not exist."""
+    return IMPL.subcloud_get_by_name_or_region_name(context, name)
+
+
 def subcloud_get_all(context):
     """Retrieve all subclouds."""
     return IMPL.subcloud_get_all(context)
@@ -174,7 +185,7 @@ def subcloud_get_all_with_status(context):


 def subcloud_update(context, subcloud_id, management_state=None,
-                    availability_status=None, software_version=None,
+                    availability_status=None, software_version=None, name=None,
                     description=None, management_subnet=None, management_gateway_ip=None,
                     management_start_ip=None, management_end_ip=None,
                     location=None, audit_fail_count=None,
@@ -187,7 +198,7 @@ def subcloud_update(context, subcloud_id, management_state=None,
                     rehome_data=None):
     """Update a subcloud or raise if it does not exist."""
     return IMPL.subcloud_update(context, subcloud_id, management_state,
-                                availability_status, software_version,
+                                availability_status, software_version, name,
                                 description, management_subnet, management_gateway_ip,
                                 management_start_ip, management_end_ip, location,
                                 audit_fail_count, deploy_status, backup_status,
@@ -677,3 +688,7 @@ def subcloud_alarms_update(context, name, values):

 def subcloud_alarms_delete(context, name):
     return IMPL.subcloud_alarms_delete(context, name)
+
+
+def subcloud_rename_alarms(context, subcloud_name, new_name):
+    return IMPL.subcloud_rename_alarms(context, subcloud_name, new_name)
@@ -32,6 +32,7 @@ from oslo_utils import strutils
 from oslo_utils import uuidutils

 from sqlalchemy import desc
+from sqlalchemy import or_
 from sqlalchemy.orm.exc import MultipleResultsFound
 from sqlalchemy.orm.exc import NoResultFound
 from sqlalchemy.orm import joinedload_all
@@ -317,6 +318,32 @@ def subcloud_get_by_name(context, name):
     return result


+@require_context
+def subcloud_get_by_region_name(context, region_name):
+    result = model_query(context, models.Subcloud). \
+        filter_by(deleted=0). \
+        filter_by(region_name=region_name). \
+        first()
+
+    if not result:
+        raise exception.SubcloudRegionNameNotFound(region_name=region_name)
+
+    return result
+
+
+@require_context
+def subcloud_get_by_name_or_region_name(context, name):
+    result = model_query(context, models.Subcloud). \
+        filter_by(deleted=0). \
+        filter(or_(models.Subcloud.name == name, models.Subcloud.region_name == name)). \
+        first()
+
+    if not result:
+        raise exception.SubcloudNameOrRegionNameNotFound(name=name)
+
+    return result
+
+
 @require_context
 def subcloud_get_all(context):
     return model_query(context, models.Subcloud). \
@@ -349,7 +376,7 @@ def subcloud_create(context, name, description, location, software_version,
                     management_subnet, management_gateway_ip,
                     management_start_ip, management_end_ip,
                     systemcontroller_gateway_ip, deploy_status, error_description,
-                    openstack_installed, group_id,
+                    region_name, openstack_installed, group_id,
                     data_install=None):
     with write_session() as session:
         subcloud_ref = models.Subcloud()
@@ -366,6 +393,7 @@ def subcloud_create(context, name, description, location, software_version,
         subcloud_ref.systemcontroller_gateway_ip = systemcontroller_gateway_ip
         subcloud_ref.deploy_status = deploy_status
         subcloud_ref.error_description = error_description
+        subcloud_ref.region_name = region_name
         subcloud_ref.audit_fail_count = 0
         subcloud_ref.openstack_installed = openstack_installed
         subcloud_ref.group_id = group_id
@@ -381,7 +409,7 @@ def subcloud_create(context, name, description, location, software_version,
 @require_admin_context
 def subcloud_update(context, subcloud_id, management_state=None,
                     availability_status=None, software_version=None,
-                    description=None, management_subnet=None,
+                    name=None, description=None, management_subnet=None,
                     management_gateway_ip=None, management_start_ip=None,
                     management_end_ip=None, location=None, audit_fail_count=None,
                     deploy_status=None, backup_status=None,
@@ -401,6 +429,8 @@ def subcloud_update(context, subcloud_id, management_state=None,
             subcloud_ref.availability_status = availability_status
         if software_version is not None:
             subcloud_ref.software_version = software_version
+        if name is not None:
+            subcloud_ref.name = name
         if description is not None:
             subcloud_ref.description = description
         if management_subnet is not None:
@@ -1221,3 +1251,12 @@ def subcloud_alarms_delete(context, name):
     with write_session() as session:
         session.query(models.SubcloudAlarmSummary).\
             filter_by(name=name).delete()
+
+
+@require_admin_context
+def subcloud_rename_alarms(context, subcloud_name, new_name):
+    with write_session() as session:
+        result = _subcloud_alarms_get(context, subcloud_name)
+        result.name = new_name
+        result.save(session)
+        return result
@@ -0,0 +1,37 @@
+# Copyright (c) 2023 Wind River Systems, Inc.
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+#
+
+from sqlalchemy import Column, MetaData, String, Table
+
+
+def upgrade(migrate_engine):
+    meta = MetaData()
+    meta.bind = migrate_engine
+
+    subclouds = Table('subclouds', meta, autoload=True)
+
+    # Add the 'region_name' column to the subclouds table.
+    subclouds.create_column(Column('region_name',
+                                   String(255)))
+
+    # populates region_name with name field value for existing subclouds
+    if migrate_engine.name == 'postgresql':
+        with migrate_engine.begin() as conn:
+            conn.execute("UPDATE subclouds SET region_name = name")
+
+    return True
+
+
+def downgrade(migrate_engine):
+    raise NotImplementedError('Database downgrade is unsupported.')
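The migration above is two steps: add the column, then backfill it from `name` so existing (day-0) subclouds keep their name as the region. The same two steps can be demonstrated with the stdlib `sqlite3` module; this is a behavioral sketch, not the production sqlalchemy-migrate script, which runs only against PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE subclouds (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO subclouds (name) VALUES ('subcloud1'), ('subcloud2')")

# Step 1: add the new column (create_column in the real migration).
conn.execute("ALTER TABLE subclouds ADD COLUMN region_name TEXT")

# Step 2: backfill so pre-existing subclouds keep name == region.
conn.execute("UPDATE subclouds SET region_name = name")

print(conn.execute(
    "SELECT name, region_name FROM subclouds ORDER BY id").fetchall())
```

New subclouds created after this migration instead get a generated UUID-style region at add time, so only legacy rows carry `region_name == name`.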
@@ -148,6 +148,7 @@ class Subcloud(BASE, DCManagerBase):
     backup_status = Column(String(255))
     backup_datetime = Column(DateTime(timezone=False))
     error_description = Column(String(2048))
+    region_name = Column(String(255), unique=True)
     data_upgrade = Column(String())
     management_subnet = Column(String(255))
     management_gateway_ip = Column(String(255))
@@ -110,6 +110,27 @@ class DCManagerService(service.Service):
         LOG.info("Handling delete_subcloud request for: %s" % subcloud_id)
         return self.subcloud_manager.delete_subcloud(context, subcloud_id)

+    @request_context
+    def rename_subcloud(self, context, subcloud_id, curr_subcloud_name,
+                        new_subcloud_name=None):
+        # Rename a subcloud
+        LOG.info("Handling rename_subcloud request for: %s" %
+                 curr_subcloud_name)
+        subcloud = self.subcloud_manager.rename_subcloud(context,
+                                                         subcloud_id,
+                                                         curr_subcloud_name,
+                                                         new_subcloud_name)
+        return subcloud
+
+    @request_context
+    def get_subcloud_name_by_region_name(self, context, subcloud_region):
+        # get subcloud by region name
+        LOG.debug("Handling get_subcloud_name_by_region_name request for "
+                  "region: %s" % subcloud_region)
+        subcloud = self.subcloud_manager.get_subcloud_name_by_region_name(context,
+                                                                          subcloud_region)
+        return subcloud
+
     @request_context
     def update_subcloud(self, context, subcloud_id, management_state=None,
                         description=None, location=None,
@@ -179,10 +179,10 @@ class SubcloudManager(manager.Manager):

     @staticmethod
     def _create_intermediate_ca_cert(payload):
-        subcloud_name = payload["name"]
-        cert_name = SubcloudManager._get_subcloud_cert_name(subcloud_name)
+        subcloud_region = payload["region_name"]
+        cert_name = SubcloudManager._get_subcloud_cert_name(subcloud_region)
         secret_name = SubcloudManager._get_subcloud_cert_secret_name(
-            subcloud_name)
+            subcloud_region)

         cert = {
             "apiVersion": "%s/%s" % (kubeoperator.CERT_MANAGER_GROUP,
@@ -255,6 +255,7 @@ class SubcloudManager(manager.Manager):
         return install_command

     def compose_bootstrap_command(self, subcloud_name,
+                                  subcloud_region,
                                   ansible_subcloud_inventory_file,
                                   software_version=None):
         bootstrap_command = [
@@ -268,7 +269,7 @@ class SubcloudManager(manager.Manager):
         # which overrides to load
         bootstrap_command += [
             "-e", str("override_files_dir='%s' region_name=%s") % (
-                dccommon_consts.ANSIBLE_OVERRIDES_PATH, subcloud_name),
+                dccommon_consts.ANSIBLE_OVERRIDES_PATH, subcloud_region),
             "-e", "install_release_version=%s" %
                   software_version if software_version else SW_VERSION]
         return bootstrap_command
@@ -324,7 +325,7 @@ class SubcloudManager(manager.Manager):
                 subcloud_name + "_update_values.yml"]
         return subcloud_update_command

-    def compose_rehome_command(self, subcloud_name,
+    def compose_rehome_command(self, subcloud_name, subcloud_region,
                                ansible_subcloud_inventory_file,
                                software_version):
         rehome_command = [
@@ -335,7 +336,7 @@ class SubcloudManager(manager.Manager):
             "--limit", subcloud_name,
             "--timeout", REHOME_PLAYBOOK_TIMEOUT,
             "-e", str("override_files_dir='%s' region_name=%s") % (
-                dccommon_consts.ANSIBLE_OVERRIDES_PATH, subcloud_name)]
+                dccommon_consts.ANSIBLE_OVERRIDES_PATH, subcloud_region)]
         return rehome_command

     def migrate_subcloud(self, context, subcloud_ref, payload):
@@ -394,6 +395,7 @@ class SubcloudManager(manager.Manager):

         rehome_command = self.compose_rehome_command(
             subcloud.name,
+            subcloud.region_name,
             ansible_subcloud_inventory_file,
             subcloud.software_version)

@@ -407,7 +409,7 @@ class SubcloudManager(manager.Manager):
         :param subcloud_id: id of the subcloud
         :param payload: subcloud configuration
         """
-        LOG.info(f"Adding subcloud {payload['name']}.")
+        LOG.info(f"Adding subcloud {payload['name']} with region {payload['region_name']}.")

         rehoming = payload.get('migrate', '').lower() == "true"
         secondary = (payload.get('secondary', '').lower() == "true")
@@ -653,6 +655,7 @@ class SubcloudManager(manager.Manager):
|
|
||||||
bootstrap_command = self.compose_bootstrap_command(
|
bootstrap_command = self.compose_bootstrap_command(
|
||||||
subcloud.name,
|
subcloud.name,
|
||||||
|
subcloud.region_name,
|
||||||
ansible_subcloud_inventory_file,
|
ansible_subcloud_inventory_file,
|
||||||
subcloud.software_version)
|
subcloud.software_version)
|
||||||
return bootstrap_command
|
return bootstrap_command
|
||||||
@ -923,7 +926,7 @@ class SubcloudManager(manager.Manager):
|
|||||||
endpoint["id"],
|
endpoint["id"],
|
||||||
endpoint['admin_endpoint_url'],
|
endpoint['admin_endpoint_url'],
|
||||||
interface=dccommon_consts.KS_ENDPOINT_ADMIN,
|
interface=dccommon_consts.KS_ENDPOINT_ADMIN,
|
||||||
region=subcloud.name)
|
region=subcloud.region_name)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
# Keystone service must be temporarily busy, retry
|
# Keystone service must be temporarily busy, retry
|
||||||
LOG.error(str(e))
|
LOG.error(str(e))
|
||||||
@ -931,11 +934,11 @@ class SubcloudManager(manager.Manager):
|
|||||||
endpoint["id"],
|
endpoint["id"],
|
||||||
endpoint['admin_endpoint_url'],
|
endpoint['admin_endpoint_url'],
|
||||||
interface=dccommon_consts.KS_ENDPOINT_ADMIN,
|
interface=dccommon_consts.KS_ENDPOINT_ADMIN,
|
||||||
region=subcloud.name)
|
region=subcloud.region_name)
|
||||||
|
|
||||||
# Inform orchestrator that subcloud has been added
|
# Inform orchestrator that subcloud has been added
|
||||||
self.dcorch_rpc_client.add_subcloud(
|
self.dcorch_rpc_client.add_subcloud(
|
||||||
context, subcloud.name, subcloud.software_version)
|
context, subcloud.region_name, subcloud.software_version)
|
||||||
|
|
||||||
# create entry into alarm summary table, will get real values later
|
# create entry into alarm summary table, will get real values later
|
||||||
alarm_updates = {'critical_alarms': -1,
|
alarm_updates = {'critical_alarms': -1,
|
||||||
@ -1282,7 +1285,7 @@ class SubcloudManager(manager.Manager):
|
|||||||
def _backup_subcloud(self, context, payload, subcloud):
|
def _backup_subcloud(self, context, payload, subcloud):
|
||||||
try:
|
try:
|
||||||
# Health check validation
|
# Health check validation
|
||||||
if not utils.is_subcloud_healthy(subcloud.name):
|
if not utils.is_subcloud_healthy(subcloud.region_name):
|
||||||
db_api.subcloud_update(
|
db_api.subcloud_update(
|
||||||
context,
|
context,
|
||||||
subcloud.id,
|
subcloud.id,
|
||||||
@ -1442,9 +1445,9 @@ class SubcloudManager(manager.Manager):
|
|||||||
else:
|
else:
|
||||||
# Use subcloud floating IP for host reachability
|
# Use subcloud floating IP for host reachability
|
||||||
keystone_client = OpenStackDriver(
|
keystone_client = OpenStackDriver(
|
||||||
region_name=subcloud.name,
|
region_name=subcloud.region_name,
|
||||||
region_clients=None).keystone_client
|
region_clients=None).keystone_client
|
||||||
oam_fip = utils.get_oam_addresses(subcloud.name, keystone_client)\
|
oam_fip = utils.get_oam_addresses(subcloud, keystone_client)\
|
||||||
.oam_floating_ip
|
.oam_floating_ip
|
||||||
|
|
||||||
# Add parameters used to generate inventory
|
# Add parameters used to generate inventory
|
||||||
@ -2042,10 +2045,10 @@ class SubcloudManager(manager.Manager):
|
|||||||
1)
|
1)
|
||||||
|
|
||||||
@staticmethod
|
@staticmethod
|
||||||
def _delete_subcloud_cert(subcloud_name):
|
def _delete_subcloud_cert(subcloud_region):
|
||||||
cert_name = SubcloudManager._get_subcloud_cert_name(subcloud_name)
|
cert_name = SubcloudManager._get_subcloud_cert_name(subcloud_region)
|
||||||
secret_name = SubcloudManager._get_subcloud_cert_secret_name(
|
secret_name = SubcloudManager._get_subcloud_cert_secret_name(
|
||||||
subcloud_name)
|
subcloud_region)
|
||||||
|
|
||||||
kube = kubeoperator.KubeOperator()
|
kube = kubeoperator.KubeOperator()
|
||||||
kube.delete_cert_manager_certificate(CERT_NAMESPACE, cert_name)
|
kube.delete_cert_manager_certificate(CERT_NAMESPACE, cert_name)
|
||||||
@ -2059,7 +2062,7 @@ class SubcloudManager(manager.Manager):
|
|||||||
"""Remove subcloud details from database and inform orchestrators"""
|
"""Remove subcloud details from database and inform orchestrators"""
|
||||||
# Inform orchestrators that subcloud has been deleted
|
# Inform orchestrators that subcloud has been deleted
|
||||||
try:
|
try:
|
||||||
self.dcorch_rpc_client.del_subcloud(context, subcloud.name)
|
self.dcorch_rpc_client.del_subcloud(context, subcloud.region_name)
|
||||||
except RemoteError as e:
|
except RemoteError as e:
|
||||||
# TODO(kmacleod): this should be caught as explicit remote exception
|
# TODO(kmacleod): this should be caught as explicit remote exception
|
||||||
# Fix when centos/python2 is no longer supported
|
# Fix when centos/python2 is no longer supported
|
||||||
@ -2083,8 +2086,8 @@ class SubcloudManager(manager.Manager):
|
|||||||
region_clients=None).keystone_client
|
region_clients=None).keystone_client
|
||||||
|
|
||||||
# Delete keystone endpoints for subcloud
|
# Delete keystone endpoints for subcloud
|
||||||
keystone_client.delete_endpoints(subcloud.name)
|
keystone_client.delete_endpoints(subcloud.region_name)
|
||||||
keystone_client.delete_region(subcloud.name)
|
keystone_client.delete_region(subcloud.region_name)
|
||||||
|
|
||||||
# Delete the routes to this subcloud
|
# Delete the routes to this subcloud
|
||||||
self._delete_subcloud_routes(keystone_client, subcloud)
|
self._delete_subcloud_routes(keystone_client, subcloud)
|
||||||
@ -2100,7 +2103,7 @@ class SubcloudManager(manager.Manager):
|
|||||||
utils.delete_subcloud_inventory(ansible_subcloud_inventory_file)
|
utils.delete_subcloud_inventory(ansible_subcloud_inventory_file)
|
||||||
|
|
||||||
# Delete the subcloud intermediate certificate
|
# Delete the subcloud intermediate certificate
|
||||||
SubcloudManager._delete_subcloud_cert(subcloud.name)
|
SubcloudManager._delete_subcloud_cert(subcloud.region_name)
|
||||||
|
|
||||||
# Delete the subcloud backup path
|
# Delete the subcloud backup path
|
||||||
self._delete_subcloud_backup_data(subcloud.name)
|
self._delete_subcloud_backup_data(subcloud.name)
|
||||||
@ -2142,6 +2145,42 @@ class SubcloudManager(manager.Manager):
|
|||||||
if os.path.exists(install_path):
|
if os.path.exists(install_path):
|
||||||
shutil.rmtree(install_path)
|
shutil.rmtree(install_path)
|
||||||
|
|
||||||
+    def _rename_subcloud_ansible_files(self, cur_sc_name, new_sc_name):
+        """Renames the ansible and logs files from the given subcloud"""
+
+        ansible_path = dccommon_consts.ANSIBLE_OVERRIDES_PATH
+        log_path = consts.DC_ANSIBLE_LOG_DIR
+
+        ansible_file_list = os.listdir(ansible_path)
+        log_file_list = os.listdir(log_path)
+
+        ansible_file_list = [ansible_path + '/' + x for x in ansible_file_list]
+        log_file_list = [log_path + '/' + x for x in log_file_list]
+
+        for cur_file in ansible_file_list + log_file_list:
+            new_file = cur_file.replace(cur_sc_name, new_sc_name)
+            if os.path.exists(cur_file) and new_sc_name in new_file:
+                os.rename(cur_file, new_file)
+
+        # Gets new ansible inventory file
+        ansible_inv_file = self._get_ansible_filename(new_sc_name,
+                                                      INVENTORY_FILE_POSTFIX)
+
+        # Updates inventory host param with the new subcloud name
+        with open(ansible_inv_file, 'r') as f:
+            data = yaml.safe_load(f)
+
+        mkey = list(data.keys())[0]
+
+        if mkey in data and 'hosts' in data[mkey] and \
+                cur_sc_name in data[mkey]['hosts']:
+
+            data[mkey]['hosts'][new_sc_name] = \
+                data[mkey]['hosts'].pop(cur_sc_name)
+
+            with open(ansible_inv_file, 'w') as f:
+                yaml.dump(data, f, sort_keys=False)
+
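The inventory rewrite that `_rename_subcloud_ansible_files` performs after `yaml.safe_load()` can be sketched standalone on a plain dict (the inventory structure, host names, and address below are illustrative stand-ins, not taken from a real deployment):

```python
# Sketch: move the per-subcloud host key inside a parsed Ansible inventory,
# preserving the host vars, as the new helper does before yaml.dump().
def rename_inventory_host(data, cur_name, new_name):
    """Rename hosts[cur_name] to hosts[new_name] under the top-level group."""
    mkey = list(data.keys())[0]  # e.g. 'all'
    if 'hosts' in data[mkey] and cur_name in data[mkey]['hosts']:
        # pop() carries the host vars (ansible_host, etc.) over unchanged
        data[mkey]['hosts'][new_name] = data[mkey]['hosts'].pop(cur_name)
    return data

inventory = {'all': {'hosts': {'subcloud1': {'ansible_host': '10.10.10.2'}}}}
renamed = rename_inventory_host(inventory, 'subcloud1', 'subcloud1-renamed')
print(list(renamed['all']['hosts']))  # ['subcloud1-renamed']
```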
     @staticmethod
     def _delete_subcloud_backup_data(subcloud_name):
         try:
@@ -2208,6 +2247,62 @@ class SubcloudManager(manager.Manager):
                           (subcloud.name, alarm_id))
             LOG.exception(e)
 
+    def rename_subcloud(self,
+                        context,
+                        subcloud_id,
+                        curr_subcloud_name,
+                        new_subcloud_name=None):
+        """Rename subcloud.
+
+        :param context: request context object.
+        :param subcloud_id: id of subcloud to rename
+        :param curr_subcloud_name: current subcloud name
+        :param new_subcloud_name: new subcloud name
+        """
+        try:
+            subcloud = db_api.\
+                subcloud_get_by_name_or_region_name(context,
+                                                    new_subcloud_name)
+        except exceptions.SubcloudNameOrRegionNameNotFound:
+            pass
+        else:
+            # If the found subcloud id is not the same as the received
+            # subcloud id, it indicates that the name change does not
+            # correspond to the current subcloud.
+            # Therefore it is not allowed to change the name.
+            if subcloud_id != subcloud.id:
+                raise exceptions.SubcloudOrRegionNameAlreadyExists(
+                    name=new_subcloud_name)
+
+        # updates subcloud name
+        subcloud = db_api.subcloud_update(context, subcloud_id,
+                                          name=new_subcloud_name)
+        # updates subcloud names on alarms
+        db_api.subcloud_rename_alarms(context, curr_subcloud_name,
+                                      new_subcloud_name)
+        # Deletes subcloud alarms
+        entity_instance_id = "subcloud=%s" % curr_subcloud_name
+        self.fm_api.clear_all(entity_instance_id)
+
+        # Regenerate the dnsmasq host entry
+        self._create_addn_hosts_dc(context)
+
+        # Rename related subcloud files
+        self._rename_subcloud_ansible_files(curr_subcloud_name,
+                                            new_subcloud_name)
+
+        return subcloud
+
+    def get_subcloud_name_by_region_name(self,
+                                         context,
+                                         subcloud_region):
+        subcloud_name = None
+        if subcloud_region is not None:
+            sc = db_api.subcloud_get_by_region_name(context, subcloud_region)
+            subcloud_name = sc.get("name")
+
+        return subcloud_name
+
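The collision guard at the top of `rename_subcloud` checks the requested name against both existing subcloud names and region names, and only rejects the rename when the match belongs to a different subcloud. A self-contained sketch of that rule (the in-memory records and exception class are stand-ins for dcmanager's `db_api`/`exceptions` layer; the ids and names are hypothetical):

```python
# Stand-in for exceptions.SubcloudOrRegionNameAlreadyExists.
class NameOrRegionAlreadyExists(Exception):
    pass

# Hypothetical subcloud records: name is mutable, region_name is not.
SUBCLOUDS = [
    {'id': 1, 'name': 'subcloud1', 'region_name': 'b3e0f6ff0d5948e3bf7e06f2a084ecb8'},
    {'id': 2, 'name': 'subcloud2', 'region_name': '9f1c2d6a52c94d1fb0d54a3c1a8e1d20'},
]

def check_rename_allowed(subcloud_id, new_name):
    """Reject a rename when new_name collides with a *different* subcloud."""
    for sc in SUBCLOUDS:
        if new_name in (sc['name'], sc['region_name']):
            if sc['id'] != subcloud_id:
                raise NameOrRegionAlreadyExists(new_name)
            return  # renaming a subcloud to its own current name is a no-op
    # no match found: the new name is free

check_rename_allowed(1, 'subcloud1')     # own name: allowed (no-op)
check_rename_allowed(1, 'new-subcloud')  # unused name: allowed
try:
    check_rename_allowed(1, 'subcloud2')  # taken by subcloud id 2: rejected
except NameOrRegionAlreadyExists as e:
    print('rejected:', e)
```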
     def update_subcloud(self,
                         context,
                         subcloud_id,
@@ -2363,7 +2458,7 @@ class SubcloudManager(manager.Manager):
             # Inform orchestrator of state change
             self.dcorch_rpc_client.update_subcloud_states(
                 context,
-                subcloud.name,
+                subcloud.region_name,
                 management_state,
                 subcloud.availability_status)
 
@@ -2391,6 +2486,7 @@ class SubcloudManager(manager.Manager):
             self.state_rpc_client.update_subcloud_endpoint_status_sync(
                 context,
                 subcloud_name=subcloud.name,
+                subcloud_region=subcloud.region_name,
                 endpoint_type=None,
                 sync_status=dccommon_consts.SYNC_STATUS_UNKNOWN,
                 ignore_endpoints=[dccommon_consts.ENDPOINT_TYPE_DC_CERT])
@@ -2399,7 +2495,7 @@ class SubcloudManager(manager.Manager):
             # Tell cert-mon to audit endpoint certificate
             LOG.info('Request certmon audit for %s' % subcloud.name)
             dc_notification = dcmanager_rpc_client.DCManagerNotifications()
-            dc_notification.subcloud_managed(context, subcloud.name)
+            dc_notification.subcloud_managed(context, subcloud.region_name)
 
         return db_api.subcloud_db_model_to_dict(subcloud)
 
@@ -2487,6 +2583,7 @@ class SubcloudManager(manager.Manager):
         :param update_db: whether it should update the db on success/failure
         """
         subcloud_name = subcloud.name
+        subcloud_region = subcloud.region_name
         subcloud_id = subcloud.id
         sys_controller_gw_ip = payload.get("systemcontroller_gateway_address",
                                            subcloud.systemcontroller_gateway_ip)
@@ -2509,7 +2606,7 @@ class SubcloudManager(manager.Manager):
             return
         try:
             self._update_services_endpoint(
-                context, payload, subcloud_name, m_ks_client)
+                context, payload, subcloud_region, m_ks_client)
         except Exception:
             LOG.exception("Failed to update subcloud %s endpoints" % subcloud_name)
             if update_db:
@@ -2541,7 +2638,7 @@ class SubcloudManager(manager.Manager):
                 1)
 
     def _update_services_endpoint(
-            self, context, payload, subcloud_name, m_ks_client):
+            self, context, payload, subcloud_region, m_ks_client):
         endpoint_ip = utils.get_management_start_address(payload)
         if netaddr.IPAddress(endpoint_ip).version == 6:
             endpoint_ip = f"[{endpoint_ip}]"
@@ -2556,7 +2653,7 @@ class SubcloudManager(manager.Manager):
         }
 
         for endpoint in m_ks_client.keystone_client.endpoints.list(
-                region=subcloud_name):
+                region=subcloud_region):
             service_type = m_ks_client.keystone_client.services.get(
                 endpoint.service_id).type
             if service_type == dccommon_consts.ENDPOINT_TYPE_PLATFORM:
@@ -2576,17 +2673,17 @@ class SubcloudManager(manager.Manager):
             m_ks_client.keystone_client.endpoints.update(
                 endpoint, url=admin_endpoint_url)
 
-        LOG.info("Update services endpoint to %s in subcloud %s" % (
-            endpoint_ip, subcloud_name))
+        LOG.info("Update services endpoint to %s in subcloud region %s" % (
+            endpoint_ip, subcloud_region))
         # Update service URLs in subcloud endpoint cache
         self.audit_rpc_client.trigger_subcloud_endpoints_update(
-            context, subcloud_name, services_endpoints)
+            context, subcloud_region, services_endpoints)
         self.dcorch_rpc_client.update_subcloud_endpoints(
-            context, subcloud_name, services_endpoints)
+            context, subcloud_region, services_endpoints)
         # Update sysinv URL in cert-mon cache
         dc_notification = dcmanager_rpc_client.DCManagerNotifications()
         dc_notification.subcloud_sysinv_endpoint_update(
-            context, subcloud_name, services_endpoints.get("sysinv"))
+            context, subcloud_region, services_endpoints.get("sysinv"))
 
     def _create_subcloud_update_overrides_file(
             self, payload, subcloud_name, filename_suffix):
@@ -2630,7 +2727,7 @@ class SubcloudManager(manager.Manager):
             payload['override_values']['sc_ca_key'] = payload['sc_ca_key']
 
     def update_subcloud_sync_endpoint_type(self, context,
-                                           subcloud_name,
+                                           subcloud_region,
                                            endpoint_type_list,
                                            openstack_installed):
         operation = 'add' if openstack_installed else 'remove'
@@ -2646,17 +2743,17 @@ class SubcloudManager(manager.Manager):
         }
 
         try:
-            subcloud = db_api.subcloud_get_by_name(context, subcloud_name)
+            subcloud = db_api.subcloud_get_by_region_name(context, subcloud_region)
         except Exception:
-            LOG.exception("Failed to get subcloud by name: %s" % subcloud_name)
+            LOG.exception("Failed to get subcloud by region name: %s" % subcloud_region)
             raise
 
         try:
             # Notify dcorch to add/remove sync endpoint type list
-            func_switcher[operation][0](self.context, subcloud_name,
+            func_switcher[operation][0](self.context, subcloud_region,
                                         endpoint_type_list)
             LOG.info('Notifying dcorch, subcloud: %s new sync endpoint: %s' %
-                     (subcloud_name, endpoint_type_list))
+                     (subcloud.name, endpoint_type_list))
 
             # Update subcloud status table by adding/removing openstack sync
             # endpoint types
@@ -2668,7 +2765,7 @@ class SubcloudManager(manager.Manager):
                 openstack_installed=openstack_installed)
         except Exception:
             LOG.exception('Problem informing dcorch of subcloud sync endpoint'
-                          ' type change, subcloud: %s' % subcloud_name)
+                          ' type change, subcloud region: %s' % subcloud_region)
 
     def handle_subcloud_operations_in_progress(self):
         """Identify subclouds in transitory stages and update subcloud
 
@@ -156,6 +156,14 @@ class OrchThread(threading.Thread):
     @staticmethod
     def get_region_name(strategy_step):
         """Get the region name for a strategy step"""
+        if strategy_step.subcloud_id is None:
+            # This is the SystemController.
+            return dccommon_consts.DEFAULT_REGION_NAME
+        return strategy_step.subcloud.region_name
+
+    @staticmethod
+    def get_subcloud_name(strategy_step):
+        """Get the subcloud name for a strategy step"""
         if strategy_step.subcloud_id is None:
             # This is the SystemController.
             return dccommon_consts.DEFAULT_REGION_NAME
@@ -263,18 +271,18 @@ class OrchThread(threading.Thread):
         for strategy_step in strategy_steps:
             if strategy_step.state == consts.STRATEGY_STATE_COMPLETE:
                 # This step is complete
-                self._delete_subcloud_worker(strategy_step.subcloud.name,
+                self._delete_subcloud_worker(strategy_step.subcloud.region_name,
                                              strategy_step.subcloud_id)
                 continue
             elif strategy_step.state == consts.STRATEGY_STATE_ABORTED:
                 # This step was aborted
-                self._delete_subcloud_worker(strategy_step.subcloud.name,
+                self._delete_subcloud_worker(strategy_step.subcloud.region_name,
                                              strategy_step.subcloud_id)
                 abort_detected = True
                 continue
             elif strategy_step.state == consts.STRATEGY_STATE_FAILED:
                 failure_detected = True
-                self._delete_subcloud_worker(strategy_step.subcloud.name,
+                self._delete_subcloud_worker(strategy_step.subcloud.region_name,
                                              strategy_step.subcloud_id)
                 # This step has failed and needs no further action
                 if strategy_step.subcloud_id is None:
@@ -572,7 +580,7 @@ class OrchThread(threading.Thread):
                       % (self.update_type,
                          strategy_step.stage,
                          strategy_step.state,
-                         self.get_region_name(strategy_step)))
+                         self.get_subcloud_name(strategy_step)))
             # Instantiate the state operator and perform the state actions
             state_operator = self.determine_state_operator(strategy_step)
             state_operator.registerStopEvent(self._stop)
@@ -585,7 +593,7 @@ class OrchThread(threading.Thread):
                      % (self.update_type,
                         strategy_step.stage,
                        strategy_step.state,
-                        self.get_region_name(strategy_step)))
+                        strategy_step.subcloud.name))
            # Transition immediately to complete. Update the details to show
            # that this subcloud has been skipped
            details = self.format_update_details(None, str(ex))
@@ -598,7 +606,7 @@ class OrchThread(threading.Thread):
                      % (self.update_type,
                         strategy_step.stage,
                        strategy_step.state,
-                        self.get_region_name(strategy_step)))
+                        strategy_step.subcloud.name))
            details = self.format_update_details(strategy_step.state, str(ex))
            self.strategy_step_update(strategy_step.subcloud_id,
                                      state=consts.STRATEGY_STATE_FAILED,
@@ -55,39 +55,47 @@ class BaseState(object):
         LOG.debug("Stage: %s, State: %s, Subcloud: %s, Details: %s"
                   % (strategy_step.stage,
                      strategy_step.state,
-                     self.get_region_name(strategy_step),
+                     self.get_subcloud_name(strategy_step),
                      details))
 
     def info_log(self, strategy_step, details):
         LOG.info("Stage: %s, State: %s, Subcloud: %s, Details: %s"
                  % (strategy_step.stage,
                     strategy_step.state,
-                    self.get_region_name(strategy_step),
+                    self.get_subcloud_name(strategy_step),
                     details))
 
     def warn_log(self, strategy_step, details):
         LOG.warn("Stage: %s, State: %s, Subcloud: %s, Details: %s"
                  % (strategy_step.stage,
                     strategy_step.state,
-                    self.get_region_name(strategy_step),
+                    self.get_subcloud_name(strategy_step),
                     details))
 
     def error_log(self, strategy_step, details):
         LOG.error("Stage: %s, State: %s, Subcloud: %s, Details: %s"
                   % (strategy_step.stage,
                      strategy_step.state,
-                     self.get_region_name(strategy_step),
+                     self.get_subcloud_name(strategy_step),
                      details))
 
     def exception_log(self, strategy_step, details):
         LOG.exception("Stage: %s, State: %s, Subcloud: %s, Details: %s"
                       % (strategy_step.stage,
                          strategy_step.state,
-                         self.get_region_name(strategy_step),
+                         self.get_subcloud_name(strategy_step),
                          details))
 
     @staticmethod
     def get_region_name(strategy_step):
+        """Get the region name for a strategy step"""
+        if strategy_step.subcloud_id is None:
+            # This is the SystemController.
+            return dccommon_consts.DEFAULT_REGION_NAME
+        return strategy_step.subcloud.region_name
+
+    @staticmethod
+    def get_subcloud_name(strategy_step):
         """Get the region name for a strategy step"""
         if strategy_step.subcloud_id is None:
             # This is the SystemController.
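The `get_region_name`/`get_subcloud_name` helpers added above share one rule: a strategy step with no `subcloud_id` targets the SystemController, and otherwise the region name (used to route API calls) and the subcloud name (used for display and logs) are read from separate fields. A standalone sketch of that dispatch, using `SimpleNamespace` stand-ins for the step and subcloud objects and hypothetical values for the constants:

```python
from types import SimpleNamespace

# Stand-in for dccommon_consts.DEFAULT_REGION_NAME (value assumed here).
DEFAULT_REGION_NAME = 'RegionOne'

def get_region_name(step):
    """Immutable identifier used to address the subcloud's endpoints."""
    if step.subcloud_id is None:
        # This is the SystemController.
        return DEFAULT_REGION_NAME
    return step.subcloud.region_name

def get_subcloud_name(step):
    """User-facing name; can change after a rename, unlike the region."""
    if step.subcloud_id is None:
        return DEFAULT_REGION_NAME
    return step.subcloud.name

sc = SimpleNamespace(name='subcloud1-renamed',
                     region_name='0db0e6ba12ea4da6a8f56f140277b5b0')
step = SimpleNamespace(subcloud_id=7, subcloud=sc)
print(get_region_name(step), get_subcloud_name(step))
```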
@ -1,5 +1,5 @@
|
|||||||
#
|
#
|
||||||
# Copyright (c) 2020-2022 Wind River Systems, Inc.
|
# Copyright (c) 2020-2023 Wind River Systems, Inc.
|
||||||
#
|
#
|
||||||
# SPDX-License-Identifier: Apache-2.0
|
# SPDX-License-Identifier: Apache-2.0
|
||||||
#
|
#
|
||||||
@ -33,10 +33,11 @@ class FinishingFwUpdateState(BaseState):
|
|||||||
% (dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
|
% (dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
|
||||||
dccommon_consts.SYNC_STATUS_IN_SYNC))
|
dccommon_consts.SYNC_STATUS_IN_SYNC))
|
||||||
dcmanager_state_rpc_client = dcmanager_rpc_client.SubcloudStateClient()
|
dcmanager_state_rpc_client = dcmanager_rpc_client.SubcloudStateClient()
|
||||||
# The subcloud name is the same as the region in the strategy_step
|
# The subcloud name may differ from the region name in the strategy_step
|
||||||
dcmanager_state_rpc_client.update_subcloud_endpoint_status(
|
dcmanager_state_rpc_client.update_subcloud_endpoint_status(
|
||||||
self.context,
|
self.context,
|
||||||
subcloud_name=self.get_region_name(strategy_step),
|
subcloud_name=self.get_subcloud_name(strategy_step),
|
||||||
|
subcloud_region=self.get_region_name(strategy_step),
|
||||||
endpoint_type=dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
|
endpoint_type=dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
|
||||||
sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)
|
sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)
|
||||||
|
|
||||||
|
@ -1,5 +1,5 @@
|
|||||||
#
|
#
|
||||||
# Copyright (c) 2020-2021 Wind River Systems, Inc.
|
# Copyright (c) 2020-2023 Wind River Systems, Inc.
|
||||||
#
|
#
|
||||||
# SPDX-License-Identifier: Apache-2.0
|
# SPDX-License-Identifier: Apache-2.0
|
||||||
#
|
#
|
||||||
@ -33,7 +33,7 @@ class LockHostState(BaseState):
|
|||||||
"""
|
"""
|
||||||
|
|
||||||
# Create a sysinv client on the subcloud
|
# Create a sysinv client on the subcloud
|
||||||
sysinv_client = self.get_sysinv_client(strategy_step.subcloud.name)
|
sysinv_client = self.get_sysinv_client(strategy_step.subcloud.region_name)
|
||||||
|
|
||||||
host = sysinv_client.get_host(self.target_hostname)
|
host = sysinv_client.get_host(self.target_hostname)
|
||||||
|
|
||||||
@ -58,7 +58,7 @@ class LockHostState(BaseState):
|
|||||||
raise StrategyStoppedException()
|
raise StrategyStoppedException()
|
||||||
# query the administrative state to see if it is the new state.
|
# query the administrative state to see if it is the new state.
|
||||||
host = self.get_sysinv_client(
|
host = self.get_sysinv_client(
|
||||||
strategy_step.subcloud.name).get_host(self.target_hostname)
|
strategy_step.subcloud.region_name).get_host(self.target_hostname)
|
||||||
if host.administrative == consts.ADMIN_LOCKED:
|
if host.administrative == consts.ADMIN_LOCKED:
|
||||||
msg = "Host: %s is now: %s" % (self.target_hostname,
|
msg = "Host: %s is now: %s" % (self.target_hostname,
|
||||||
host.administrative)
|
host.administrative)
|
||||||
|
@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2020-2021 Wind River Systems, Inc.
+# Copyright (c) 2020-2023 Wind River Systems, Inc.
 #
 # SPDX-License-Identifier: Apache-2.0
 #
@@ -61,7 +61,7 @@ class UnlockHostState(BaseState):
         """
 
         # Retrieve host from sysinv client on the subcloud
-        host = self._get_host_with_retry(strategy_step.subcloud.name)
+        host = self._get_host_with_retry(strategy_step.subcloud.region_name)
 
         # if the host is already in the desired state, no need for action
         if self.check_host_ready(host):
@@ -85,7 +85,7 @@ class UnlockHostState(BaseState):
         while True:
             try:
                 response = self.get_sysinv_client(
-                    strategy_step.subcloud.name).unlock_host(host.id)
+                    strategy_step.subcloud.region_name).unlock_host(host.id)
                 if (response.ihost_action != 'unlock' or
                         response.task != 'Unlocking'):
                     raise Exception("Unable to unlock host %s"
@@ -113,7 +113,7 @@ class UnlockHostState(BaseState):
             try:
                 # query the administrative state to see if it is the new state.
                 host = self.get_sysinv_client(
-                    strategy_step.subcloud.name).get_host(self.target_hostname)
+                    strategy_step.subcloud.region_name).get_host(self.target_hostname)
                 if self.check_host_ready(host):
                     # Success. Break out of the loop.
                     msg = "Host: %s is now: %s %s %s" % (self.target_hostname,

@@ -38,7 +38,7 @@ class ActivatingUpgradeState(BaseState):
     def get_upgrade_state(self, strategy_step):
         try:
             upgrades = self.get_sysinv_client(
-                strategy_step.subcloud.name).get_upgrades()
+                strategy_step.subcloud.region_name).get_upgrades()
 
         except Exception as exception:
             self.warn_log(strategy_step,
@@ -86,7 +86,7 @@ class ActivatingUpgradeState(BaseState):
 
             # if max retries have occurred, fail the state
            if activate_retry_counter >= self.max_failed_retries:
-                error_msg = utils.get_failure_msg(strategy_step.subcloud.name)
+                error_msg = utils.get_failure_msg(strategy_step.subcloud.region_name)
                db_api.subcloud_update(
                     self.context, strategy_step.subcloud_id,
                     error_description=error_msg[0:consts.ERROR_DESCRIPTION_LENGTH])
@@ -104,7 +104,7 @@ class ActivatingUpgradeState(BaseState):
             # (no upgrade found, bad host state, auth)
             try:
                 self.get_sysinv_client(
-                    strategy_step.subcloud.name).upgrade_activate()
+                    strategy_step.subcloud.region_name).upgrade_activate()
                 first_activate = False  # clear first activation flag
                 activate_retry_counter = 0  # reset activation retries
             except Exception as exception:
@@ -128,7 +128,7 @@ class ActivatingUpgradeState(BaseState):
                               % upgrade_state)
                 try:
                     self.get_sysinv_client(
-                        strategy_step.subcloud.name).upgrade_activate()
+                        strategy_step.subcloud.region_name).upgrade_activate()
                 except Exception as exception:
                     self.warn_log(strategy_step,
                                   "Encountered exception: %s, "
@@ -146,7 +146,7 @@ class ActivatingUpgradeState(BaseState):
                 break
             audit_counter += 1
            if audit_counter >= self.max_queries:
-                error_msg = utils.get_failure_msg(strategy_step.subcloud.name)
+                error_msg = utils.get_failure_msg(strategy_step.subcloud.region_name)
                db_api.subcloud_update(
                     self.context, strategy_step.subcloud_id,
                     error_description=error_msg[0:consts.ERROR_DESCRIPTION_LENGTH])

@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2020 Wind River Systems, Inc.
+# Copyright (c) 2020-2023 Wind River Systems, Inc.
 #
 # SPDX-License-Identifier: Apache-2.0
 #
@@ -32,7 +32,7 @@ class DeletingLoadState(BaseState):
         Any exceptions raised by this method set the strategy to FAILED.
         """
         # get the sysinv client for the subcloud
-        sysinv_client = self.get_sysinv_client(strategy_step.subcloud.name)
+        sysinv_client = self.get_sysinv_client(strategy_step.subcloud.region_name)
         current_loads = sysinv_client.get_loads()
         load_id = None
 

@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2020-2022 Wind River Systems, Inc.
+# Copyright (c) 2020-2023 Wind River Systems, Inc.
 #
 # SPDX-License-Identifier: Apache-2.0
 #
@@ -56,7 +56,7 @@ class FinishingPatchStrategyState(BaseState):
                        "RegionOne committed_patch_ids: %s" % committed_patch_ids)
 
         subcloud_patches = self.get_patching_client(
-            strategy_step.subcloud.name).query()
+            strategy_step.subcloud.region_name).query()
         self.debug_log(strategy_step,
                        "Patches for subcloud: %s" % subcloud_patches)
 
@@ -93,6 +93,6 @@ class FinishingPatchStrategyState(BaseState):
         self.info_log(strategy_step,
                       "Committing patches %s in subcloud" % patches_to_commit)
         self.get_patching_client(
-            strategy_step.subcloud.name).commit(patches_to_commit)
+            strategy_step.subcloud.region_name).commit(patches_to_commit)
 
         return self.next_state

@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2020-2022 Wind River Systems, Inc.
+# Copyright (c) 2020-2023 Wind River Systems, Inc.
 #
 # SPDX-License-Identifier: Apache-2.0
 #
@@ -49,13 +49,13 @@ class ImportingLoadState(BaseState):
             self.info_log(strategy_step, "Retrieving load list from subcloud...")
             # success when only one load, the active load, remains
            if len(self.get_sysinv_client(
-                    strategy_step.subcloud.name).get_loads()) == 1:
+                    strategy_step.subcloud.region_name).get_loads()) == 1:
                 msg = "Load: %s has been removed." % load_version
                 self.info_log(strategy_step, msg)
                 return True
         else:
             load = self.get_sysinv_client(
-                strategy_step.subcloud.name).get_load(load_id)
+                strategy_step.subcloud.region_name).get_load(load_id)
            if load.state == consts.IMPORTED_LOAD_STATE:
                 # success when load is imported
                 msg = "Load: %s is now: %s" % (load_version,
@@ -102,7 +102,7 @@ class ImportingLoadState(BaseState):
         load_info = {}
         # Check if the load is already imported by checking the version
         current_loads = self.get_sysinv_client(
-            strategy_step.subcloud.name).get_loads()
+            strategy_step.subcloud.region_name).get_loads()
 
         for load in current_loads:
            if load.software_version == target_version:
@@ -140,12 +140,12 @@ class ImportingLoadState(BaseState):
                 self.info_log(strategy_step,
                               "Deleting load %s..." % load_id_to_be_deleted)
                 self.get_sysinv_client(
-                    strategy_step.subcloud.name).delete_load(load_id_to_be_deleted)
+                    strategy_step.subcloud.region_name).delete_load(load_id_to_be_deleted)
                 req_info['type'] = LOAD_DELETE_REQUEST_TYPE
                 self._wait_for_request_to_complete(strategy_step, req_info)
 
         subcloud_type = self.get_sysinv_client(
-            strategy_step.subcloud.name).get_system().system_mode
+            strategy_step.subcloud.region_name).get_system().system_mode
         load_import_retry_counter = 0
         load = None
        if subcloud_type == consts.SYSTEM_MODE_SIMPLEX:
@@ -158,7 +158,7 @@ class ImportingLoadState(BaseState):
             target_load = {key: target_load[key] for key in creation_keys}
             try:
                 load = self.get_sysinv_client(
-                    strategy_step.subcloud.name).import_load_metadata(target_load)
+                    strategy_step.subcloud.region_name).import_load_metadata(target_load)
                 self.info_log(strategy_step,
                               "Load: %s is now: %s" % (
                                   load.software_version, load.state))
@@ -190,7 +190,7 @@ class ImportingLoadState(BaseState):
                 # Call the API. import_load blocks until the load state is 'importing'
                 self.info_log(strategy_step, "Sending load import request...")
                 load = self.get_sysinv_client(
-                    strategy_step.subcloud.name).import_load(iso_path, sig_path)
+                    strategy_step.subcloud.region_name).import_load(iso_path, sig_path)
 
                 break
             except VaultLoadMissingError:

@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2020, 2022 Wind River Systems, Inc.
+# Copyright (c) 2020-2023 Wind River Systems, Inc.
 #
 # SPDX-License-Identifier: Apache-2.0
 #
@@ -64,7 +64,7 @@ class InstallingLicenseState(BaseState):
 
         # retrieve the keystone session for the subcloud and query its license
         subcloud_sysinv_client = \
-            self.get_sysinv_client(strategy_step.subcloud.name)
+            self.get_sysinv_client(strategy_step.subcloud.region_name)
         subcloud_license_response = subcloud_sysinv_client.get_license()
         subcloud_license = subcloud_license_response.get('content')
         subcloud_error = subcloud_license_response.get('error')

@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2020-2022 Wind River Systems, Inc.
+# Copyright (c) 2020-2023 Wind River Systems, Inc.
 #
 # SPDX-License-Identifier: Apache-2.0
 #
@@ -76,7 +76,7 @@ class MigratingDataState(BaseState):
             try:
                 # query the administrative state to see if it is the new state.
                 host = self.get_sysinv_client(
-                    strategy_step.subcloud.name).get_host(target_hostname)
+                    strategy_step.subcloud.region_name).get_host(target_hostname)
                if (host.administrative == consts.ADMIN_UNLOCKED and
                         host.operational == consts.OPERATIONAL_ENABLED):
                     # Success. Break out of the loop.
@@ -160,7 +160,7 @@ class MigratingDataState(BaseState):
             msg_subcloud = utils.find_ansible_error_msg(
                 strategy_step.subcloud.name, log_file, consts.DEPLOY_STATE_MIGRATING_DATA)
             # Get script output in case it is available
-            error_msg = utils.get_failure_msg(strategy_step.subcloud.name)
+            error_msg = utils.get_failure_msg(strategy_step.subcloud.region_name)
             failure = ('%s \n%s' % (error_msg, msg_subcloud))
             db_api.subcloud_update(
                 self.context, strategy_step.subcloud_id,

@@ -1,5 +1,5 @@
 #
-# Copyright (c) 2020-2022 Wind River Systems, Inc.
+# Copyright (c) 2020-2023 Wind River Systems, Inc.
 #
 # SPDX-License-Identifier: Apache-2.0
 #
@@ -204,8 +204,8 @@ class PreCheckState(BaseState):
        if subcloud.availability_status == dccommon_consts.AVAILABILITY_ONLINE:
             subcloud_sysinv_client = None
             try:
-                subcloud_sysinv_client = self.get_sysinv_client(strategy_step.subcloud.name)
-                subcloud_fm_client = self.get_fm_client(strategy_step.subcloud.name)
+                subcloud_sysinv_client = self.get_sysinv_client(strategy_step.subcloud.region_name)
+                subcloud_fm_client = self.get_fm_client(strategy_step.subcloud.region_name)
             except Exception:
                 # if getting the token times out, the orchestrator may have
                 # restarted and subcloud may be offline; so will attempt
@@ -220,7 +220,7 @@ class PreCheckState(BaseState):
 
             host = subcloud_sysinv_client.get_host("controller-0")
             subcloud_type = self.get_sysinv_client(
-                strategy_step.subcloud.name).get_system().system_mode
+                strategy_step.subcloud.region_name).get_system().system_mode
 
             upgrades = subcloud_sysinv_client.get_upgrades()
            if subcloud_type == consts.SYSTEM_MODE_SIMPLEX:
@@ -313,7 +313,7 @@ class PreCheckState(BaseState):
 
         all_hosts_upgraded = True
         subcloud_hosts = self.get_sysinv_client(
-            strategy_step.subcloud.name).get_hosts()
+            strategy_step.subcloud.region_name).get_hosts()
         for subcloud_host in subcloud_hosts:
            if(subcloud_host.software_load != target_version or
                     subcloud_host.administrative == consts.ADMIN_LOCKED or

@@ -36,7 +36,7 @@ class StartingUpgradeState(BaseState):
     def get_upgrade_state(self, strategy_step):
         try:
             upgrades = self.get_sysinv_client(
-                strategy_step.subcloud.name).get_upgrades()
+                strategy_step.subcloud.region_name).get_upgrades()
         except Exception as exception:
             self.warn_log(strategy_step,
                           "Encountered exception: %s, "
@@ -58,7 +58,7 @@ class StartingUpgradeState(BaseState):
         # Check if an existing upgrade is already in progress.
         # The list of upgrades will never contain more than one entry.
         upgrades = self.get_sysinv_client(
-            strategy_step.subcloud.name).get_upgrades()
+            strategy_step.subcloud.region_name).get_upgrades()
        if upgrades is not None and len(upgrades) > 0:
             for upgrade in upgrades:
                 # If a previous upgrade exists (even one that failed) skip
@@ -79,7 +79,7 @@ class StartingUpgradeState(BaseState):
 
         # This call is asynchronous and throws an exception on failure.
         self.get_sysinv_client(
-            strategy_step.subcloud.name).upgrade_start(force=force_flag)
+            strategy_step.subcloud.region_name).upgrade_start(force=force_flag)
 
         # Do not move to the next state until the upgrade state is correct
         counter = 0
@@ -96,7 +96,7 @@ class StartingUpgradeState(BaseState):
            if upgrade_state in UPGRADE_RETRY_STATES:
                 retry_counter += 1
                if retry_counter >= self.max_failed_retries:
-                    error_msg = utils.get_failure_msg(strategy_step.subcloud.name)
+                    error_msg = utils.get_failure_msg(strategy_step.subcloud.region_name)
                    db_api.subcloud_update(
                         self.context, strategy_step.subcloud_id,
                         error_description=error_msg[0:consts.ERROR_DESCRIPTION_LENGTH])
@@ -110,7 +110,7 @@ class StartingUpgradeState(BaseState):
                               % upgrade_state)
                 try:
                     self.get_sysinv_client(
-                        strategy_step.subcloud.name).upgrade_start(force=force_flag)
+                        strategy_step.subcloud.region_name).upgrade_start(force=force_flag)
                 except Exception as exception:
                     self.warn_log(strategy_step,
                                   "Encountered exception: %s, "

@@ -48,7 +48,7 @@ class TransferCACertificateState(BaseState):
         retry_counter = 0
         while True:
             try:
-                sysinv_client = self.get_sysinv_client(strategy_step.subcloud.name)
+                sysinv_client = self.get_sysinv_client(strategy_step.subcloud.region_name)
 
                 data = {'mode': 'openldap_ca'}
                 ldap_ca_cert, ldap_ca_key = utils.get_certificate_from_secret(

@@ -38,9 +38,9 @@ class UpgradingSimplexState(BaseState):
         subcloud_barbican_client = None
         try:
             subcloud_sysinv_client = self.get_sysinv_client(
-                strategy_step.subcloud.name)
+                strategy_step.subcloud.region_name)
             subcloud_barbican_client = self.get_barbican_client(
-                strategy_step.subcloud.name)
+                strategy_step.subcloud.region_name)
         except Exception:
             # if getting the token times out, the orchestrator may have
             # restarted and subcloud may be offline; so will attempt

@@ -69,6 +69,7 @@ class SubcloudStateClient(RPCClient):
 
     def update_subcloud_availability(self, ctxt,
                                      subcloud_name,
+                                     subcloud_region,
                                      availability_status,
                                      update_state_only=False,
                                      audit_fail_count=None):
@@ -77,11 +78,13 @@ class SubcloudStateClient(RPCClient):
             ctxt,
             self.make_msg('update_subcloud_availability',
                           subcloud_name=subcloud_name,
+                          subcloud_region=subcloud_region,
                           availability_status=availability_status,
                           update_state_only=update_state_only,
                           audit_fail_count=audit_fail_count))
 
     def update_subcloud_endpoint_status(self, ctxt, subcloud_name=None,
+                                        subcloud_region=None,
                                         endpoint_type=None,
                                         sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC,
                                         ignore_endpoints=None,
@@ -90,12 +93,14 @@ class SubcloudStateClient(RPCClient):
         # See below for synchronous method call
         return self.cast(ctxt, self.make_msg('update_subcloud_endpoint_status',
                                              subcloud_name=subcloud_name,
+                                             subcloud_region=subcloud_region,
                                              endpoint_type=endpoint_type,
                                              sync_status=sync_status,
                                              ignore_endpoints=ignore_endpoints,
                                              alarmable=alarmable))
 
     def update_subcloud_endpoint_status_sync(self, ctxt, subcloud_name=None,
+                                             subcloud_region=None,
                                              endpoint_type=None,
                                              sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC,
                                              ignore_endpoints=None,
@@ -103,6 +108,7 @@ class SubcloudStateClient(RPCClient):
         # Note: synchronous
         return self.call(ctxt, self.make_msg('update_subcloud_endpoint_status',
                                              subcloud_name=subcloud_name,
+                                             subcloud_region=subcloud_region,
                                              endpoint_type=endpoint_type,
                                              sync_status=sync_status,
                                              ignore_endpoints=ignore_endpoints,
@@ -133,6 +139,12 @@ class ManagerClient(RPCClient):
         return self.call(ctxt, self.make_msg('delete_subcloud',
                                              subcloud_id=subcloud_id))
 
+    def rename_subcloud(self, ctxt, subcloud_id, curr_subcloud_name, new_subcloud_name=None):
+        return self.call(ctxt, self.make_msg('rename_subcloud',
+                                             subcloud_id=subcloud_id,
+                                             curr_subcloud_name=curr_subcloud_name,
+                                             new_subcloud_name=new_subcloud_name))
+
     def update_subcloud(self, ctxt, subcloud_id, management_state=None,
                         description=None, location=None, group_id=None,
                         data_install=None, force=None,
@@ -173,13 +185,13 @@ class ManagerClient(RPCClient):
                                              payload=payload))
 
     def update_subcloud_sync_endpoint_type(self, ctxt,
-                                           subcloud_name,
+                                           subcloud_region,
                                            endpoint_type_list,
                                            openstack_installed):
         return self.cast(
             ctxt,
             self.make_msg('update_subcloud_sync_endpoint_type',
-                          subcloud_name=subcloud_name,
+                          subcloud_region=subcloud_region,
                           endpoint_type_list=endpoint_type_list,
                           openstack_installed=openstack_installed))
 
@@ -229,6 +241,10 @@ class ManagerClient(RPCClient):
                                              subcloud_ref=subcloud_ref,
                                              payload=payload))
 
+    def get_subcloud_name_by_region_name(self, ctxt, subcloud_region):
+        return self.call(ctxt, self.make_msg('get_subcloud_name_by_region_name',
+                                             subcloud_region=subcloud_region))
+
 
 class DCManagerNotifications(RPCClient):
     """DC Manager Notification interface to broadcast subcloud state changed

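The two new `ManagerClient` methods (`rename_subcloud`, `get_subcloud_name_by_region_name`) follow the existing `self.call(ctxt, self.make_msg(...))` pattern of this RPC client. A minimal sketch of the message shape being built — this `make_msg` is a hypothetical stand-in for the oslo.messaging helper, shown only to illustrate the keyword-argument payload:

```python
# Hypothetical stand-in for RPCClient.make_msg: the method name plus
# keyword arguments become the RPC payload dispatched to the manager.
def make_msg(method, **kwargs):
    return {'method': method, 'args': kwargs}

msg = make_msg('rename_subcloud',
               subcloud_id=1,
               curr_subcloud_name='subcloud1',
               new_subcloud_name='edge-site-a')
assert msg['method'] == 'rename_subcloud'
assert msg['args']['new_subcloud_name'] == 'edge-site-a'

lookup = make_msg('get_subcloud_name_by_region_name',
                  subcloud_region='2ec93dfb654846909efe61d1b39dd2ce')
assert lookup['args']['subcloud_region'].isalnum()
```

Because `rename_subcloud` uses `self.call` rather than `self.cast`, the rename is synchronous: the caller blocks until the manager confirms the new name.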
@@ -10,7 +10,7 @@
 # License for the specific language governing permissions and limitations
 # under the License.
 #
-# Copyright (c) 2017-2022 Wind River Systems, Inc.
+# Copyright (c) 2017-2023 Wind River Systems, Inc.
 #
 # The right to copy, distribute, modify, or otherwise make use
 # of this software may be licensed only pursuant to the terms
@@ -113,6 +113,7 @@ class DCManagerStateService(service.Service):
 
     @request_context
     def update_subcloud_endpoint_status(self, context, subcloud_name=None,
+                                        subcloud_region=None,
                                         endpoint_type=None,
                                         sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC,
                                         alarmable=True,
@@ -124,7 +125,7 @@ class DCManagerStateService(service.Service):
 
         self.subcloud_state_manager. \
             update_subcloud_endpoint_status(context,
-                                            subcloud_name,
+                                            subcloud_region,
                                             endpoint_type,
                                             sync_status,
                                             alarmable,
@@ -153,6 +154,7 @@ class DCManagerStateService(service.Service):
     @request_context
     def update_subcloud_availability(self, context,
                                     subcloud_name,
+                                    subcloud_region,
                                     availability_status,
                                     update_state_only=False,
                                     audit_fail_count=None):
@@ -161,7 +163,7 @@ class DCManagerStateService(service.Service):
                  subcloud_name)
         self.subcloud_state_manager.update_subcloud_availability(
             context,
-            subcloud_name,
+            subcloud_region,
             availability_status,
             update_state_only,
             audit_fail_count)

|
@ -42,9 +42,9 @@ def sync_update_subcloud_endpoint_status(func):
|
|||||||
"""Synchronized lock decorator for _update_subcloud_endpoint_status. """
|
"""Synchronized lock decorator for _update_subcloud_endpoint_status. """
|
||||||
|
|
||||||
def _get_lock_and_call(*args, **kwargs):
|
def _get_lock_and_call(*args, **kwargs):
|
||||||
"""Get a single fair lock per subcloud based on subcloud name. """
|
"""Get a single fair lock per subcloud based on subcloud region. """
|
||||||
|
|
||||||
# subcloud name is the 3rd argument to
|
# subcloud region is the 3rd argument to
|
||||||
# _update_subcloud_endpoint_status()
|
# _update_subcloud_endpoint_status()
|
||||||
@utils.synchronized(args[2], external=True, fair=True)
|
@utils.synchronized(args[2], external=True, fair=True)
|
||||||
def _call_func(*args, **kwargs):
|
def _call_func(*args, **kwargs):
|
||||||
@ -262,7 +262,7 @@ class SubcloudStateManager(manager.Manager):
|
|||||||
@sync_update_subcloud_endpoint_status
|
@sync_update_subcloud_endpoint_status
|
||||||
def _update_subcloud_endpoint_status(
|
def _update_subcloud_endpoint_status(
|
||||||
self, context,
|
self, context,
|
||||||
subcloud_name,
|
subcloud_region,
|
||||||
endpoint_type=None,
|
endpoint_type=None,
|
||||||
sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC,
|
sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC,
|
||||||
alarmable=True,
|
alarmable=True,
|
||||||
@ -270,7 +270,7 @@ class SubcloudStateManager(manager.Manager):
|
|||||||
"""Update subcloud endpoint status
|
"""Update subcloud endpoint status
|
||||||
|
|
||||||
:param context: request context object
|
:param context: request context object
|
||||||
:param subcloud_name: name of subcloud to update
|
:param subcloud_region: name of subcloud region to update
|
||||||
:param endpoint_type: endpoint type to update
|
:param endpoint_type: endpoint type to update
|
||||||
:param sync_status: sync status to set
|
:param sync_status: sync status to set
|
||||||
:param alarmable: controls raising an alarm if applicable
|
:param alarmable: controls raising an alarm if applicable
|
||||||
@ -281,13 +281,13 @@ class SubcloudStateManager(manager.Manager):
|
|||||||
if ignore_endpoints is None:
|
if ignore_endpoints is None:
|
||||||
ignore_endpoints = []
|
ignore_endpoints = []
|
||||||
|
|
||||||
if not subcloud_name:
|
if not subcloud_region:
|
||||||
raise exceptions.BadRequest(
|
raise exceptions.BadRequest(
|
||||||
resource='subcloud',
|
resource='subcloud',
|
||||||
msg='Subcloud name not provided')
|
msg='Subcloud region not provided')
|
||||||
|
|
||||||
try:
|
try:
|
||||||
subcloud = db_api.subcloud_get_by_name(context, subcloud_name)
|
subcloud = db_api.subcloud_get_by_region_name(context, subcloud_region)
|
||||||
except Exception as e:
|
except Exception as e:
|
||||||
LOG.exception(e)
|
LOG.exception(e)
|
||||||
raise e
|
raise e
|
||||||
@ -327,12 +327,12 @@ class SubcloudStateManager(manager.Manager):
|
|||||||
else:
|
else:
|
||||||
LOG.info("Ignoring subcloud sync_status update for subcloud:%s "
|
LOG.info("Ignoring subcloud sync_status update for subcloud:%s "
|
||||||
"availability:%s management:%s endpoint:%s sync:%s" %
|
"availability:%s management:%s endpoint:%s sync:%s" %
|
||||||
(subcloud_name, subcloud.availability_status,
|
(subcloud.name, subcloud.availability_status,
|
||||||
subcloud.management_state, endpoint_type, sync_status))
|
subcloud.management_state, endpoint_type, sync_status))
|
||||||
|
|
||||||
def update_subcloud_endpoint_status(
|
def update_subcloud_endpoint_status(
|
||||||
self, context,
|
self, context,
|
||||||
subcloud_name=None,
|
subcloud_region=None,
|
||||||
endpoint_type=None,
|
endpoint_type=None,
|
||||||
sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC,
|
sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC,
|
||||||
alarmable=True,
|
alarmable=True,
|
||||||
@ -340,7 +340,7 @@ class SubcloudStateManager(manager.Manager):
|
|||||||
"""Update subcloud endpoint status
|
"""Update subcloud endpoint status
|
||||||
|
|
||||||
:param context: request context object
|
:param context: request context object
|
||||||
:param subcloud_name: name of subcloud to update
|
:param subcloud_region: region of subcloud to update
|
||||||
:param endpoint_type: endpoint type to update
|
:param endpoint_type: endpoint type to update
|
||||||
:param sync_status: sync status to set
|
:param sync_status: sync status to set
|
||||||
:param alarmable: controls raising an alarm if applicable
|
:param alarmable: controls raising an alarm if applicable
|
||||||
@ -351,18 +351,18 @@ class SubcloudStateManager(manager.Manager):
|
|||||||
if ignore_endpoints is None:
|
if ignore_endpoints is None:
|
||||||
ignore_endpoints = []
|
ignore_endpoints = []
|
||||||
|
|
||||||
if subcloud_name:
|
if subcloud_region:
|
||||||
self._update_subcloud_endpoint_status(
|
self._update_subcloud_endpoint_status(
|
||||||
context, subcloud_name, endpoint_type, sync_status, alarmable,
|
context, subcloud_region, endpoint_type, sync_status, alarmable,
|
||||||
ignore_endpoints)
|
ignore_endpoints)
|
||||||
else:
|
else:
|
||||||
# update all subclouds
|
# update all subclouds
|
||||||
for subcloud in db_api.subcloud_get_all(context):
|
for subcloud in db_api.subcloud_get_all(context):
|
||||||
self._update_subcloud_endpoint_status(
|
self._update_subcloud_endpoint_status(
|
||||||
context, subcloud.name, endpoint_type, sync_status,
|
context, subcloud.region_name, endpoint_type, sync_status,
|
||||||
alarmable, ignore_endpoints)
|
alarmable, ignore_endpoints)
|
||||||
|
|
||||||
def _update_subcloud_state(self, context, subcloud_name,
|
def _update_subcloud_state(self, context, subcloud_name, subcloud_region,
|
||||||
management_state, availability_status):
|
management_state, availability_status):
|
||||||
try:
|
try:
|
||||||
LOG.info('Notifying dcorch, subcloud:%s management: %s, '
|
LOG.info('Notifying dcorch, subcloud:%s management: %s, '
|
||||||
@ -372,7 +372,7 @@ class SubcloudStateManager(manager.Manager):
|
|||||||
availability_status))
|
availability_status))
|
||||||
|
|
||||||
self.dcorch_rpc_client.update_subcloud_states(
|
self.dcorch_rpc_client.update_subcloud_states(
|
||||||
context, subcloud_name, management_state, availability_status)
|
context, subcloud_region, management_state, availability_status)
|
||||||
|
|
||||||
except Exception:
|
except Exception:
|
||||||
LOG.exception('Problem informing dcorch of subcloud state change,'
|
LOG.exception('Problem informing dcorch of subcloud state change,'
|
||||||
@ -418,20 +418,21 @@ class SubcloudStateManager(manager.Manager):
|
|||||||
LOG.exception("Failed to raise offline alarm for subcloud: %s",
|
LOG.exception("Failed to raise offline alarm for subcloud: %s",
|
||||||
subcloud_name)
|
subcloud_name)
|
||||||
|
|
||||||
def update_subcloud_availability(self, context, subcloud_name,
|
def update_subcloud_availability(self, context, subcloud_region,
|
||||||
availability_status,
|
availability_status,
|
||||||
update_state_only=False,
|
update_state_only=False,
|
||||||
audit_fail_count=None):
|
audit_fail_count=None):
|
||||||
try:
|
try:
|
||||||
subcloud = db_api.subcloud_get_by_name(context, subcloud_name)
|
subcloud = db_api.subcloud_get_by_region_name(context, subcloud_region)
|
||||||
except Exception:
|
except Exception:
|
||||||
LOG.exception("Failed to get subcloud by name: %s" % subcloud_name)
|
LOG.exception("Failed to get subcloud by region name %s" % subcloud_region)
|
||||||
raise
|
raise
|
||||||
|
|
||||||
if update_state_only:
|
if update_state_only:
|
||||||
# Nothing has changed, but we want to send a state update for this
|
# Nothing has changed, but we want to send a state update for this
|
||||||
# subcloud as an audit. Get the most up-to-date data.
|
# subcloud as an audit. Get the most up-to-date data.
|
||||||
self._update_subcloud_state(context, subcloud_name,
|
self._update_subcloud_state(context, subcloud.name,
|
||||||
|
subcloud.region_name,
|
||||||
subcloud.management_state,
|
subcloud.management_state,
|
||||||
availability_status)
|
availability_status)
|
||||||
elif availability_status is None:
|
elif availability_status is None:
|
||||||
@ -443,17 +444,17 @@ class SubcloudStateManager(manager.Manager):
|
|||||||
# slim possibility subcloud could have been deleted since
|
# slim possibility subcloud could have been deleted since
|
||||||
# we found it in db, ignore this benign error.
|
# we found it in db, ignore this benign error.
|
||||||
LOG.info('Ignoring SubcloudNotFound when attempting '
|
LOG.info('Ignoring SubcloudNotFound when attempting '
|
||||||
'audit_fail_count update: %s' % subcloud_name)
|
'audit_fail_count update: %s' % subcloud.name)
|
||||||
return
|
return
|
||||||
else:
|
else:
|
||||||
self._raise_or_clear_subcloud_status_alarm(subcloud_name,
|
self._raise_or_clear_subcloud_status_alarm(subcloud.name,
|
||||||
availability_status)
|
availability_status)
|
||||||
|
|
||||||
if availability_status == dccommon_consts.AVAILABILITY_OFFLINE:
|
if availability_status == dccommon_consts.AVAILABILITY_OFFLINE:
|
||||||
# Subcloud is going offline, set all endpoint statuses to
|
# Subcloud is going offline, set all endpoint statuses to
|
||||||
# unknown.
|
# unknown.
|
||||||
self._update_subcloud_endpoint_status(
|
self._update_subcloud_endpoint_status(
|
||||||
context, subcloud_name, endpoint_type=None,
|
context, subcloud.region_name, endpoint_type=None,
|
||||||
sync_status=dccommon_consts.SYNC_STATUS_UNKNOWN)
|
sync_status=dccommon_consts.SYNC_STATUS_UNKNOWN)
|
||||||
|
|
||||||
try:
|
try:
|
||||||
@ -466,27 +467,28 @@ class SubcloudStateManager(manager.Manager):
|
|||||||
# slim possibility subcloud could have been deleted since
|
# slim possibility subcloud could have been deleted since
|
||||||
# we found it in db, ignore this benign error.
|
# we found it in db, ignore this benign error.
|
||||||
LOG.info('Ignoring SubcloudNotFound when attempting state'
|
LOG.info('Ignoring SubcloudNotFound when attempting state'
|
||||||
' update: %s' % subcloud_name)
|
' update: %s' % subcloud.name)
|
||||||
return
|
return
|
||||||
|
|
||||||
if availability_status == dccommon_consts.AVAILABILITY_ONLINE:
|
if availability_status == dccommon_consts.AVAILABILITY_ONLINE:
|
||||||
# Subcloud is going online
|
# Subcloud is going online
|
||||||
# Tell cert-mon to audit endpoint certificate.
|
# Tell cert-mon to audit endpoint certificate.
|
||||||
LOG.info('Request for online audit for %s' % subcloud_name)
|
LOG.info('Request for online audit for %s' % subcloud.name)
|
||||||
dc_notification = rpc_client.DCManagerNotifications()
|
dc_notification = rpc_client.DCManagerNotifications()
|
||||||
dc_notification.subcloud_online(context, subcloud_name)
|
dc_notification.subcloud_online(context, subcloud.region_name)
|
||||||
# Trigger all the audits for the subcloud so it can update the
|
# Trigger all the audits for the subcloud so it can update the
|
||||||
# sync status ASAP.
|
# sync status ASAP.
|
||||||
self.audit_rpc_client.trigger_subcloud_audits(context,
|
self.audit_rpc_client.trigger_subcloud_audits(context,
|
||||||
subcloud.id)
|
subcloud.id)
|
||||||
|
|
||||||
# Send dcorch a state update
|
# Send dcorch a state update
|
||||||
self._update_subcloud_state(context, subcloud_name,
|
self._update_subcloud_state(context, subcloud.name,
|
||||||
|
subcloud.region_name,
|
||||||
updated_subcloud.management_state,
|
updated_subcloud.management_state,
|
||||||
availability_status)
|
availability_status)
|
||||||
|
|
||||||
def update_subcloud_sync_endpoint_type(self, context,
|
def update_subcloud_sync_endpoint_type(self, context,
|
||||||
subcloud_name,
|
subcloud_region,
|
||||||
endpoint_type_list,
|
endpoint_type_list,
|
||||||
openstack_installed):
|
openstack_installed):
|
||||||
operation = 'add' if openstack_installed else 'remove'
|
operation = 'add' if openstack_installed else 'remove'
|
||||||
@ -502,17 +504,17 @@ class SubcloudStateManager(manager.Manager):
|
|||||||
}
|
}
|
||||||
|
|
||||||
try:
|
try:
|
||||||
subcloud = db_api.subcloud_get_by_name(context, subcloud_name)
|
subcloud = db_api.subcloud_get_by_region_name(context, subcloud_region)
|
||||||
except Exception:
|
except Exception:
|
||||||
LOG.exception("Failed to get subcloud by name: %s" % subcloud_name)
|
LOG.exception("Failed to get subcloud by region name: %s" % subcloud_region)
|
||||||
raise
|
raise
|
||||||
|
|
||||||
try:
|
try:
|
||||||
# Notify dcorch to add/remove sync endpoint type list
|
# Notify dcorch to add/remove sync endpoint type list
|
||||||
func_switcher[operation][0](self.context, subcloud_name,
|
func_switcher[operation][0](self.context, subcloud_region,
|
||||||
endpoint_type_list)
|
endpoint_type_list)
|
||||||
LOG.info('Notifying dcorch, subcloud: %s new sync endpoint: %s' %
|
LOG.info('Notifying dcorch, subcloud: %s new sync endpoint: %s' %
|
||||||
(subcloud_name, endpoint_type_list))
|
(subcloud.name, endpoint_type_list))
|
||||||
|
|
||||||
# Update subcloud status table by adding/removing openstack sync
|
# Update subcloud status table by adding/removing openstack sync
|
||||||
# endpoint types
|
# endpoint types
|
||||||
@ -524,4 +526,4 @@ class SubcloudStateManager(manager.Manager):
|
|||||||
openstack_installed=openstack_installed)
|
openstack_installed=openstack_installed)
|
||||||
except Exception:
|
except Exception:
|
||||||
LOG.exception('Problem informing dcorch of subcloud sync endpoint'
|
LOG.exception('Problem informing dcorch of subcloud sync endpoint'
|
||||||
' type change, subcloud: %s' % subcloud_name)
|
' type change, subcloud: %s' % subcloud.name)
|
||||||
|
@@ -36,6 +36,29 @@ get_engine = api.get_engine
 from sqlalchemy.engine import Engine
 from sqlalchemy import event
 
+SUBCLOUD_1 = {'name': 'subcloud1',
+              'region_name': '2ec93dfb654846909efe61d1b39dd2ce'}
+SUBCLOUD_2 = {'name': 'subcloud2',
+              'region_name': 'ca2761ee7aa34cbe8415ec9a3c86854f'}
+SUBCLOUD_3 = {'name': 'subcloud3',
+              'region_name': '659e12e5f7ad411abfcd83f5cedca0bf'}
+SUBCLOUD_4 = {'name': 'subcloud4',
+              'region_name': 'c25f3b0553384104b664789bd93a2ba8'}
+SUBCLOUD_5 = {'name': 'subcloud5',
+              'region_name': '809581dc2d154e008480bac1f43b7aff'}
+SUBCLOUD_6 = {'name': 'subcloud6',
+              'region_name': '8c60b99f3e1245b7bc5a049802ade8d2'}
+SUBCLOUD_7 = {'name': 'subcloud7',
+              'region_name': '9fde6dca22fa422bb1e8cf03bedc18e4'}
+SUBCLOUD_8 = {'name': 'subcloud8',
+              'region_name': 'f3cb0b109c4543fda3ed50ed5783279d'}
+SUBCLOUD_9 = {'name': 'subcloud9',
+              'region_name': '1cfab1df7b444bb3bd562894d684f352'}
+SUBCLOUD_10 = {'name': 'subcloud10',
+               'region_name': '6d0040199b4f4a9fb4a1f2ed4d498159'}
+SUBCLOUD_11 = {'name': 'subcloud11',
+               'region_name': '169e6fc231e94959ad6ff0a66fbcb753'}
+
 SUBCLOUD_SAMPLE_DATA_0 = [
     6,  # id
     "subcloud-4",  # name
@@ -63,6 +86,7 @@ SUBCLOUD_SAMPLE_DATA_0 = [
     1,  # group_id
     consts.DEPLOY_STATE_DONE,  # deploy_status
     consts.ERROR_DESC_EMPTY,  # error_description
+    SUBCLOUD_4['region_name'],  # region_name
     json.dumps({'data_install': 'test data install values'}),  # data_install
 ]
 
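The fixture region names added above are 32-character hex strings, matching the commit's "unique region name based on the UUID format". A sketch of how such a value can be produced with the standard library; `generate_region_name` is a hypothetical helper for illustration, not the actual dcmanager API:

```python
import uuid

def generate_region_name():
    # 32 lowercase hex characters, e.g. '2ec93dfb654846909efe61d1b39dd2ce'
    return uuid.uuid4().hex
```

Because the value is random per subcloud, it can stay fixed for the subcloud's lifetime while the display name is renamed freely.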
@@ -17,6 +17,7 @@ from dcmanager.common import consts
 from dcmanager.db.sqlalchemy import api as db_api
 from dcmanager.rpc import client as rpc_client
 
+from dcmanager.tests import base
 from dcmanager.tests.unit.api import test_root_controller as testroot
 from dcmanager.tests.unit.common import fake_subcloud
 from dcmanager.tests import utils
@@ -1059,7 +1060,9 @@ class TestSubcloudRestore(testroot.DCManagerApiTest):
 
         test_group_id = 1
         subcloud = fake_subcloud.create_fake_subcloud(self.ctx, group_id=test_group_id)
-        subcloud2 = fake_subcloud.create_fake_subcloud(self.ctx, group_id=test_group_id, name='subcloud2')
+        subcloud2 = fake_subcloud.create_fake_subcloud(self.ctx, group_id=test_group_id,
+                                                       name=base.SUBCLOUD_2['name'],
+                                                       region_name=base.SUBCLOUD_2['region_name'])
         # Valid subcloud, management state is 'unmanaged'
         db_api.subcloud_update(self.ctx,
                                subcloud.id,
@@ -239,6 +239,7 @@ class TestSubcloudGroupGet(testroot.DCManagerApiTest,
             FAKE_SUBCLOUD_DATA.get('systemcontroller_gateway_ip'),
         'deploy_status': FAKE_SUBCLOUD_DATA.get('deploy_status'),
         'error_description': FAKE_SUBCLOUD_DATA.get('error_description'),
+        'region_name': FAKE_SUBCLOUD_DATA.get('region_name'),
         'openstack_installed':
             FAKE_SUBCLOUD_DATA.get('openstack_installed'),
         'group_id': FAKE_SUBCLOUD_DATA.get('group_id', 1)
@@ -1429,7 +1429,8 @@ class TestSubcloudAPIOther(testroot.DCManagerApiTest):
                                    data, headers=FAKE_HEADERS)
 
         self.mock_rpc_state_client().update_subcloud_endpoint_status.\
-            assert_called_once_with(mock.ANY, subcloud.name, 'dc-cert', 'in-sync')
+            assert_called_once_with(mock.ANY, subcloud.name, subcloud.region_name,
+                                    'dc-cert', 'in-sync')
 
         self.assertEqual(response.status_int, 200)
 
@@ -28,7 +28,6 @@ from dcmanager.audit import subcloud_audit_manager
 from dcmanager.tests import base
 from dcmanager.tests import utils
 
-
 CONF = cfg.CONF
 
 
@@ -462,11 +461,14 @@ class TestFirmwareAudit(base.DCManagerTestCase):
         am.firmware_audit = fm
         firmware_audit_data = self.get_fw_audit_data(am)
 
-        for name in ['subcloud1', 'subcloud2']:
-            fm.subcloud_firmware_audit(name, firmware_audit_data)
+        subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
+                     base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
+        for name, region in subclouds.items():
+            fm.subcloud_firmware_audit(name, region, firmware_audit_data)
         expected_calls = [
             mock.call(mock.ANY,
                       subcloud_name=name,
+                      subcloud_region=region,
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)]
         self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
@@ -494,11 +496,14 @@ class TestFirmwareAudit(base.DCManagerTestCase):
         am.firmware_audit = fm
         firmware_audit_data = self.get_fw_audit_data(am)
 
-        for name in ['subcloud1', 'subcloud2']:
-            fm.subcloud_firmware_audit(name, firmware_audit_data)
+        subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
+                     base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
+        for name, region in subclouds.items():
+            fm.subcloud_firmware_audit(name, region, firmware_audit_data)
         expected_calls = [
             mock.call(mock.ANY,
                       subcloud_name=name,
+                      subcloud_region=region,
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)]
         self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
@@ -525,11 +530,14 @@ class TestFirmwareAudit(base.DCManagerTestCase):
         am.firmware_audit = fm
         firmware_audit_data = self.get_fw_audit_data(am)
 
-        for name in ['subcloud1', 'subcloud2']:
-            fm.subcloud_firmware_audit(name, firmware_audit_data)
+        subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
+                     base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
+        for name, region in subclouds.items():
+            fm.subcloud_firmware_audit(name, region, firmware_audit_data)
         expected_calls = [
             mock.call(mock.ANY,
                       subcloud_name=name,
+                      subcloud_region=region,
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)]
         self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
@@ -556,11 +564,14 @@ class TestFirmwareAudit(base.DCManagerTestCase):
         am.firmware_audit = fm
         firmware_audit_data = self.get_fw_audit_data(am)
 
-        for name in ['subcloud1', 'subcloud2']:
-            fm.subcloud_firmware_audit(name, firmware_audit_data)
+        subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
+                     base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
+        for name, region in subclouds.items():
+            fm.subcloud_firmware_audit(name, region, firmware_audit_data)
         expected_calls = [
             mock.call(mock.ANY,
                       subcloud_name=name,
+                      subcloud_region=region,
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
                       sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC)]
         self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
@@ -587,11 +598,14 @@ class TestFirmwareAudit(base.DCManagerTestCase):
         am.firmware_audit = fm
         firmware_audit_data = self.get_fw_audit_data(am)
 
-        for name in ['subcloud1', 'subcloud2']:
-            fm.subcloud_firmware_audit(name, firmware_audit_data)
+        subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
+                     base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
+        for name, region in subclouds.items():
+            fm.subcloud_firmware_audit(name, region, firmware_audit_data)
         expected_calls = [
             mock.call(mock.ANY,
                       subcloud_name=name,
+                      subcloud_region=region,
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
                       sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC)]
         self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
@@ -618,11 +632,14 @@ class TestFirmwareAudit(base.DCManagerTestCase):
         am.firmware_audit = fm
         firmware_audit_data = self.get_fw_audit_data(am)
 
-        for name in ['subcloud1', 'subcloud2']:
-            fm.subcloud_firmware_audit(name, firmware_audit_data)
+        subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
+                     base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
+        for name, region in subclouds.items():
+            fm.subcloud_firmware_audit(name, region, firmware_audit_data)
         expected_calls = [
             mock.call(mock.ANY,
                       subcloud_name=name,
+                      subcloud_region=region,
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)]
         self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
@@ -649,11 +666,14 @@ class TestFirmwareAudit(base.DCManagerTestCase):
         am.firmware_audit = fm
         firmware_audit_data = self.get_fw_audit_data(am)
 
-        for name in ['subcloud1', 'subcloud2']:
-            fm.subcloud_firmware_audit(name, firmware_audit_data)
+        subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
+                     base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
+        for name, region in subclouds.items():
+            fm.subcloud_firmware_audit(name, region, firmware_audit_data)
         expected_calls = [
             mock.call(mock.ANY,
                       subcloud_name=name,
+                      subcloud_region=region,
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)]
         self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
@@ -680,11 +700,14 @@ class TestFirmwareAudit(base.DCManagerTestCase):
         am.firmware_audit = fm
         firmware_audit_data = self.get_fw_audit_data(am)
 
-        for name in ['subcloud1', 'subcloud2']:
-            fm.subcloud_firmware_audit(name, firmware_audit_data)
+        subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
+                     base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
+        for name, region in subclouds.items():
+            fm.subcloud_firmware_audit(name, region, firmware_audit_data)
         expected_calls = [
             mock.call(mock.ANY,
                       subcloud_name=name,
+                      subcloud_region=region,
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_FIRMWARE,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)]
         self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
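The reworked audit tests above drive each audit with a name-to-region mapping and assert that the mocked state API now receives `subcloud_region` alongside `subcloud_name`. The assertion pattern can be reduced to this self-contained `unittest.mock` sketch (the endpoint and status strings here are placeholders, not the dccommon constants):

```python
from unittest import mock

state_api = mock.Mock()

# name -> immutable region, mirroring base.SUBCLOUD_1 / base.SUBCLOUD_2
subclouds = {'subcloud1': '2ec93dfb654846909efe61d1b39dd2ce',
             'subcloud2': 'ca2761ee7aa34cbe8415ec9a3c86854f'}

for name, region in subclouds.items():
    # the audit code now passes the region alongside the display name
    state_api.update_subcloud_endpoint_status(
        mock.ANY, subcloud_name=name, subcloud_region=region,
        endpoint_type='firmware', sync_status='in-sync')

expected_calls = [mock.call(mock.ANY, subcloud_name=name, subcloud_region=region,
                            endpoint_type='firmware', sync_status='in-sync')
                  for name, region in subclouds.items()]
state_api.update_subcloud_endpoint_status.assert_has_calls(expected_calls)
```

`assert_has_calls` verifies the calls in order, so the dict's insertion order matters in both the driver loop and the expected list.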
@ -24,7 +24,6 @@ from dcmanager.audit import subcloud_audit_manager
|
|||||||
from dcmanager.tests import base
|
from dcmanager.tests import base
|
||||||
from dcmanager.tests import utils
|
from dcmanager.tests import utils
|
||||||
|
|
||||||
|
|
||||||
PREVIOUS_KUBE_VERSION = 'v1.2.3'
|
PREVIOUS_KUBE_VERSION = 'v1.2.3'
|
||||||
UPGRADED_KUBE_VERSION = 'v1.2.3-a'
|
UPGRADED_KUBE_VERSION = 'v1.2.3-a'
|
||||||
|
|
||||||
@ -166,11 +165,14 @@ class TestKubernetesAudit(base.DCManagerTestCase):
|
|||||||
am.kubernetes_audit = audit
|
am.kubernetes_audit = audit
|
||||||
kubernetes_audit_data = self.get_kube_audit_data(am)
|
kubernetes_audit_data = self.get_kube_audit_data(am)
|
||||||
|
|
||||||
for name in ['subcloud1', 'subcloud2']:
|
subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
|
||||||
audit.subcloud_kubernetes_audit(name, kubernetes_audit_data)
|
base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
|
||||||
|
for name, region in subclouds.items():
|
||||||
|
audit.subcloud_kubernetes_audit(name, region, kubernetes_audit_data)
|
||||||
expected_calls = [
|
expected_calls = [
|
||||||
mock.call(mock.ANY,
|
mock.call(mock.ANY,
|
||||||
subcloud_name=name,
|
subcloud_name=name,
|
||||||
|
subcloud_region=region,
|
||||||
endpoint_type=dccommon_consts.ENDPOINT_TYPE_KUBERNETES,
|
endpoint_type=dccommon_consts.ENDPOINT_TYPE_KUBERNETES,
|
||||||
sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)]
|
sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)]
|
||||||
self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
|
self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
|
||||||
@ -190,15 +192,18 @@ class TestKubernetesAudit(base.DCManagerTestCase):
|
|||||||
]
|
]
|
||||||
kubernetes_audit_data = self.get_kube_audit_data(am)
|
kubernetes_audit_data = self.get_kube_audit_data(am)
|
||||||
|
|
||||||
for name in ['subcloud1', 'subcloud2']:
|
subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
|
||||||
|
base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
|
||||||
|
for name, region in subclouds.items():
|
||||||
# return different kube versions in the subclouds
|
# return different kube versions in the subclouds
|
||||||
self.kube_sysinv_client.get_kube_versions.return_value = [
|
self.kube_sysinv_client.get_kube_versions.return_value = [
|
||||||
FakeKubeVersion(version=PREVIOUS_KUBE_VERSION),
|
FakeKubeVersion(version=PREVIOUS_KUBE_VERSION),
|
||||||
]
|
]
|
||||||
audit.subcloud_kubernetes_audit(name, kubernetes_audit_data)
|
audit.subcloud_kubernetes_audit(name, region, kubernetes_audit_data)
|
||||||
expected_calls = [
|
expected_calls = [
|
||||||
mock.call(mock.ANY,
|
mock.call(mock.ANY,
|
||||||
subcloud_name=name,
|
subcloud_name=name,
|
||||||
|
subcloud_region=region,
|
||||||
endpoint_type=dccommon_consts.ENDPOINT_TYPE_KUBERNETES,
|
endpoint_type=dccommon_consts.ENDPOINT_TYPE_KUBERNETES,
|
||||||
sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC)]
|
sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC)]
|
||||||
self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
|
self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
|
||||||
@ -218,15 +223,18 @@ class TestKubernetesAudit(base.DCManagerTestCase):
|
|||||||
]
|
]
|
||||||
kubernetes_audit_data = self.get_kube_audit_data(am)
|
kubernetes_audit_data = self.get_kube_audit_data(am)
|
||||||
|
|
||||||
for name in ['subcloud1', 'subcloud2']:
|
subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
|
||||||
|
base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
|
||||||
|
for name, region in subclouds.items():
|
||||||
# return different kube versions in the subclouds
|
# return different kube versions in the subclouds
|
||||||
self.kube_sysinv_client.get_kube_versions.return_value = [
|
self.kube_sysinv_client.get_kube_versions.return_value = [
|
||||||
FakeKubeVersion(version=UPGRADED_KUBE_VERSION),
|
FakeKubeVersion(version=UPGRADED_KUBE_VERSION),
|
||||||
]
|
]
|
||||||
audit.subcloud_kubernetes_audit(name, kubernetes_audit_data)
|
audit.subcloud_kubernetes_audit(name, region, kubernetes_audit_data)
|
||||||
expected_calls = [
|
expected_calls = [
|
||||||
mock.call(mock.ANY,
|
mock.call(mock.ANY,
|
||||||
subcloud_name=name,
|
subcloud_name=name,
|
||||||
|
subcloud_region=region,
|
||||||
endpoint_type=dccommon_consts.ENDPOINT_TYPE_KUBERNETES,
|
endpoint_type=dccommon_consts.ENDPOINT_TYPE_KUBERNETES,
|
||||||
sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC)]
|
sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC)]
|
||||||
self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
|
self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
|
||||||
@@ -247,15 +255,18 @@ class TestKubernetesAudit(base.DCManagerTestCase):
         ]
         kubernetes_audit_data = self.get_kube_audit_data(am)

-        for name in ['subcloud1', 'subcloud2']:
+        subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
+                     base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
+        for name, region in subclouds.items():
             # return same kube versions in the subclouds
             self.kube_sysinv_client.get_kube_versions.return_value = [
                 FakeKubeVersion(version=UPGRADED_KUBE_VERSION),
             ]
-            audit.subcloud_kubernetes_audit(name, kubernetes_audit_data)
+            audit.subcloud_kubernetes_audit(name, region, kubernetes_audit_data)
             expected_calls = [
                 mock.call(mock.ANY,
                           subcloud_name=name,
+                          subcloud_region=region,
                           endpoint_type=dccommon_consts.ENDPOINT_TYPE_KUBERNETES,
                           sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)]
             self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
@@ -282,15 +293,18 @@ class TestKubernetesAudit(base.DCManagerTestCase):
         ]
         kubernetes_audit_data = self.get_kube_audit_data(am)

-        for name in ['subcloud1', 'subcloud2']:
+        subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
+                     base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
+        for name, region in subclouds.items():
             # return same kube versions in the subclouds
             self.kube_sysinv_client.get_kube_versions.return_value = [
                 FakeKubeVersion(version=UPGRADED_KUBE_VERSION),
             ]
-            audit.subcloud_kubernetes_audit(name, kubernetes_audit_data)
+            audit.subcloud_kubernetes_audit(name, region, kubernetes_audit_data)
             expected_calls = [
                 mock.call(mock.ANY,
                           subcloud_name=name,
+                          subcloud_region=region,
                           endpoint_type=dccommon_consts.ENDPOINT_TYPE_KUBERNETES,
                           sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC)]
             self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
@@ -1,4 +1,4 @@
-# Copyright (c) 2017-2022 Wind River Systems, Inc.
+# Copyright (c) 2017-2023 Wind River Systems, Inc.
 # Licensed under the Apache License, Version 2.0 (the "License"); you may
 # not use this file except in compliance with the License. You may obtain
 # a copy of the License at
@@ -27,7 +27,6 @@ from dcmanager.audit import subcloud_audit_manager
 from dcmanager.tests import base
 from dcmanager.tests import utils

-
 CONF = cfg.CONF


@@ -85,7 +84,8 @@ class FakePatchingClientInSync(object):
                              'repostate': 'Applied',
                              'patchstate': 'Applied'},
                     }
-        elif self.region in ['subcloud1', 'subcloud2']:
+        elif self.region in [base.SUBCLOUD_1['region_name'],
+                             base.SUBCLOUD_2['region_name']]:
             return {'DC.1': {'sw_version': '17.07',
                              'repostate': 'Applied',
                              'patchstate': 'Applied'},
@@ -117,25 +117,25 @@ class FakePatchingClientOutOfSync(object):
                     'DC.2': {'sw_version': '17.07',
                              'repostate': 'Applied',
                              'patchstate': 'Applied'}}
-        elif self.region == 'subcloud1':
+        elif self.region == base.SUBCLOUD_1['region_name']:
             return {'DC.1': {'sw_version': '17.07',
                              'repostate': 'Applied',
                              'patchstate': 'Applied'},
                     'DC.2': {'sw_version': '17.07',
                              'repostate': 'Available',
                              'patchstate': 'Available'}}
-        elif self.region == 'subcloud2':
+        elif self.region == base.SUBCLOUD_2['region_name']:
             return {'DC.1': {'sw_version': '17.07',
                              'repostate': 'Applied',
                              'patchstate': 'Applied'}}
-        elif self.region == 'subcloud3':
+        elif self.region == base.SUBCLOUD_3['region_name']:
             return {'DC.1': {'sw_version': '17.07',
                              'repostate': 'Applied',
                              'patchstate': 'Applied'},
                     'DC.2': {'sw_version': '17.07',
                              'repostate': 'Applied',
                              'patchstate': 'Applied'}}
-        elif self.region == 'subcloud4':
+        elif self.region == base.SUBCLOUD_4['region_name']:
             return {'DC.1': {'sw_version': '17.07',
                              'repostate': 'Applied',
                              'patchstate': 'Applied'},
@@ -219,7 +219,7 @@ class FakeSysinvClientOneLoadUnmatchedSoftwareVersion(object):
         return self.upgrades

     def get_system(self):
-        if self.region == 'subcloud2':
+        if self.region == base.SUBCLOUD_2['region_name']:
             return System('17.06')
         else:
             return self.system
@@ -238,7 +238,7 @@ class FakeSysinvClientOneLoadUpgradeInProgress(object):
         return self.loads

     def get_upgrades(self):
-        if self.region == 'subcloud2':
+        if self.region == base.SUBCLOUD_2['region_name']:
             return [Upgrade('started')]
         else:
             return self.upgrades
@@ -302,15 +302,19 @@ class TestPatchAudit(base.DCManagerTestCase):
         do_load_audit = True
         patch_audit_data = self.get_patch_audit_data(am)

-        for name in ['subcloud1', 'subcloud2']:
-            pm.subcloud_patch_audit(name, patch_audit_data, do_load_audit)
+        subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
+                     base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
+        for name, region in subclouds.items():
+            pm.subcloud_patch_audit(name, region, patch_audit_data, do_load_audit)
             expected_calls = [
                 mock.call(mock.ANY,
                           subcloud_name=name,
+                          subcloud_region=region,
                           endpoint_type=dccommon_consts.ENDPOINT_TYPE_PATCHING,
                           sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC),
                 mock.call(mock.ANY,
                           subcloud_name=name,
+                          subcloud_region=region,
                           endpoint_type=dccommon_consts.ENDPOINT_TYPE_LOAD,
                           sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)]
             self.fake_dcmanager_state_api.update_subcloud_endpoint_status. \
@@ -336,40 +340,52 @@ class TestPatchAudit(base.DCManagerTestCase):
         do_load_audit = True
         patch_audit_data = self.get_patch_audit_data(am)

-        for name in ['subcloud1', 'subcloud2', 'subcloud3', 'subcloud4']:
-            pm.subcloud_patch_audit(name, patch_audit_data, do_load_audit)
+        subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
+                     base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name'],
+                     base.SUBCLOUD_3['name']: base.SUBCLOUD_3['region_name'],
+                     base.SUBCLOUD_4['name']: base.SUBCLOUD_4['region_name']}
+        for name, region in subclouds.items():
+            pm.subcloud_patch_audit(name, region, patch_audit_data, do_load_audit)

         expected_calls = [
             mock.call(mock.ANY,
-                      subcloud_name='subcloud1',
+                      subcloud_name=base.SUBCLOUD_1['name'],
+                      subcloud_region=base.SUBCLOUD_1['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_PATCHING,
                       sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC),
             mock.call(mock.ANY,
-                      subcloud_name='subcloud1',
+                      subcloud_name=base.SUBCLOUD_1['name'],
+                      subcloud_region=base.SUBCLOUD_1['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_LOAD,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC),
             mock.call(mock.ANY,
-                      subcloud_name='subcloud2',
+                      subcloud_name=base.SUBCLOUD_2['name'],
+                      subcloud_region=base.SUBCLOUD_2['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_PATCHING,
                       sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC),
             mock.call(mock.ANY,
-                      subcloud_name='subcloud2',
+                      subcloud_name=base.SUBCLOUD_2['name'],
+                      subcloud_region=base.SUBCLOUD_2['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_LOAD,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC),
             mock.call(mock.ANY,
-                      subcloud_name='subcloud3',
+                      subcloud_name=base.SUBCLOUD_3['name'],
+                      subcloud_region=base.SUBCLOUD_3['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_PATCHING,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC),
             mock.call(mock.ANY,
-                      subcloud_name='subcloud3',
+                      subcloud_name=base.SUBCLOUD_3['name'],
+                      subcloud_region=base.SUBCLOUD_3['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_LOAD,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC),
             mock.call(mock.ANY,
-                      subcloud_name='subcloud4',
+                      subcloud_name=base.SUBCLOUD_4['name'],
+                      subcloud_region=base.SUBCLOUD_4['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_PATCHING,
                       sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC),
             mock.call(mock.ANY,
-                      subcloud_name='subcloud4',
+                      subcloud_name=base.SUBCLOUD_4['name'],
+                      subcloud_region=base.SUBCLOUD_4['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_LOAD,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC),
         ]
@@ -397,15 +413,19 @@ class TestPatchAudit(base.DCManagerTestCase):
         do_load_audit = True
         patch_audit_data = self.get_patch_audit_data(am)

-        for name in ['subcloud1', 'subcloud2']:
-            pm.subcloud_patch_audit(name, patch_audit_data, do_load_audit)
+        subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
+                     base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
+        for name, region in subclouds.items():
+            pm.subcloud_patch_audit(name, region, patch_audit_data, do_load_audit)
             expected_calls = [
                 mock.call(mock.ANY,
                           subcloud_name=name,
+                          subcloud_region=region,
                           endpoint_type=dccommon_consts.ENDPOINT_TYPE_PATCHING,
                           sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC),
                 mock.call(mock.ANY,
                           subcloud_name=name,
+                          subcloud_region=region,
                           endpoint_type=dccommon_consts.ENDPOINT_TYPE_LOAD,
                           sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)]
             self.fake_dcmanager_state_api.update_subcloud_endpoint_status.\
@@ -431,24 +451,30 @@ class TestPatchAudit(base.DCManagerTestCase):
         do_load_audit = True
         patch_audit_data = self.get_patch_audit_data(am)

-        for name in ['subcloud1', 'subcloud2']:
-            pm.subcloud_patch_audit(name, patch_audit_data, do_load_audit)
+        subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
+                     base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
+        for name, region in subclouds.items():
+            pm.subcloud_patch_audit(name, region, patch_audit_data, do_load_audit)

         expected_calls = [
             mock.call(mock.ANY,
-                      subcloud_name='subcloud1',
+                      subcloud_name=base.SUBCLOUD_1['name'],
+                      subcloud_region=base.SUBCLOUD_1['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_PATCHING,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC),
             mock.call(mock.ANY,
-                      subcloud_name='subcloud1',
+                      subcloud_name=base.SUBCLOUD_1['name'],
+                      subcloud_region=base.SUBCLOUD_1['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_LOAD,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC),
             mock.call(mock.ANY,
-                      subcloud_name='subcloud2',
+                      subcloud_name=base.SUBCLOUD_2['name'],
+                      subcloud_region=base.SUBCLOUD_2['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_PATCHING,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC),
             mock.call(mock.ANY,
-                      subcloud_name='subcloud2',
+                      subcloud_name=base.SUBCLOUD_2['name'],
+                      subcloud_region=base.SUBCLOUD_2['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_LOAD,
                       sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC),
         ]
@@ -475,24 +501,30 @@ class TestPatchAudit(base.DCManagerTestCase):
         do_load_audit = True
         patch_audit_data = self.get_patch_audit_data(am)

-        for name in ['subcloud1', 'subcloud2']:
-            pm.subcloud_patch_audit(name, patch_audit_data, do_load_audit)
+        subclouds = {base.SUBCLOUD_1['name']: base.SUBCLOUD_1['region_name'],
+                     base.SUBCLOUD_2['name']: base.SUBCLOUD_2['region_name']}
+        for name, region in subclouds.items():
+            pm.subcloud_patch_audit(name, region, patch_audit_data, do_load_audit)

        expected_calls = [
             mock.call(mock.ANY,
-                      subcloud_name='subcloud1',
+                      subcloud_name=base.SUBCLOUD_1['name'],
+                      subcloud_region=base.SUBCLOUD_1['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_PATCHING,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC),
             mock.call(mock.ANY,
-                      subcloud_name='subcloud1',
+                      subcloud_name=base.SUBCLOUD_1['name'],
+                      subcloud_region=base.SUBCLOUD_1['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_LOAD,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC),
             mock.call(mock.ANY,
-                      subcloud_name='subcloud2',
+                      subcloud_name=base.SUBCLOUD_2['name'],
+                      subcloud_region=base.SUBCLOUD_2['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_PATCHING,
                       sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC),
             mock.call(mock.ANY,
-                      subcloud_name='subcloud2',
+                      subcloud_name=base.SUBCLOUD_2['name'],
+                      subcloud_region=base.SUBCLOUD_2['region_name'],
                       endpoint_type=dccommon_consts.ENDPOINT_TYPE_LOAD,
                       sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC),
         ]
@@ -276,6 +276,7 @@ class TestAuditManager(base.DCManagerTestCase):
         'systemcontroller_gateway_ip': "192.168.204.101",
         'deploy_status': "not-deployed",
         'error_description': 'No errors present',
+        'region_name': base.SUBCLOUD_1['region_name'],
         'openstack_installed': False,
         'group_id': 1,
     }
@@ -370,6 +370,7 @@ class TestAuditWorkerManager(base.DCManagerTestCase):
         'systemcontroller_gateway_ip': "192.168.204.101",
         'deploy_status': "not-deployed",
         'error_description': 'No errors present',
+        'region_name': base.SUBCLOUD_1['region_name'],
         'openstack_installed': False,
         'group_id': 1,
     }
@@ -429,8 +430,8 @@ class TestAuditWorkerManager(base.DCManagerTestCase):

         # Verify the subcloud was set to online
         self.fake_dcmanager_state_api.update_subcloud_availability.assert_called_with(
-            mock.ANY, subcloud.name, dccommon_consts.AVAILABILITY_ONLINE,
-            False, 0)
+            mock.ANY, subcloud.name, subcloud.region_name,
+            dccommon_consts.AVAILABILITY_ONLINE, False, 0)

         # Verify the _update_subcloud_audit_fail_count is not called
         with mock.patch.object(wm, '_update_subcloud_audit_fail_count') as \
@@ -447,19 +448,19 @@ class TestAuditWorkerManager(base.DCManagerTestCase):

         # Verify patch audit is called
         self.fake_patch_audit.subcloud_patch_audit.assert_called_with(
-            subcloud.name, patch_audit_data, do_load_audit)
+            subcloud.name, subcloud.region_name, patch_audit_data, do_load_audit)

         # Verify firmware audit is called
         self.fake_firmware_audit.subcloud_firmware_audit.assert_called_with(
-            subcloud.name, firmware_audit_data)
+            subcloud.name, subcloud.region_name, firmware_audit_data)

         # Verify kubernetes audit is called
         self.fake_kubernetes_audit.subcloud_kubernetes_audit.assert_called_with(
-            subcloud.name, kubernetes_audit_data)
+            subcloud.name, subcloud.region_name, kubernetes_audit_data)

         # Verify kube rootca update audit is called
         self.fake_kube_rootca_update_audit.subcloud_audit.assert_called_with(
-            subcloud.name, kube_rootca_update_audit_data)
+            subcloud.name, subcloud.region_name, kube_rootca_update_audit_data)

     def test_audit_subcloud_online_first_identity_sync_not_complete(self):

@@ -506,8 +507,8 @@ class TestAuditWorkerManager(base.DCManagerTestCase):

         # Verify the subcloud was set to online
         self.fake_dcmanager_state_api.update_subcloud_availability.assert_called_with(
-            mock.ANY, subcloud.name, dccommon_consts.AVAILABILITY_ONLINE,
-            False, 0)
+            mock.ANY, subcloud.name, subcloud.region_name,
+            dccommon_consts.AVAILABILITY_ONLINE, False, 0)

         # Verify the _update_subcloud_audit_fail_count is not called
         with mock.patch.object(wm, '_update_subcloud_audit_fail_count') as \
@@ -573,8 +574,8 @@ class TestAuditWorkerManager(base.DCManagerTestCase):

         # Verify the subcloud was set to online
         self.fake_dcmanager_state_api.update_subcloud_availability.assert_called_with(
-            mock.ANY, subcloud.name, dccommon_consts.AVAILABILITY_ONLINE,
-            False, 0)
+            mock.ANY, subcloud.name, subcloud.region_name,
+            dccommon_consts.AVAILABILITY_ONLINE, False, 0)

         # Verify the _update_subcloud_audit_fail_count is not called
         with mock.patch.object(wm, '_update_subcloud_audit_fail_count') as \
@@ -669,8 +670,8 @@ class TestAuditWorkerManager(base.DCManagerTestCase):

         # Verify the subcloud state was updated even though no change
         self.fake_dcmanager_state_api.update_subcloud_availability.assert_called_with(
-            mock.ANY, subcloud.name, dccommon_consts.AVAILABILITY_ONLINE,
-            True, None)
+            mock.ANY, subcloud.name, subcloud.region_name,
+            dccommon_consts.AVAILABILITY_ONLINE, True, None)

         # Verify the _update_subcloud_audit_fail_count is not called
         with mock.patch.object(wm, '_update_subcloud_audit_fail_count') as \
@@ -785,19 +786,19 @@ class TestAuditWorkerManager(base.DCManagerTestCase):

         # Verify patch audit is called only once
         self.fake_patch_audit.subcloud_patch_audit.assert_called_once_with(
-            subcloud.name, mock.ANY, True)
+            subcloud.name, subcloud.region_name, mock.ANY, True)

         # Verify firmware audit is only called once
         self.fake_firmware_audit.subcloud_firmware_audit.assert_called_once_with(
-            subcloud.name, mock.ANY)
+            subcloud.name, subcloud.region_name, mock.ANY)

         # Verify kubernetes audit is only called once
         self.fake_kubernetes_audit.subcloud_kubernetes_audit.assert_called_once_with(
-            subcloud.name, mock.ANY)
+            subcloud.name, subcloud.region_name, mock.ANY)

         # Verify kube rootca update audit is only called once
         self.fake_kube_rootca_update_audit.subcloud_audit.assert_called_once_with(
-            subcloud.name, mock.ANY)
+            subcloud.name, subcloud.region_name, mock.ANY)

     def test_audit_subcloud_offline_no_change(self):
         subcloud = self.create_subcloud_static(self.ctx, name='subcloud1')
@@ -1060,12 +1061,12 @@ class TestAuditWorkerManager(base.DCManagerTestCase):

         # Verify the openstack endpoints were removed
         self.fake_dcmanager_api.update_subcloud_sync_endpoint_type.\
-            assert_called_with(mock.ANY, 'subcloud1',
+            assert_called_with(mock.ANY, subcloud.region_name,
                                dccommon_consts.ENDPOINT_TYPES_LIST_OS, False)

         # Verify alarm update is called
         self.fake_alarm_aggr.update_alarm_summary.assert_called_once_with(
-            'subcloud1', self.fake_openstack_client.fm_client)
+            subcloud.name, self.fake_openstack_client.fm_client)

         # Verify patch audit is not called
         self.fake_patch_audit.subcloud_patch_audit.assert_not_called()
@@ -1122,7 +1123,7 @@ class TestAuditWorkerManager(base.DCManagerTestCase):

         # Verify the openstack endpoints were removed
         self.fake_dcmanager_api.update_subcloud_sync_endpoint_type.\
-            assert_called_with(mock.ANY, 'subcloud1',
+            assert_called_with(mock.ANY, subcloud.region_name,
                                dccommon_consts.ENDPOINT_TYPES_LIST_OS, False)

         # Verify alarm update is called
@@ -1195,7 +1196,7 @@ class TestAuditWorkerManager(base.DCManagerTestCase):

         # Verify patch audit is called
         self.fake_patch_audit.subcloud_patch_audit.assert_called_with(
-            subcloud.name, patch_audit_data, do_load_audit)
+            subcloud.name, subcloud.region_name, patch_audit_data, do_load_audit)

         # Verify the _update_subcloud_audit_fail_count is not called
         with mock.patch.object(wm, '_update_subcloud_audit_fail_count') as \
@@ -9,6 +9,7 @@ import base64
 from dcmanager.common import consts
 from dcmanager.db.sqlalchemy import api as db_api

+from dcmanager.tests import base
 from dcmanager.tests import utils

 FAKE_TENANT = utils.UUID1
@@ -33,6 +34,7 @@ FAKE_SUBCLOUD_DATA = {"id": FAKE_ID,
                      "systemcontroller_gateway_address": "192.168.204.101",
                      "deploy_status": consts.DEPLOY_STATE_DONE,
                      'error_description': consts.ERROR_DESC_EMPTY,
+                     'region_name': base.SUBCLOUD_1['region_name'],
                      "external_oam_subnet": "10.10.10.0/24",
                      "external_oam_gateway_address": "10.10.10.1",
                      "external_oam_floating_address": "10.10.10.12",
@@ -128,6 +130,7 @@ def create_fake_subcloud(ctxt, **kwargs):
         "systemcontroller_gateway_ip": "192.168.204.101",
         'deploy_status': consts.DEPLOY_STATE_DONE,
         'error_description': consts.ERROR_DESC_EMPTY,
+        'region_name': base.SUBCLOUD_1['region_name'],
         'openstack_installed': False,
         'group_id': 1,
         'data_install': 'data from install',
@@ -15,6 +15,7 @@
 import datetime

 from oslo_db import exception as db_exception
+from oslo_utils import uuidutils

 from dcmanager.common import exceptions as exception
 from dcmanager.db import api as api
@@ -40,6 +41,7 @@ class DBAPISubcloudAuditsTest(base.DCManagerTestCase):
         'systemcontroller_gateway_ip': "192.168.204.101",
         'deploy_status': "not-deployed",
         'error_description': 'No errors present',
+        'region_name': uuidutils.generate_uuid().replace("-", ""),
         'openstack_installed': False,
         'group_id': 1,
     }
@@ -57,6 +57,7 @@ class DBAPISubcloudTest(base.DCManagerTestCase):
         'systemcontroller_gateway_ip': "192.168.204.101",
         'deploy_status': "not-deployed",
         'error_description': 'No errors present',
+        'region_name': base.SUBCLOUD_1['region_name'],
         'openstack_installed': False,
         'group_id': 1,
     }
@@ -78,6 +79,7 @@
             'systemcontroller_gateway_address'],
         'deploy_status': "not-deployed",
         'error_description': 'No errors present',
+        'region_name': data['region_name'],
         'openstack_installed': False,
         'group_id': 1,
     }
@@ -143,19 +145,26 @@ class DBAPISubcloudTest(base.DCManagerTestCase):

     def test_create_multiple_subclouds(self):
         name1 = 'testname1'
+        region1 = base.SUBCLOUD_1['region_name']
         name2 = 'testname2'
+        region2 = base.SUBCLOUD_2['region_name']
         name3 = 'testname3'
-        subcloud = self.create_subcloud_static(self.ctx, name=name1)
+        region3 = base.SUBCLOUD_3['region_name']
+        subcloud = self.create_subcloud_static(self.ctx,
+                                               name=name1,
+                                               region_name=region1)
         self.assertIsNotNone(subcloud)

         subcloud2 = self.create_subcloud_static(self.ctx,
                                                 name=name2,
+                                                region_name=region2,
                                                 management_start_ip="2.3.4.6",
                                                 management_end_ip="2.3.4.7")
         self.assertIsNotNone(subcloud2)

         subcloud3 = self.create_subcloud_static(self.ctx,
                                                 name=name3,
+                                                region_name=region3,
                                                 management_start_ip="3.3.4.6",
                                                 management_end_ip="3.3.4.7")
         self.assertIsNotNone(subcloud3)
@@ -22,6 +22,7 @@ from dcmanager.manager import service
 from dcmanager.tests import base
 from dcmanager.tests import utils
 from oslo_config import cfg
+from oslo_utils import uuidutils

 CONF = cfg.CONF
 FAKE_USER = utils.UUID1
@@ -76,9 +77,11 @@ class TestDCManagerService(base.DCManagerTestCase):

     @mock.patch.object(service, 'SubcloudManager')
     def test_add_subcloud(self, mock_subcloud_manager):
+        payload = {'name': 'testname',
+                   'region_name': uuidutils.generate_uuid().replace("-", "")}
         self.service_obj.init_managers()
         self.service_obj.add_subcloud(
-            self.context, subcloud_id=1, payload={'name': 'testname'})
+            self.context, subcloud_id=1, payload=payload)
         mock_subcloud_manager().add_subcloud.\
             assert_called_once_with(self.context, 1, mock.ANY)

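Aside: the `region_name` values seeded throughout these tests follow the dash-free UUID format that dcmanager now generates for newly created subclouds. A minimal stand-alone sketch of that pattern, using the stdlib `uuid` module in place of `oslo_utils.uuidutils` (which wraps it); the helper name is illustrative only:

```python
import uuid

def generate_region_name() -> str:
    # Dash-free UUID4, mirroring the uuidutils.generate_uuid().replace("-", "")
    # expression the tests seed into the 'region_name' field. The region name
    # is fixed at creation time, so the subcloud can be renamed later without
    # touching any OpenStack endpoint references.
    return str(uuid.uuid4()).replace("-", "")

region = generate_region_name()
# A dash-free UUID is always 32 lowercase hex characters.
assert len(region) == 32 and all(c in "0123456789abcdef" for c in region)
```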
@@ -423,6 +423,7 @@ class TestSubcloudManager(base.DCManagerTestCase):
         "systemcontroller_gateway_ip": "192.168.204.101",
         'deploy_status': "not-deployed",
         'error_description': "No errors present",
+        'region_name': base.SUBCLOUD_1['region_name'],
         'openstack_installed': False,
         'group_id': 1,
         'data_install': 'data from install',
@@ -501,7 +502,8 @@ class TestSubcloudManager(base.DCManagerTestCase):
         values['deploy_status'] = consts.DEPLOY_STATE_NONE

         # dcmanager add_subcloud queries the data from the db
-        subcloud = self.create_subcloud_static(self.ctx, name=values['name'])
+        subcloud = self.create_subcloud_static(self.ctx, name=values['name'],
+                                               region_name=values['region_name'])
         values['id'] = subcloud.id

         mock_keystone_client().keystone_client = FakeKeystoneClient()
@@ -535,7 +537,8 @@ class TestSubcloudManager(base.DCManagerTestCase):
         values['deploy_status'] = consts.DEPLOY_STATE_NONE

         # dcmanager add_subcloud queries the data from the db
-        subcloud = self.create_subcloud_static(self.ctx, name=values['name'])
+        subcloud = self.create_subcloud_static(self.ctx, name=values['name'],
+                                               region_name=values['region_name'])
         values['id'] = subcloud.id

         mock_keystone_client.side_effect = FakeException('boom')
@@ -731,6 +734,7 @@ class TestSubcloudManager(base.DCManagerTestCase):

         # Create subcloud in DB
         subcloud = self.create_subcloud_static(self.ctx, name=payload['name'])
+        payload['region_name'] = subcloud.region_name

         # Mock return values
         mock_get_playbook_for_software_version.return_value = SW_VERSION
@@ -790,7 +794,8 @@ class TestSubcloudManager(base.DCManagerTestCase):
         sysadmin_password = values['sysadmin_password']

         # dcmanager add_subcloud queries the data from the db
-        subcloud = self.create_subcloud_static(self.ctx, name=values['name'])
+        subcloud = self.create_subcloud_static(self.ctx, name=values['name'],
+                                               region_name=values['region_name'])

         mock_keystone_client().keystone_client = FakeKeystoneClient()
         mock_keyring.get_password.return_value = sysadmin_password
@@ -809,6 +814,7 @@ class TestSubcloudManager(base.DCManagerTestCase):
         mock_run_playbook.assert_called_once()
         mock_compose_rehome_command.assert_called_once_with(
             values['name'],
+            values['region_name'],
             sm._get_ansible_filename(values['name'], consts.INVENTORY_FILE_POSTFIX),
             subcloud['software_version'])

@@ -836,7 +842,8 @@ class TestSubcloudManager(base.DCManagerTestCase):
         services = FAKE_SERVICES

         # dcmanager add_subcloud queries the data from the db
-        subcloud = self.create_subcloud_static(self.ctx, name=values['name'])
+        subcloud = self.create_subcloud_static(self.ctx, name=values['name'],
+                                               region_name=values['region_name'])

         self.fake_dcorch_api.add_subcloud.side_effect = FakeException('boom')
         mock_get_cached_regionone_data.return_value = FAKE_CACHED_REGIONONE_DATA
@@ -865,7 +872,8 @@ class TestSubcloudManager(base.DCManagerTestCase):
         services = FAKE_SERVICES

         # dcmanager add_subcloud queries the data from the db
-        subcloud = self.create_subcloud_static(self.ctx, name=values['name'])
+        subcloud = self.create_subcloud_static(self.ctx, name=values['name'],
+                                               region_name=values['region_name'])

         self.fake_dcorch_api.add_subcloud.side_effect = FakeException('boom')
         mock_get_cached_regionone_data.return_value = FAKE_CACHED_REGIONONE_DATA
@@ -933,7 +941,7 @@ class TestSubcloudManager(base.DCManagerTestCase):
             location="subcloud new location")

         fake_dcmanager_notification.subcloud_managed.assert_called_once_with(
-            self.ctx, subcloud.name)
+            self.ctx, subcloud.region_name)

         # Verify subcloud was updated with correct values
         updated_subcloud = db_api.subcloud_get_by_name(self.ctx, subcloud.name)
@@ -1063,7 +1071,7 @@ class TestSubcloudManager(base.DCManagerTestCase):
             data_install="install values")

         fake_dcmanager_cermon_api.subcloud_managed.assert_called_once_with(
-            self.ctx, subcloud.name)
+            self.ctx, subcloud.region_name)

         # Verify subcloud was updated with correct values
         updated_subcloud = db_api.subcloud_get_by_name(self.ctx, subcloud.name)
@@ -1190,7 +1198,7 @@ class TestSubcloudManager(base.DCManagerTestCase):
             group_id=2)

         fake_dcmanager_cermon_api.subcloud_managed.assert_called_once_with(
-            self.ctx, subcloud.name)
+            self.ctx, subcloud.region_name)

         # Verify subcloud was updated with correct values
         updated_subcloud = db_api.subcloud_get_by_name(self.ctx, subcloud.name)
@@ -1234,7 +1242,7 @@ class TestSubcloudManager(base.DCManagerTestCase):
                          dccommon_consts.ENDPOINT_TYPE_DC_CERT]:
             # Update
             ssm.update_subcloud_endpoint_status(
-                self.ctx, subcloud_name=subcloud.name,
+                self.ctx, subcloud_region=subcloud.region_name,
                 endpoint_type=endpoint)

             # Verify
@@ -1253,7 +1261,7 @@ class TestSubcloudManager(base.DCManagerTestCase):
                          dccommon_consts.ENDPOINT_TYPE_NFV,
                          dccommon_consts.ENDPOINT_TYPE_DC_CERT]:
             ssm.update_subcloud_endpoint_status(
-                self.ctx, subcloud_name=subcloud.name,
+                self.ctx, subcloud_region=subcloud.region_name,
                 endpoint_type=endpoint,
                 sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)

@@ -1267,7 +1275,7 @@ class TestSubcloudManager(base.DCManagerTestCase):
         # Attempt to update each status to be unknown for an offline/unmanaged
         # subcloud. This is allowed.
         ssm.update_subcloud_endpoint_status(
-            self.ctx, subcloud_name=subcloud.name,
+            self.ctx, subcloud_region=subcloud.region_name,
             endpoint_type=None,
             sync_status=dccommon_consts.SYNC_STATUS_UNKNOWN)

@@ -1286,7 +1294,7 @@ class TestSubcloudManager(base.DCManagerTestCase):
         # Attempt to update each status to be out-of-sync for an
         # offline/unmanaged subcloud. Exclude one endpoint. This is allowed.
         ssm.update_subcloud_endpoint_status(
-            self.ctx, subcloud_name=subcloud.name,
+            self.ctx, subcloud_region=subcloud.region_name,
             endpoint_type=None,
             sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC,
             ignore_endpoints=[dccommon_consts.ENDPOINT_TYPE_DC_CERT])
@@ -1328,7 +1336,7 @@ class TestSubcloudManager(base.DCManagerTestCase):
                          dccommon_consts.ENDPOINT_TYPE_FM,
                          dccommon_consts.ENDPOINT_TYPE_NFV]:
             ssm.update_subcloud_endpoint_status(
-                self.ctx, subcloud_name=subcloud.name,
+                self.ctx, subcloud_region=subcloud.region_name,
                 endpoint_type=endpoint,
                 sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)

@@ -1343,7 +1351,7 @@ class TestSubcloudManager(base.DCManagerTestCase):
         # online/unmanaged subcloud. This is allowed. Verify the change.
         endpoint = dccommon_consts.ENDPOINT_TYPE_DC_CERT
         ssm.update_subcloud_endpoint_status(
-            self.ctx, subcloud_name=subcloud.name,
+            self.ctx, subcloud_region=subcloud.region_name,
             endpoint_type=endpoint,
             sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)

@@ -1373,7 +1381,7 @@ class TestSubcloudManager(base.DCManagerTestCase):
                          dccommon_consts.ENDPOINT_TYPE_NFV,
                          dccommon_consts.ENDPOINT_TYPE_DC_CERT]:
             ssm.update_subcloud_endpoint_status(
-                self.ctx, subcloud_name=subcloud.name,
+                self.ctx, subcloud_region=subcloud.region_name,
                 endpoint_type=endpoint,
                 sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)

@@ -1393,11 +1401,11 @@ class TestSubcloudManager(base.DCManagerTestCase):
                          dccommon_consts.ENDPOINT_TYPE_NFV,
                          dccommon_consts.ENDPOINT_TYPE_DC_CERT]:
             ssm.update_subcloud_endpoint_status(
-                self.ctx, subcloud_name=subcloud.name,
+                self.ctx, subcloud_region=subcloud.region_name,
                 endpoint_type=endpoint,
                 sync_status=dccommon_consts.SYNC_STATUS_OUT_OF_SYNC)
         # Verify lock was called
-        mock_lock.assert_called_with(subcloud.name)
+        mock_lock.assert_called_with(subcloud.region_name)

         # Verify status was updated
         updated_subcloud_status = db_api.subcloud_status_get(
@@ -1436,7 +1444,7 @@ class TestSubcloudManager(base.DCManagerTestCase):
             self.assertIsNotNone(status)
             self.assertEqual(status.sync_status, dccommon_consts.SYNC_STATUS_UNKNOWN)

-        ssm.update_subcloud_availability(self.ctx, subcloud.name,
+        ssm.update_subcloud_availability(self.ctx, subcloud.region_name,
                                          dccommon_consts.AVAILABILITY_ONLINE)

         updated_subcloud = db_api.subcloud_get_by_name(self.ctx, 'subcloud1')
@@ -1445,13 +1453,14 @@ class TestSubcloudManager(base.DCManagerTestCase):
                                          dccommon_consts.AVAILABILITY_ONLINE)
         # Verify notifying dcorch
         self.fake_dcorch_api.update_subcloud_states.assert_called_once_with(
-            self.ctx, subcloud.name, updated_subcloud.management_state,
+            self.ctx, subcloud.region_name, updated_subcloud.management_state,
             dccommon_consts.AVAILABILITY_ONLINE)
         # Verify triggering audits
         self.fake_dcmanager_audit_api.trigger_subcloud_audits.\
             assert_called_once_with(self.ctx, subcloud.id)

-        fake_dcmanager_cermon_api.subcloud_online.assert_called_once_with(self.ctx, subcloud.name)
+        fake_dcmanager_cermon_api.subcloud_online.\
+            assert_called_once_with(self.ctx, subcloud.region_name)

     def test_update_subcloud_availability_go_online_unmanaged(self):
         # create a subcloud
@@ -1483,7 +1492,7 @@ class TestSubcloudManager(base.DCManagerTestCase):
             self.assertIsNotNone(status)
             self.assertEqual(status.sync_status, dccommon_consts.SYNC_STATUS_UNKNOWN)

-        ssm.update_subcloud_availability(self.ctx, subcloud.name,
+        ssm.update_subcloud_availability(self.ctx, subcloud.region_name,
                                          dccommon_consts.AVAILABILITY_ONLINE)

         updated_subcloud = db_api.subcloud_get_by_name(self.ctx, 'subcloud1')
@@ -1492,13 +1501,14 @@ class TestSubcloudManager(base.DCManagerTestCase):
                                          dccommon_consts.AVAILABILITY_ONLINE)
         # Verify notifying dcorch
         self.fake_dcorch_api.update_subcloud_states.assert_called_once_with(
-            self.ctx, subcloud.name, updated_subcloud.management_state,
+            self.ctx, subcloud.region_name, updated_subcloud.management_state,
             dccommon_consts.AVAILABILITY_ONLINE)
         # Verify triggering audits
         self.fake_dcmanager_audit_api.trigger_subcloud_audits.\
             assert_called_once_with(self.ctx, subcloud.id)

-        fake_dcmanager_cermon_api.subcloud_online.assert_called_once_with(self.ctx, subcloud.name)
+        fake_dcmanager_cermon_api.subcloud_online.\
+            assert_called_once_with(self.ctx, subcloud.region_name)

     def test_update_subcloud_availability_go_offline(self):
         subcloud = self.create_subcloud_static(self.ctx, name='subcloud1')
@@ -1520,7 +1530,7 @@ class TestSubcloudManager(base.DCManagerTestCase):
             db_api.subcloud_status_create(
                 self.ctx, subcloud.id, endpoint)
             ssm.update_subcloud_endpoint_status(
-                self.ctx, subcloud_name=subcloud.name,
+                self.ctx, subcloud_region=subcloud.region_name,
                 endpoint_type=endpoint,
                 sync_status=dccommon_consts.SYNC_STATUS_IN_SYNC)

@@ -1531,7 +1541,7 @@ class TestSubcloudManager(base.DCManagerTestCase):

         # Audit fails once
         audit_fail_count = 1
-        ssm.update_subcloud_availability(self.ctx, subcloud.name,
+        ssm.update_subcloud_availability(self.ctx, subcloud.region_name,
                                          availability_status=None,
                                          audit_fail_count=audit_fail_count)
         # Verify the subcloud availability was not updated
@@ -1546,7 +1556,7 @@ class TestSubcloudManager(base.DCManagerTestCase):

         # Audit fails again
         audit_fail_count = audit_fail_count + 1
-        ssm.update_subcloud_availability(self.ctx, subcloud.name,
+        ssm.update_subcloud_availability(self.ctx, subcloud.region_name,
                                          dccommon_consts.AVAILABILITY_OFFLINE,
                                          audit_fail_count=audit_fail_count)

@@ -1557,7 +1567,7 @@ class TestSubcloudManager(base.DCManagerTestCase):

         # Verify notifying dcorch
         self.fake_dcorch_api.update_subcloud_states.assert_called_once_with(
-            self.ctx, subcloud.name, updated_subcloud.management_state,
+            self.ctx, subcloud.region_name, updated_subcloud.management_state,
             dccommon_consts.AVAILABILITY_OFFLINE)

         # Verify all endpoint statuses set to unknown
@@ -1597,7 +1607,7 @@ class TestSubcloudManager(base.DCManagerTestCase):

         # Update identity to the original status
         ssm.update_subcloud_endpoint_status(
-            self.ctx, subcloud_name=subcloud.name,
+            self.ctx, subcloud_region=subcloud.region_name,
             endpoint_type=endpoint,
             sync_status=original_sync_status)

@@ -1607,7 +1617,7 @@ class TestSubcloudManager(base.DCManagerTestCase):

         # Update identity to new status and get the count of the trigger again
         ssm.update_subcloud_endpoint_status(
-            self.ctx, subcloud_name=subcloud.name,
+            self.ctx, subcloud_region=subcloud.region_name,
             endpoint_type=endpoint,
             sync_status=new_sync_status)
         new_trigger_subcloud_audits = \
@@ -1634,13 +1644,13 @@ class TestSubcloudManager(base.DCManagerTestCase):

         # Test openstack app installed
         openstack_installed = True
-        sm.update_subcloud_sync_endpoint_type(self.ctx, subcloud.name,
+        sm.update_subcloud_sync_endpoint_type(self.ctx, subcloud.region_name,
                                               endpoint_type_list,
                                               openstack_installed)

         # Verify notifying dcorch to add subcloud sync endpoint type
         self.fake_dcorch_api.add_subcloud_sync_endpoint_type.\
-            assert_called_once_with(self.ctx, subcloud.name,
+            assert_called_once_with(self.ctx, subcloud.region_name,
                                     endpoint_type_list)

         # Verify the subcloud status created for os endpoints
@@ -1657,12 +1667,12 @@ class TestSubcloudManager(base.DCManagerTestCase):

         # Test openstack app removed
         openstack_installed = False
-        sm.update_subcloud_sync_endpoint_type(self.ctx, subcloud.name,
+        sm.update_subcloud_sync_endpoint_type(self.ctx, subcloud.region_name,
                                               endpoint_type_list,
                                               openstack_installed)
         # Verify notifying dcorch to remove subcloud sync endpoint type
         self.fake_dcorch_api.remove_subcloud_sync_endpoint_type.\
-            assert_called_once_with(self.ctx, subcloud.name,
+            assert_called_once_with(self.ctx, subcloud.region_name,
                                     endpoint_type_list)

         # Verify the subcloud status is deleted for os endpoints
@@ -1703,8 +1713,11 @@ class TestSubcloudManager(base.DCManagerTestCase):
     def test_compose_bootstrap_command(self, mock_isfile):
         mock_isfile.return_value = True
         sm = subcloud_manager.SubcloudManager()
+        subcloud_name = base.SUBCLOUD_1['name']
+        subcloud_region = base.SUBCLOUD_1['region_name']
         bootstrap_command = sm.compose_bootstrap_command(
-            'subcloud1',
+            subcloud_name,
+            subcloud_region,
             f'{dccommon_consts.ANSIBLE_OVERRIDES_PATH}/subcloud1_inventory.yml',
             FAKE_PREVIOUS_SW_VERSION)
         self.assertEqual(
@@ -1715,8 +1728,9 @@ class TestSubcloudManager(base.DCManagerTestCase):
                 subcloud_manager.ANSIBLE_SUBCLOUD_PLAYBOOK,
                 FAKE_PREVIOUS_SW_VERSION),
             '-i', f'{dccommon_consts.ANSIBLE_OVERRIDES_PATH}/subcloud1_inventory.yml',
-            '--limit', 'subcloud1', '-e',
-            f"override_files_dir='{dccommon_consts.ANSIBLE_OVERRIDES_PATH}' region_name=subcloud1",
+            '--limit', '%s' % subcloud_name, '-e',
+            str("override_files_dir='%s' region_name=%s") %
+            (dccommon_consts.ANSIBLE_OVERRIDES_PATH, subcloud_region),
             '-e', "install_release_version=%s" % FAKE_PREVIOUS_SW_VERSION
             ]
         )
@@ -1746,8 +1760,12 @@ class TestSubcloudManager(base.DCManagerTestCase):
     def test_compose_rehome_command(self, mock_isfile):
         mock_isfile.return_value = True
         sm = subcloud_manager.SubcloudManager()
+        subcloud_name = base.SUBCLOUD_1['name']
+        subcloud_region = base.SUBCLOUD_1['region_name']
+
         rehome_command = sm.compose_rehome_command(
-            'subcloud1',
+            subcloud_name,
+            subcloud_region,
             f'{dccommon_consts.ANSIBLE_OVERRIDES_PATH}/subcloud1_inventory.yml',
             FAKE_PREVIOUS_SW_VERSION)
         self.assertEqual(
@@ -1758,10 +1776,10 @@ class TestSubcloudManager(base.DCManagerTestCase):
                 subcloud_manager.ANSIBLE_SUBCLOUD_REHOME_PLAYBOOK,
                 FAKE_PREVIOUS_SW_VERSION),
             '-i', f'{dccommon_consts.ANSIBLE_OVERRIDES_PATH}/subcloud1_inventory.yml',
-            '--limit', 'subcloud1',
+            '--limit', subcloud_name,
             '--timeout', subcloud_manager.REHOME_PLAYBOOK_TIMEOUT,
-            '-e',
-            f"override_files_dir='{dccommon_consts.ANSIBLE_OVERRIDES_PATH}' region_name=subcloud1"
+            '-e', str("override_files_dir='%s' region_name=%s") %
+            (dccommon_consts.ANSIBLE_OVERRIDES_PATH, subcloud_region)
             ]
         )

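Aside: the compose-command hunks above encode the core of this change — the mutable subcloud *name* selects the ansible inventory file and `--limit` target, while the immutable *region name* only travels as the `region_name` extra-var. A simplified stand-alone sketch of that argument-list shape (the helper name and `ANSIBLE_OVERRIDES_PATH` value are illustrative assumptions, not the real dcmanager constants):

```python
# Assumed placeholder path, for illustration only.
ANSIBLE_OVERRIDES_PATH = "/opt/dc-vault/ansible"

def compose_command(playbook, subcloud_name, subcloud_region):
    # The subcloud name drives the inventory filename and the --limit target;
    # the region name appears only in the extra-vars, so renaming a subcloud
    # never changes which OpenStack region the playbook configures.
    inventory = "%s/%s_inventory.yml" % (ANSIBLE_OVERRIDES_PATH, subcloud_name)
    return [
        "ansible-playbook", playbook,
        "-i", inventory,
        "--limit", subcloud_name,
        "-e", "override_files_dir='%s' region_name=%s" %
              (ANSIBLE_OVERRIDES_PATH, subcloud_region),
    ]

cmd = compose_command("bootstrap.yml", "subcloud1", "abcd1234")
assert cmd[cmd.index("--limit") + 1] == "subcloud1"
assert cmd[-1].endswith("region_name=abcd1234")
```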
@@ -1823,39 +1841,48 @@ class TestSubcloudManager(base.DCManagerTestCase):
     def test_handle_subcloud_operations_in_progress(self):
         subcloud1 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud1',
+            name=base.SUBCLOUD_1['name'],
+            region_name=base.SUBCLOUD_1['region_name'],
             deploy_status=consts.DEPLOY_STATE_PRE_DEPLOY)
         subcloud2 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud2',
+            name=base.SUBCLOUD_2['name'],
+            region_name=base.SUBCLOUD_2['region_name'],
             deploy_status=consts.DEPLOY_STATE_PRE_INSTALL)
         subcloud3 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud3',
+            name=base.SUBCLOUD_3['name'],
+            region_name=base.SUBCLOUD_3['region_name'],
             deploy_status=consts.DEPLOY_STATE_INSTALLING)
         subcloud4 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud4',
+            name=base.SUBCLOUD_4['name'],
+            region_name=base.SUBCLOUD_4['region_name'],
             deploy_status=consts.DEPLOY_STATE_BOOTSTRAPPING)
         subcloud5 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud5',
+            name=base.SUBCLOUD_5['name'],
+            region_name=base.SUBCLOUD_5['region_name'],
             deploy_status=consts.DEPLOY_STATE_DEPLOYING)
         subcloud6 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud6',
+            name=base.SUBCLOUD_6['name'],
+            region_name=base.SUBCLOUD_6['region_name'],
             deploy_status=consts.DEPLOY_STATE_MIGRATING_DATA)
         subcloud7 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud7',
+            name=base.SUBCLOUD_7['name'],
+            region_name=base.SUBCLOUD_7['region_name'],
             deploy_status=consts.DEPLOY_STATE_PRE_RESTORE)
         subcloud8 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud8',
+            name=base.SUBCLOUD_8['name'],
+            region_name=base.SUBCLOUD_8['region_name'],
             deploy_status=consts.DEPLOY_STATE_RESTORING)
         subcloud9 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud9',
+            name=base.SUBCLOUD_9['name'],
+            region_name=base.SUBCLOUD_9['region_name'],
             deploy_status=consts.DEPLOY_STATE_NONE)
         subcloud10 = self.create_subcloud_static(
             self.ctx,
@@ -1940,47 +1967,58 @@ class TestSubcloudManager(base.DCManagerTestCase):
     def test_handle_completed_subcloud_operations(self):
         subcloud1 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud1',
+            name=base.SUBCLOUD_1['name'],
+            region_name=base.SUBCLOUD_1['region_name'],
             deploy_status=consts.DEPLOY_STATE_CREATE_FAILED)
         subcloud2 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud2',
+            name=base.SUBCLOUD_2['name'],
+            region_name=base.SUBCLOUD_2['region_name'],
             deploy_status=consts.DEPLOY_STATE_PRE_INSTALL_FAILED)
         subcloud3 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud3',
+            name=base.SUBCLOUD_3['name'],
+            region_name=base.SUBCLOUD_3['region_name'],
             deploy_status=consts.DEPLOY_STATE_INSTALL_FAILED)
         subcloud4 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud4',
+            name=base.SUBCLOUD_4['name'],
+            region_name=base.SUBCLOUD_4['region_name'],
             deploy_status=consts.DEPLOY_STATE_INSTALLED)
         subcloud5 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud5',
+            name=base.SUBCLOUD_5['name'],
+            region_name=base.SUBCLOUD_5['region_name'],
             deploy_status=consts.DEPLOY_STATE_BOOTSTRAP_FAILED)
         subcloud6 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud6',
+            name=base.SUBCLOUD_6['name'],
+            region_name=base.SUBCLOUD_6['region_name'],
             deploy_status=consts.DEPLOY_STATE_CONFIG_FAILED)
         subcloud7 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud7',
+            name=base.SUBCLOUD_7['name'],
+            region_name=base.SUBCLOUD_7['region_name'],
             deploy_status=consts.DEPLOY_STATE_DATA_MIGRATION_FAILED)
         subcloud8 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud8',
+            name=base.SUBCLOUD_8['name'],
+            region_name=base.SUBCLOUD_8['region_name'],
             deploy_status=consts.DEPLOY_STATE_MIGRATED)
         subcloud9 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud9',
+            name=base.SUBCLOUD_9['name'],
+            region_name=base.SUBCLOUD_9['region_name'],
             deploy_status=consts.DEPLOY_STATE_RESTORE_PREP_FAILED)
         subcloud10 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud10',
+            name=base.SUBCLOUD_10['name'],
+            region_name=base.SUBCLOUD_10['region_name'],
             deploy_status=consts.DEPLOY_STATE_RESTORE_FAILED)
         subcloud11 = self.create_subcloud_static(
             self.ctx,
-            name='subcloud11',
+            name=base.SUBCLOUD_11['name'],
+            region_name=base.SUBCLOUD_11['region_name'],
             deploy_status=consts.DEPLOY_STATE_DONE)
         subcloud12 = self.create_subcloud_static(
             self.ctx,
@@ -2792,7 +2830,8 @@ class TestSubcloudManager(base.DCManagerTestCase):
         values = {
             'name': 'TestSubcloud',
             'sysadmin_password': '123',
-            'secondary': 'true'
+            'secondary': 'true',
+            'region_name': '2ec93dfb654846909efe61d1b39dd2ce'
         }

         # Create an instance of SubcloudManager
@@ -8,6 +8,7 @@ import mock
 from dccommon import consts as dccommon_consts
 from dcmanager.common import consts
 from dcmanager.db.sqlalchemy import api as db_api
+from dcmanager.tests import base
 from dcmanager.tests.unit.common import fake_strategy
 from dcmanager.tests.unit.common import fake_subcloud
 from dcmanager.tests.unit.orchestrator.states.fakes import FakeAlarm
@@ -496,7 +497,8 @@ class TestSwUpgradePreCheckSimplexStage(TestSwUpgradePreCheckStage):
         # and no data install values
         self.subcloud = fake_subcloud.create_fake_subcloud(
             self.ctx,
-            name="subcloud2",
+            name=base.SUBCLOUD_2['name'],
+            region_name=base.SUBCLOUD_2['region_name'],
             data_install=None
         )

@@ -580,7 +582,8 @@ class TestSwUpgradePreCheckSimplexStage(TestSwUpgradePreCheckStage):
         # availability status as "offline" and no data install values
         self.subcloud = fake_subcloud.create_fake_subcloud(
             self.ctx,
-            name="subcloud2",
+            name=base.SUBCLOUD_2['name'],
+            region_name=base.SUBCLOUD_2['region_name'],
             data_install=None,
             deploy_status=consts.DEPLOY_STATE_INSTALL_FAILED
         )
@@ -4,6 +4,7 @@
 # SPDX-License-Identifier: Apache-2.0
 #
 import mock
+from oslo_utils import uuidutils

 from dccommon import consts as dccommon_consts
 from dccommon.drivers.openstack import vim
@@ -39,6 +40,7 @@ class TestFwOrchThread(TestSwUpdate):
         "systemcontroller_gateway_ip": "192.168.204.101",
         'deploy_status': "not-deployed",
         'error_description': 'No errors present',
+        'region_name': uuidutils.generate_uuid().replace("-", ""),
         'openstack_installed': False,
         'group_id': group_id,
         'data_install': 'data from install',
@@ -17,6 +17,7 @@ import copy
 import mock

 from oslo_config import cfg
+from oslo_utils import uuidutils

 from dccommon import consts as dccommon_consts
 from dcmanager.common import consts
@@ -117,6 +118,7 @@ class TestSwUpdateManager(base.DCManagerTestCase):
         "systemcontroller_gateway_ip": "192.168.204.101",
         'deploy_status': "not-deployed",
         'error_description': 'No errors present',
+        'region_name': uuidutils.generate_uuid().replace("-", ""),
         'openstack_installed': False,
         'group_id': group_id,
         'data_install': 'data from install',
@@ -93,4 +93,5 @@ def create_subcloud_dict(data_list):
         'group_id': data_list[23],
         'deploy_status': data_list[24],
         'error_description': data_list[25],
-        'data_install': data_list[26]}
+        'region_name': data_list[26],
+        'data_install': data_list[27]}
@@ -377,12 +377,12 @@ def add_identity_filter(query, value,

     :return: Modified query.
     """
-    if strutils.is_int_like(value):
+    if use_region_name:
+        return query.filter_by(region_name=value)
+    elif strutils.is_int_like(value):
         return query.filter_by(id=value)
     elif uuidutils.is_uuid_like(value):
         return query.filter_by(uuid=value)
-    elif use_region_name:
-        return query.filter_by(region_name=value)
     elif use_resource_type:
         return query.filter_by(resource_type=value)
     else:
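The reordering in the hunk above matters: the new region names are dash-free UUID hex strings, which still parse as UUIDs, so if the `is_uuid_like(value)` branch ran first it would capture every region-name lookup and query the wrong column. A minimal stdlib-only sketch of that pitfall, using a simplified stand-in for `oslo_utils.uuidutils.is_uuid_like` (the real helper has the same accepting behavior for dash-free hex):

```python
import uuid


def is_uuid_like(val) -> bool:
    # Simplified stand-in for oslo_utils.uuidutils.is_uuid_like:
    # uuid.UUID() accepts both dashed and dash-free 32-char hex forms.
    try:
        uuid.UUID(val)
        return True
    except (ValueError, AttributeError, TypeError):
        return False


# A dash-free region name is still "uuid-like", so a filter chain that
# tests is_uuid_like() before the region-name branch would never reach
# the region_name lookup -- hence the region_name check must come first.
region = uuid.uuid4().hex
print(is_uuid_like(region))
```

This is why the commit moves `use_region_name` to the top of the chain rather than leaving it after the UUID test.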
@@ -87,6 +87,7 @@ class SyncThread(object):
         self.log_extra = {
             "instance": self.subcloud_name + ": "}
         self.dcmanager_state_rpc_client = dcmanager_rpc_client.SubcloudStateClient()
+        self.dcmanager_rpc_client = dcmanager_rpc_client.ManagerClient()

         self.sc_admin_session = None
         self.admin_session = None
@@ -298,15 +299,35 @@ class SyncThread(object):
             self.subcloud_name, sync_status, alarmable),
             extra=self.log_extra)

-        self.dcmanager_state_rpc_client.update_subcloud_endpoint_status(
-            self.ctxt, self.subcloud_name,
-            self.endpoint_type, sync_status,
-            alarmable=alarmable)
-
-        db_api.subcloud_sync_update(
-            self.ctxt, self.subcloud_name, self.endpoint_type,
-            values={'sync_status_reported': sync_status,
-                    'sync_status_report_time': timeutils.utcnow()})
+        try:
+            # This block is required to get the real subcloud name:
+            # dcorch uses the subcloud name as the region name.
+            # The region name cannot be changed, so at this point it
+            # is necessary to query the subcloud name as it is required
+            # for logging purposes.
+
+            # Save current subcloud name (region name from dcorch DB)
+            dcorch_subcloud_region = self.subcloud_name
+
+            # Get the subcloud name from dcmanager database supplying
+            # the dcorch region name
+            subcloud_name = self.dcmanager_rpc_client \
+                .get_subcloud_name_by_region_name(self.ctxt,
+                                                  dcorch_subcloud_region)
+
+            # Updates the endpoint status supplying the subcloud name and
+            # the region name
+            self.dcmanager_state_rpc_client.update_subcloud_endpoint_status(
+                self.ctxt, subcloud_name, dcorch_subcloud_region,
+                self.endpoint_type, sync_status,
+                alarmable=alarmable)
+
+            db_api.subcloud_sync_update(
+                self.ctxt, dcorch_subcloud_region, self.endpoint_type,
+                values={'sync_status_reported': sync_status,
+                        'sync_status_report_time': timeutils.utcnow()})
+        except Exception:
+            raise

     def sync(self, engine_id):
         LOG.debug("{}: starting sync routine".format(self.subcloud_name),
@@ -60,7 +60,7 @@ openstackdocs_auto_name = False
 #
 # This is also used if you do content translation via gettext catalogs.
 # Usually you set "language" from the command line for these cases.
-language = None
+language = 'en'

 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
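Several hunks above seed the test data with `uuidutils.generate_uuid().replace("-", "")`, the same UUID-based format the commit message describes for generated region names. A stdlib-only sketch of that format (assuming oslo's `generate_uuid()` returns the usual dashed UUID4 string, so `uuid.uuid4().hex` is the equivalent dash-free form):

```python
import uuid


def generate_region_name() -> str:
    """Return a unique region name in 32-character hex UUID format.

    Mirrors the diff's uuidutils.generate_uuid().replace("-", ""):
    the dashed UUID string with dashes stripped is exactly the .hex
    form of a uuid4, e.g. '2ec93dfb654846909efe61d1b39dd2ce'.
    """
    return uuid.uuid4().hex  # already dash-free


region = generate_region_name()
print(region)
```

Because the region name is generated once at subcloud creation and never changes, renaming a subcloud leaves every region-scoped endpoint reference intact.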