starlingx/nginx-ingress-controller-armada-app / python-k8sapp-nginx-ingress-controller / k8sapp_nginx_ingress_controller / requirements.txt (at commit ec70749e14)
4 lines · 82 B · Plaintext
Add B&R lifecycle hooks to the nginx-ingress-controller

The recent upversion of the nginx app
(https://review.opendev.org/c/starlingx/nginx-ingress-controller-armada-app/+/782326)
enabled the nginx admissionWebhook, and this introduced an issue in the
restore procedure.

The proposed solution is to use the lifecycle operator to delete the
nginx admissionWebhook before the backup. If we do this, the backup of
the etcd database will not contain the nginx webhook and the restore
will succeed.

Note that the solution implies deleting a resource of the nginx app.
Because of this, there are some procedural changes to the backup and
restore that the user must perform:

- After the backup completes, the following steps must be done:
  1. $ system helm-override-update nginx-ingress-controller ingress-nginx kube-system --set controller.admissionWebhooks.enabled=true
  2. Reapply the nginx app to restore the admissionWebhook:
     $ system application-apply nginx-ingress-controller

- After the whole restore procedure (i.e. after all the nodes are
  restored and unlocked, apps are in the applied state, and 'system
  restore-complete' was executed), the user must do the same steps as
  above to restore the nginx webhook:
  1. $ system helm-override-update nginx-ingress-controller ingress-nginx kube-system --set controller.admissionWebhooks.enabled=true
  2. $ system application-apply nginx-ingress-controller

Depends-On: I61156db05970aa03c96ddc8533fdd4f4a680b334
Depends-On: I0ebab45f4846cbcd25fecac6bf99195d9047eb8a
Depends-On: I648e940f8104307e111213afd511f8fca19e39ab

Closes-Bug: 1923185

Signed-off-by: Mihnea Saracin <Mihnea.Saracin@windriver.com>
Change-Id: I9ca56329cfa353e7938a9fd8e94c50295c6a0778
2021-04-09 18:31:29 +03:00
pbr>=2.0.0
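The lifecycle hook described in the B&R commit above boils down to
removing the webhook configuration from the cluster before the etcd
snapshot is taken. Below is a minimal, hypothetical sketch of that
cleanup step using the kubernetes Python client rather than the actual
sysinv lifecycle plugin; the resource name "ingress-nginx-admission" is
the upstream chart default and is an assumption here:

    # Hypothetical pre-backup cleanup approximating the lifecycle hook:
    # delete the nginx admission webhook configuration so it is not
    # captured in the etcd backup.
    from kubernetes import client, config

    WEBHOOK_NAME = "ingress-nginx-admission"  # assumed chart-default name

    def delete_nginx_admission_webhook():
        # Use config.load_incluster_config() instead when running on the cluster.
        config.load_kube_config()
        api = client.AdmissionregistrationV1Api()
        try:
            api.delete_validating_webhook_configuration(WEBHOOK_NAME)
            print("Deleted ValidatingWebhookConfiguration %s" % WEBHOOK_NAME)
        except client.exceptions.ApiException as exc:
            if exc.status == 404:
                print("Webhook configuration already absent; nothing to do")
            else:
                raise

    if __name__ == "__main__":
        delete_nginx_admission_webhook()

After the webhook is deleted and the backup is taken, the two manual
steps listed in the commit message (helm-override-update followed by
application-apply) recreate it.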
Fix zuul failures during setup

The ubuntu-jammy nodeset gets selected by default and is causing
problems during setup:

    Collecting cffi>=1.1
    Failed to build cffi

ubuntu-focal seems to work fine, so specify the nodeset to be focal to
resolve this. A file that is monitored by zuul needs to be updated in
order to trigger the failing zuul jobs.

In order to not require the legacy pip resolver, the requirements need
to be updated. The upper constraints are also updated. When the debian
upper constraints in the build-tools repo are updated for the
appropriate docker and kubernetes versions, the file in this repo can
be set back to empty.

Partial-Bug: 1994843

Signed-off-by: Al Bailey <al.bailey@windriver.com>
Change-Id: Ia76846f827e06a7de2908ae123566706b21a589a
2022-10-28 17:16:07 +00:00
PyYAML==5.3.1;python_version>="3.9"
PyYAML==3.1.3;python_version<"3.9"
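The two PyYAML pins above use PEP 508 environment markers, which pip
evaluates against the running interpreter so that exactly one pin
applies per Python version. A quick, illustrative way to check which
branch a given interpreter would select, using the packaging library
(a sketch, not part of this repo):

    # Evaluate the same environment marker pip uses when choosing
    # between the two PyYAML pins in requirements.txt.
    from packaging.markers import Marker

    marker = Marker('python_version >= "3.9"')
    if marker.evaluate():
        print('python_version >= "3.9": pip installs PyYAML==5.3.1')
    else:
        print('python_version < "3.9": pip installs the older PyYAML pin')

Because the markers are mutually exclusive, the two exact pins for the
same package never conflict, even under the stricter new pip resolver
mentioned in the commit message.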