ensure-kubernetes: Fix Jammy install, improve pod test

This updates the ensure-kubernetes testing to check that the pod is
actually running.  The old check was hiding some issues on Jammy where
the installation succeeded but the pod never became ready.

The essence of the problem seems to be that on Ubuntu Jammy the
containernetworking-plugins tools now come from Ubuntu's own native
packaging.  This native package places the networking plugins in a
different location from the ones shipped by the openSUSE kubic repo.

We need to update the plugin path for both our cri-o and docker jobs.
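As a quick illustration of the difference, a task along these lines (a
sketch only; "bridge" is just a representative CNI plugin binary) shows
which directory is populated on a given node:

  # Sketch: the native Jammy package installs the CNI plugins under
  # /usr/lib/cni, while the openSUSE kubic packages use /opt/cni/bin.
  - name: Check where the CNI plugins landed
    stat:
      path: "{{ item }}/bridge"
    loop:
      - /usr/lib/cni
      - /opt/cni/bin
    register: _cni_plugin_dirs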

For cri-o this is just an update to the config file, which is
separated out into the crio-Ubuntu-22.04 include file.
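With that in place, the relevant section of /etc/crio/crio.conf should
end up looking roughly like this (the YAML list in the task below is
rendered as a TOML-style array):

  [crio.network]
  plugin_dirs = ['/opt/cni/bin/', '/usr/lib/cni']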

For docker, things are a bit harder, because you now need the
cri-docker shim to use a docker runtime with kubernetes.  Per the note
inline, this shim setup has some hard-coded assumptions, which means we
need to override the override it writes (!).  This works, but it all
feels a bit fragile; we should probably reconsider our overall support
for the docker backend.

With ensure-kubernetes working now, we can revert the non-voting status
applied to these jobs in the earlier change
Id6ee7ed38fec254493a2abbfa076b9671c907c83.

Change-Id: I5f02f4e056a0e731d74d00ebafa96390c06175cf
Author: Ian Wienand, 2022-11-07 16:22:31 +11:00
Parent: 64a60ea377
Commit: 1e133ba51d
5 changed files with 119 additions and 41 deletions


@@ -0,0 +1,41 @@
- name: Add all repositories
include_role:
name: ensure-package-repositories
vars:
repositories_keys:
- url: "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_{{ ansible_distribution_version }}/Release.key"
- url: "https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.24/xUbuntu_{{ ansible_distribution_version }}/Release.key"
repositories_list:
- repo: "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_{{ ansible_distribution_version }}/ /"
- repo: "deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.24/xUbuntu_{{ ansible_distribution_version }}/ /"
- name: Install packages
package:
name:
- cri-o
- cri-o-runc
- containernetworking-plugins
- podman
- cri-tools
state: present
become: true
- name: Find networking plugins
  ini_file:
    path: /etc/crio/crio.conf
    section: crio.network
    option: plugin_dirs
    # the YAML list is rendered as a TOML-style array in crio.conf
    value:
      - '/opt/cni/bin/'
      - '/usr/lib/cni'
    mode: '0644'
  become: true
  register: _crio_conf_updated
# NOTE: we want to restart here rather than notify and do it later, so
# that we don't continue without the correct config.
- name: Restart crio to pickup changes # noqa no-handler
service:
name: crio
state: restarted
  become: true
when: _crio_conf_updated.changed
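If we ever want to assert the restart actually picked this up, a
follow-up check along these lines should work (a sketch only;
crio-status ships with cri-o and dumps the config the running daemon is
using):

  # Sketch: fail if the running crio does not know about the native
  # plugin directory.
  - name: Verify crio picked up the plugin dirs
    command: crio-status config
    become: true
    register: _crio_status
    failed_when: "'/usr/lib/cni' not in _crio_status.stdout"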


@@ -114,6 +114,34 @@
args:
executable: '/bin/bash'
# minikube has a hard-coded cri-docker setup step that writes out
# /etc/systemd/system/cri-docker.service.d/10-cni.conf
# which overrides the ExecStart with CNI arguments. This seems to
# be written to assume different packages than we have on Ubuntu
# Jammy -- containernetworking-plugins is a native package and is
# in /usr/lib, whereas the openSUSE kubic versions are in /opt.
# We thus add an 11-* config to override the override with
# something that works ... see
# https://github.com/kubernetes/minikube/issues/15320
- name: Correct override for native packages
when: ansible_distribution_release == 'jammy'
block:
- name: Make override dir
file:
state: directory
path: /etc/systemd/system/cri-docker.service.d
owner: root
group: root
mode: '0755'
- name: Override cri-docker
template:
src: 11-cri-docker-override.conf.j2
dest: /etc/systemd/system/cri-docker.service.d/11-cri-docker-override.conf
owner: root
group: root
mode: '0644'
- name: Ensure cri-dockerd running
  service:
    name: cri-docker
    state: started  # assumed completion; the hunk is truncated here
  become: true


@@ -0,0 +1,3 @@
[Service]
ExecStart=
ExecStart=/usr/local/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --cni-bin-dir=/usr/lib/cni --hairpin-mode=promiscuous-bridge
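The empty ExecStart= line is the standard systemd drop-in idiom: it
clears the ExecStart inherited from the packaged unit (and the 10-*
drop-in) before setting the replacement, since a non-oneshot service
may only have one ExecStart.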


@@ -1,6 +1,8 @@
- hosts: all
name: Post testing
tasks:
- name: Run functionality tests
block:
# The default account is known to take a while to appear; see
# https://github.com/kubernetes/kubernetes/issues/66689
- name: Ensure default account created
@@ -27,14 +29,22 @@
- name: Start pod
command: kubectl apply -f test-pod.yaml
- name: Ensure pod is running
shell: sleep 5; kubectl get pods
register: _get_pods_output
until: "'Running' in _get_pods_output.stdout"
retries: 3
delay: 5
always:
- name: Collect container logs
import_role:
name: collect-container-logs
- name: Collect kubernetes logs
import_role:
name: collect-kubernetes-logs
- hosts: all
tasks:
- name: Get minikube logs
become: true
shell: "/tmp/minikube logs > {{ ansible_user_dir }}/zuul-output/logs/minikube.txt"


@@ -294,9 +294,6 @@
- job:
name: zuul-jobs-test-registry-buildset-registry-k8s-docker
dependencies: zuul-jobs-test-registry-buildset-registry
description: |
Test a buildset registry with kubernetes and docker
@@ -322,9 +319,6 @@
- job:
name: zuul-jobs-test-registry-buildset-registry-k8s-crio
dependencies: zuul-jobs-test-registry-buildset-registry
description: |
Test a buildset registry with kubernetes and CRIO
@@ -640,6 +634,8 @@
- zuul-jobs-test-registry-docker-multiarch
- zuul-jobs-test-registry-podman
- zuul-jobs-test-registry-buildset-registry
- zuul-jobs-test-registry-buildset-registry-k8s-docker
- zuul-jobs-test-registry-buildset-registry-k8s-crio
- zuul-jobs-test-registry-buildset-registry-openshift-docker
- zuul-jobs-test-ensure-kubernetes-docker-ubuntu-bionic
- zuul-jobs-test-ensure-kubernetes-docker-ubuntu-focal