## Testing

To test csi-cloudscale in conjunction with Kubernetes, a suite of integration tests has been implemented.

To run this test suite, a Kubernetes cluster is required. For this purpose, a setup was prepared using
[k8test](https://github.com/cloudscale-ch/k8test).

> ⚠️ Running these tests yourself may incur unexpected costs and may result in data loss if run against a production account with live systems. Therefore, we strongly advise you to use a separate account for these tests.
> The Kubernetes cluster created is not production ready and should not be used for any purpose other than testing.

First, bootstrap the cluster:

    # Export your API Token obtained from http://control.cloudscale.ch
    export CLOUDSCALE_API_TOKEN="..."

    # See the script for options, sensible defaults apply
    ./helpers/bootstrap-cluster

    # Verify cluster setup and access
    export KUBECONFIG=$PWD/k8test/cluster/admin.conf
    kubectl get nodes -o wide
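
If the nodes are still coming up, you can block until they are Ready before proceeding. This uses plain kubectl against the kubeconfig exported above, with no further assumptions:

    # Wait until every node reports the Ready condition (up to 5 minutes)
    kubectl wait --for=condition=Ready nodes --all --timeout=300s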

You can **either** install the driver from your working directory (this assumes a `dev` image has been published, e.g. via `VERSION=dev make publish` as shown under Debugging below):

    # Install the driver using the dev image from the working directory
    # Prerequisite: ensure you have run `helm dependency build` as described in the main README file.
    helm install -g -n kube-system \
        --set controller.image.tag=dev \
        --set node.image.tag=dev \
        --set controller.image.pullPolicy=Always \
        --set node.image.pullPolicy=Always \
        ./charts/csi-cloudscale

**Or** you can install a released version:

    # List all released versions
    helm search repo csi-cloudscale/csi-cloudscale --versions
    # Install a specific chart version or the latest if --version is omitted
    helm install -n kube-system -g csi-cloudscale/csi-cloudscale [ --version v1.0.0 ]
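
Both commands above assume a Helm repository named `csi-cloudscale` has been configured. If it has not, add it first; the URL here is an assumption based on the usual GitHub Pages convention, so check the main README for the authoritative address:

    # Hypothetical chart repository URL; verify against the main README
    helm repo add csi-cloudscale https://cloudscale-ch.github.io/csi-cloudscale
    helm repo update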

Then execute the test suite:

    make test-integration
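
If the suite fails immediately, first confirm that the driver actually registered with the cluster. A minimal check; the `csi.cloudscale.ch` driver name and the pod name pattern are assumptions based on this chart's defaults:

    # The CSIDriver object is cluster-scoped; csi.cloudscale.ch should be listed
    kubectl get csidrivers
    # Controller and node pods should be Running in kube-system
    kubectl -n kube-system get pods | grep csi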

To get rid of the cluster:

    ./helpers/clean-up

## Debugging

If the suite does not pass, there are a good number of ways to debug.
You can just redeploy all csi pods and push a new version to Docker Hub:

    VERSION=dev make publish
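
Publishing alone does not restart pods that are already running. One way to force a redeploy, sketched under the assumption that the chart created its default `csi-cloudscale-controller` StatefulSet and `csi-cloudscale-node` DaemonSet (check the actual names with `kubectl -n kube-system get statefulsets,daemonsets`):

    # pullPolicy=Always was set at install time, so restarted pods pull the fresh dev image
    kubectl -n kube-system rollout restart statefulset/csi-cloudscale-controller
    kubectl -n kube-system rollout restart daemonset/csi-cloudscale-node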

> <https://1.1.1.1/k8s/clusters/c-xfmg6>
> kubectl get nodes   # get node names
> kubectl get --raw /k8s/clusters/{}/api/v1/nodes/{}/proxy/metrics | grep kubelet_vol
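
If the cluster is reached directly rather than through a Rancher proxy (as with the k8test cluster above), the same kubelet metrics should be available without the `/k8s/clusters/{}` prefix; a sketch:

    # Fetch volume metrics from the first node via the API server proxy
    NODE=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
    kubectl get --raw "/api/v1/nodes/$NODE/proxy/metrics" | grep kubelet_vol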