Commit f721639

Merge pull request #464 from akutz/bugfix/portable-yaml-gen
Fixes YAML generation issues with Docker
2 parents 5c29835 + 1c73f74 commit f721639

3 files changed: +53 -49 lines changed

docs/getting_started.md

Lines changed: 27 additions & 37 deletions
@@ -90,30 +90,25 @@ export CLUSTER_CIDR='100.96.0.0/11' # (optional) The cluster CIDR of the m
 EOF
 ```
 
-With the above environment variable file it is now possible to generate the manifests needed to bootstrap the management cluster. The following command uses Docker to run an image that has all of the necessary templates and tools to generate the YAML manifests. Please note that the example mounts the current directory as the location where the YAML will be generated. Additionally, the `envvars.txt` file created above is mounted inside the the image in order to provide the generation routine with its default values:
+With the above environment variable file it is now possible to generate the manifests needed to bootstrap the management cluster. The following command uses Docker to run an image that has all of the necessary templates and tools to generate the YAML manifests. Additionally, the `envvars.txt` file created above is mounted inside the image in order to provide the generation routine with its default values:
 
 ```shell
-# create the output directory for the management cluster manifests,
-# only required for Linux to work around permissions issues on volume mounts
-$ mkdir -p management-cluster
-
 $ docker run --rm \
-  --user "$(id -u):$(id -g)" \
-  -v "$(pwd)/management-cluster":/out \
-  -v "$(pwd)/envvars.txt":/out/envvars.txt:ro \
+  -v "$(pwd)":/out \
+  -v "$(pwd)/envvars.txt":/envvars.txt:ro \
   gcr.io/cluster-api-provider-vsphere/release/manifests:latest \
   -c management-cluster
 
-done generating ./out/addons.yaml
+done generating ./out/management-cluster/addons.yaml
 done generating ./config/default/manager_image_patch.yaml
-done generating ./out/cluster.yaml
-done generating ./out/machines.yaml
-done generating ./out/machineset.yaml
-done generating ./out/provider-components.yaml
+done generating ./out/management-cluster/cluster.yaml
+done generating ./out/management-cluster/machines.yaml
+done generating ./out/management-cluster/machineset.yaml
+done generating ./out/management-cluster/provider-components.yaml
 
 *** Finished creating initial example yamls in ./out
 
-The files ./out/cluster.yaml and ./out/machines.yaml need to be updated
+The files ./out/management-cluster/cluster.yaml and ./out/management-cluster/machines.yaml need to be updated
 with information about the desired Kubernetes cluster and vSphere environment
 on which the Kubernetes cluster will be created.
 
@@ -128,14 +123,14 @@ Once the manifests are generated, `clusterctl` may be used to create the managem
 clusterctl create cluster \
   --provider vsphere \
   --bootstrap-type kind \
-  --cluster management-cluster/cluster.yaml \
-  --machines management-cluster/machines.yaml \
-  --provider-components management-cluster/provider-components.yaml \
-  --addon-components management-cluster/addons.yaml \
-  --kubeconfig-out management-cluster/kubeconfig
+  --cluster ./out/management-cluster/cluster.yaml \
+  --machines ./out/management-cluster/machines.yaml \
+  --provider-components ./out/management-cluster/provider-components.yaml \
+  --addon-components ./out/management-cluster/addons.yaml \
+  --kubeconfig-out ./out/management-cluster/kubeconfig
 ```
 
-Once `clusterctl` has completed successfully, the file `management-cluster/kubeconfig` may be used to access the new management cluster. This is the **admin** `kubeconfig` for the management cluster, and it may be used to spin up additional clusters with Cluster API. However, the creation of roles with limited access, is recommended before creating additional clusters.
+Once `clusterctl` has completed successfully, the file `./out/management-cluster/kubeconfig` may be used to access the new management cluster. This is the **admin** `kubeconfig` for the management cluster, and it may be used to spin up additional clusters with Cluster API. However, creating roles with limited access is recommended before creating additional clusters.
 
 **NOTE**: From this point forward `clusterctl` is no longer required to provision new clusters. Workload clusters should be provisioned by applying Cluster API resources directly on the management cluster using `kubectl`.
 
@@ -146,21 +141,16 @@ With your management cluster bootstrapped, it's time to reap the benefits of Clu
 Using the same Docker command as above, generate resources for a new cluster, this time with a different name:
 
 ```shell
-# create the output directory for the workload cluster manifests,
-# only required for Linux to work around permissions issues on volume mounts
-$ mkdir -p workload-cluster-1
-
 $ docker run --rm \
-  --user "$(id -u):$(id -g)" \
-  -v "$(pwd)/workload-cluster-1":/out \
-  -v "$(pwd)/envvars.txt":/out/envvars.txt:ro \
-  gcr.io/cluster-api-provider-vsphere/release/manifests:latest \
-  -c workload-cluster-1
+  -v "$(pwd)":/out \
+  -v "$(pwd)/envvars.txt":/envvars.txt:ro \
+  gcr.io/cluster-api-provider-vsphere/release/manifests:latest \
+  -c workload-cluster-1
 ```
 
 **NOTE**: The above step is not required to manage your Cluster API resources at this point but is used to simplify this guide. You should manage your Cluster API resources in the same way you would manage your Kubernetes application manifests. Please use the generated manifests only as a reference.
 
-The Cluster and Machine resource in `workload-cluster-1/cluster.yaml` and `workload-cluster-1/machines.yaml` defines the workload cluster with the initial control plane node:
+The Cluster and Machine resources in `./out/workload-cluster-1/cluster.yaml` and `./out/workload-cluster-1/machines.yaml` define the workload cluster with the initial control plane node:
 
 ```yaml
 ---
@@ -212,7 +202,7 @@ spec:
   controlPlane: "1.13.6"
 ```
 
-To add 3 additional worker nodes to your cluster, see the generated machineset file `workload-cluster-1/machineset.yaml`:
+To add 3 additional worker nodes to your cluster, see the generated machineset file `./out/workload-cluster-1/machineset.yaml`:
 
 ```yaml
 apiVersion: "cluster.k8s.io/v1alpha1"
@@ -258,27 +248,27 @@ Use `kubectl` with the `kubeconfig` for the management cluster to provision the
 1. Export the management cluster's `kubeconfig` file:
 
    ```shell
-   export KUBECONFIG="$(pwd)/management-cluster/kubeconfig"
+   export KUBECONFIG="$(pwd)/out/management-cluster/kubeconfig"
   ```
 
 2. Create the workload cluster by applying the cluster manifest:
 
    ```shell
-   $ kubectl apply -f workload-cluster-1/cluster.yaml
+   $ kubectl apply -f ./out/workload-cluster-1/cluster.yaml
    cluster.cluster.k8s.io/workload-cluster-1 created
    ```
 
 3. Create the control plane nodes for the workload cluster by applying the machines manifest:
 
    ```shell
-   $ kubectl apply -f workload-cluster-1/machines.yaml
+   $ kubectl apply -f ./out/workload-cluster-1/machines.yaml
    machine.cluster.k8s.io/workload-cluster-1-controlplane-1 created
    ```
 
 4. Create the worker nodes for the workload cluster by applying the machineset manifest:
 
    ```shell
-   $ kubectl apply -f workload-cluster-1/machineset.yaml
+   $ kubectl apply -f ./out/workload-cluster-1/machineset.yaml
    machineset.cluster.k8s.io/workload-cluster-1-machineset-1 created
    ```
 

@@ -299,9 +289,9 @@ The `kubeconfig` file to access workload clusters should be accessible as a Kube
299289

300290
```shell
301291
kubectl get secret workload-cluster-1-kubeconfig -o=jsonpath='{.data.value}' | \
302-
{ base64 -d 2>/dev/null || base64 -D; } >workload-cluster-1/kubeconfig
292+
{ base64 -d 2>/dev/null || base64 -D; } >./out/workload-cluster-1/kubeconfig
303293
```
304294

305-
The new `workload-cluster-1/kubeconfig` file may now be used to access the workload cluster.
295+
The new `./out/workload-cluster-1/kubeconfig` file may now be used to access the workload cluster.
306296

307297
**NOTE**: Workload clusters do not have any addons applied aside from those added by kubeadm. Nodes in your workload clusters will be in the `NotReady` state until you apply a CNI addon. The `addons.yaml` files generated above have a default Calico addon which you can use, otherwise apply custom addons based on your use-case.
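The `{ base64 -d 2>/dev/null || base64 -D; }` construct above exists because GNU coreutils `base64` decodes with `-d` while older BSD/macOS builds use `-D`; silencing the first command's stderr lets the fallback run cleanly. A minimal sketch of the same portability pattern:

```shell
#!/bin/sh
# Decode base64 portably: try GNU-style -d first, fall back to BSD-style -D.
# The sample string is illustrative only.
encoded="$(printf 'hello' | base64)"
decoded="$(printf '%s' "$encoded" | { base64 -d 2>/dev/null || base64 -D; })"
echo "$decoded"
```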

hack/generate-yaml.sh

Lines changed: 18 additions & 9 deletions
@@ -20,10 +20,11 @@ set -o pipefail
 
 # Change directories to the parent directory of the one in which this
 # script is located.
-cd "$(dirname "${BASH_SOURCE[0]}")/.."
+cd "${WORKDIR:-$(dirname "${BASH_SOURCE[0]}")/..}"
+BUILDDIR="${BUILDDIR:-.}"
 
 OUT_DIR="${OUT_DIR:-}"
-TPL_DIR=./cmd/clusterctl/examples/vsphere
+TPL_DIR="${BUILDDIR}"/cmd/clusterctl/examples/vsphere
 
 OVERWRITE=
 CLUSTER_NAME="${CLUSTER_NAME:-capv-mgmt-example}"
@@ -85,8 +86,8 @@ export MANAGER_IMAGE="${CAPV_MANAGER_IMAGE}"
 mkdir -p "${OUT_DIR}"
 
 # Load an envvars.txt file if one is found.
-# shellcheck disable=SC1090
-[ -e "${OUT_DIR}/envvars.txt" ] && source "${OUT_DIR}/envvars.txt"
+# shellcheck disable=SC1091
+[ "${DOCKER_ENABLED-}" ] && [ -e "/envvars.txt" ] && source "/envvars.txt"
 
 # shellcheck disable=SC2034
 ADDON_TPL_FILE="${TPL_DIR}"/addons.yaml.template
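The new guard sources `/envvars.txt` only when the script is running inside the image (where `DOCKER_ENABLED` is set) and the file was actually mounted. A hedged sketch of the same conditional-source pattern, using a temporary file and an illustrative variable in place of the real `/envvars.txt` mount:

```shell
#!/bin/sh
# Sketch of the guard pattern; the temp file and VSPHERE_USER value are
# illustrative only, not taken from a real envvars.txt.
DOCKER_ENABLED=1
envfile="$(mktemp)"
echo 'VSPHERE_USER=administrator@vsphere.local' > "$envfile"

# Source the env file only when the flag is set AND the file exists;
# "." is the POSIX spelling of bash's "source".
[ "${DOCKER_ENABLED-}" ] && [ -e "$envfile" ] && . "$envfile"

echo "$VSPHERE_USER"
rm -f "$envfile"
```

Because `${DOCKER_ENABLED-}` expands to empty rather than erroring when the variable is unset, the guard is also safe under `set -u`.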
@@ -103,8 +104,8 @@ MACHINESET_TPL_FILE="${TPL_DIR}"/machineset.yaml.template
 # shellcheck disable=SC2034
 MACHINESET_OUT_FILE="${OUT_DIR}"/machineset.yaml
 
-CAPI_CFG_DIR=./vendor/sigs.k8s.io/cluster-api/config
-CAPV_CFG_DIR=./config
+CAPI_CFG_DIR="${BUILDDIR}"/vendor/sigs.k8s.io/cluster-api/config
+CAPV_CFG_DIR="${BUILDDIR}"/config
 
 COMP_OUT_FILE="${OUT_DIR}"/provider-components.yaml
 # shellcheck disable=SC2034
@@ -176,8 +177,12 @@ verify_cpu_mem_dsk VSPHERE_DISK_GIB 20
 record_and_export KUBERNETES_VERSION ":-${KUBERNETES_VERSION}"
 
 do_envsubst() {
-  python hack/envsubst.py >"${2}" <"${1}"
-  echo "done generating ${2}"
+  python "${BUILDDIR}/hack/envsubst.py" >"${2}" <"${1}"
+  if [ "${DOCKER_ENABLED-}" ]; then
+    echo "done generating ${2/\/build/.}"
+  else
+    echo "done generating ${2}"
+  fi
 }
 
 # Create the output files by substituting the templates with envrionment vars.
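The `${2/\/build/.}` expansion in `do_envsubst` is bash pattern substitution: it rewrites the first occurrence of `/build` in the output path to `.`, so the "done generating" messages show host-relative paths instead of container paths. A small bash-only sketch with an illustrative path:

```shell
#!/usr/bin/env bash
# ${var/pattern/replacement} replaces the first match of pattern in var;
# the slash in "/build" is escaped so it is not read as a delimiter.
out_file="/build/out/management-cluster/addons.yaml"   # illustrative path
echo "done generating ${out_file/\/build/.}"
# → done generating ./out/management-cluster/addons.yaml
```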
@@ -191,7 +196,7 @@ done
   kustomize build "${CAPI_CFG_DIR}"/default/; } >"${COMP_OUT_FILE}"
 
 cat <<EOF
-Done generating ${COMP_OUT_FILE}
+done generating ${COMP_OUT_FILE}
 
 *** Finished creating initial example yamls in ${OUT_DIR}
 
@@ -201,3 +206,7 @@ Done generating ${COMP_OUT_FILE}
 
 Enjoy!
 EOF
+
+# If running in Docker then ensure the contents of the OUT_DIR have
+# the same owner as the volume mounted to the /out directory.
+[ "${DOCKER_ENABLED}" ] && chown -R "$(stat -c '%u:%g' /out)" "${OUT_DIR}"
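The `chown` line added at the end matters because the container runs as root: `stat -c '%u:%g'` reads the uid:gid owning the bind-mounted `/out` directory, and `chown -R` applies that ownership to everything generated, replacing the old `--user "$(id -u):$(id -g)"` workaround. A sketch of the same idea against a scratch directory (GNU `stat`; BSD/macOS `stat` would need `-f '%u:%g'` instead):

```shell
#!/bin/sh
# Read a directory's owner as "uid:gid" with GNU stat, then apply the same
# ownership to files created inside it (a no-op here, since the current
# user already owns them; in the container the copy runs as root).
dir="$(mktemp -d)"
touch "$dir/generated.yaml"
owner="$(stat -c '%u:%g' "$dir")"
chown -R "$owner" "$dir"
echo "$owner"
rm -rf "$dir"
```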

hack/tools/generate-yaml/Dockerfile

Lines changed: 8 additions & 3 deletions
@@ -20,6 +20,7 @@ FROM ${BASE_IMAGE}
 LABEL "maintainer" "Andrew Kutz <[email protected]>"
 
 # Run things out of the /build directory.
+ENV BUILDDIR /build
 WORKDIR /build
 
 # Copy in the hack tooling.
@@ -40,7 +41,11 @@ RUN find . -type d -exec chmod 0777 \{\} \;
 ARG CAPV_MANAGER_IMAGE=gcr.io/cluster-api-provider-vsphere/ci/manager:latest
 ENV CAPV_MANAGER_IMAGE=${CAPV_MANAGER_IMAGE}
 
-# The YAML is always written to the /out directory. Mount the volumes there.
-ENV OUT_DIR /out
+# Change the working directory to /out.
+ENV WORKDIR /out
+WORKDIR /out
 
-ENTRYPOINT [ "./hack/generate-yaml.sh" ]
+# Indicate that this is being executed in a container.
+ENV DOCKER_ENABLED 1
+
+ENTRYPOINT [ "/build/hack/generate-yaml.sh" ]
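With `ENV DOCKER_ENABLED 1` baked into the image, `generate-yaml.sh` can detect whether it is running inside the container and adjust its behavior (where it sources `envvars.txt`, how it prints paths, whether to fix ownership). A minimal sketch of that detection pattern:

```shell
#!/bin/sh
# ${DOCKER_ENABLED-} expands to empty (not an error) when the variable is
# unset, so this test is safe even under "set -u"; the image sets it to 1.
if [ "${DOCKER_ENABLED-}" ]; then
  echo "in-container"
else
  echo "on-host"
fi
```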
