Commit dd8599c

Merge pull request #380 from andrewsykim/improve-docs
docs: clarify workflow using clusterctl and kubectl
2 parents 5c7030a + a167556 commit dd8599c

2 files changed: +33 −58 lines changed

README.md

Lines changed: 2 additions & 53 deletions
@@ -19,57 +19,6 @@ You can reach the maintainers of this project at:
 
 Participation in the Kubernetes community is governed by the [Kubernetes Code of Conduct](code-of-conduct.md).
 
-### Quick Start
+### Getting Started
 
-Go [here](docs/README.md) for an example of how to get up and going with the cluster api using vSphere.
-
-### Where to get the containers
-
-The containers for this provider are currently hosted at `gcr.io/cnx-cluster-api/`. Each release of the
-container are tagged with the release version appropriately. Please note, the release tagging changed to
-stay uniform with the main cluster api repo. Also note, these are docker containers. A container runtime
-must pull them. They cannot simply be downloaded.
-
-| vSphere provider version | container url |
-| --- | --- |
-| 0.1.0 | gcr.io/cnx-cluster-api/vsphere-cluster-api-provider:v0.1 |
-| 0.2.0 | gcr.io/cnx-cluster-api/vsphere-cluster-api-provider:0.2.0 |
-
-| main Cluster API version | container url |
-| --- | --- |
-| 0.1.0 | gcr.io/k8s-cluster-api/cluster-api-controller:0.1.0 |
-
-To use the appropriate version (instead of `:latest`), replace the version in the generated `provider-components.yaml`,
-described in the quick start guide.
-
-### Compatibility Matrix
-
-Below are tables showing the compatibility between versions of the vSphere provider, the main cluster api,
-kubernetes versions, and OSes. Please note, this table only shows version 0.2 of the vSphere provider. Due
-to the way this provider bootstrap nodes (e.g. using Ubuntu package manager to pull some components), there
-were changes in some packages that broke version 0.1 (but may get resolved at some point) so the compatibility
-tables for that provider version are not provided here.
-
-Compatibility matrix for Cluster API versions and the vSphere provider versions.
-
-| | Cluster API 0.1.0 |
-| --- | --- |
-| vSphere Provider 0.2.0 | ✓ |
-
-Compatibility matrix for the vSphere provider versions and Kubernetes versions.
-
-| | k8s 1.11.x | k8s 1.12.x | k8s 1.13.x | k8s 1.14.x |
-| --- | --- | --- | --- | --- |
-| vSphere Provider 0.2.0 | ✓ | ✓ | ✓ | ✓ |
-
-Compatibility matrix for the vSphere provider versions and node OS. Further OS support may be added in future releases.
-
-| | Ubuntu Xenial Cloud Image | Ubuntu Bionic Cloud Image |
-| --- | --- | --- |
-| vSphere Provider 0.2.0 | ✓ | ✓ |
-
-Users may download the cloud images here:
-
-[Ubuntu Xenial (16.04)](https://cloud-images.ubuntu.com/xenial/current/)
-
-[Ubuntu Bionic (18.04)](https://cloud-images.ubuntu.com/bionic/current/)
+See the [Getting Started](docs/getting_started.md) guide to get up and going with Cluster API for vSphere.

docs/getting_started.md

Lines changed: 31 additions & 5 deletions
@@ -166,14 +166,26 @@ path that it ran (i.e. `out/kubeconfig`). This is the **admin** kubeconfig file
 going forward to spin up multiple clusters using Cluster API, however, it is recommended that you create dedicated roles
 with limited access before doing so.
 
+Note that from this point forward, you no longer need to use `clusterctl` to provision clusters since your management cluster
+(the cluster used to manage workload clusters) has been created. Workload clusters should be provisioned by applying Cluster API resources
+directly on the management cluster using `kubectl`. More on this below.
+
 ## Managing Workload Clusters using the Management Cluster
 
 With your management cluster bootstrapped, it's time to reap the benefits of Cluster API. From this point forward,
 clusters and machines (belonging to a cluster) are simply provisioned by creating `cluster`, `machine` and `machineset` resources.
 
-Taking the generated `out/cluster.yaml` and `out/machine.yaml` file from earlier as a reference, you can create a cluster with the
-initial control plane node by just editing the name of the cluster and machine resource. For example, the following cluster and
-machine resource will provision a cluster named "prod-workload" with 1 initial control plane node:
+Using the same `prod-yaml` make target, generate Cluster API resources for a new cluster, this time with a different name:
+```
+$ CLUSTER_NAME=prod-workload make prod-yaml
+```
+
+**NOTE**: The `make prod-yaml` target is not required to manage your Cluster API resources at this point but is used to simplify this guide.
+You should manage your Cluster API resources in the same way you would manage your application yaml files for Kubernetes. Use the
+generated yaml files from `make prod-yaml` as a reference.
+
+The Cluster and Machine resource in `out/prod-workload/cluster.yaml` and `out/prod-workload/machines.yaml` defines your workload
+cluster with the initial control plane.
 
 ```yaml
 ---

@@ -227,7 +239,7 @@ spec:
 controlPlane: "1.13.6"
 ```
 
-To add 3 additional worker nodes to your cluster, create a machineset like the following:
+To add 3 additional worker nodes to your cluster, see the generated machineset file `out/prod-workload/machineset.yaml`:
 
 ```yaml
 apiVersion: "cluster.k8s.io/v1alpha1"

@@ -269,7 +281,17 @@ spec:
 controlPlane: "1.13.6"
 ```
 
-Run `kubectl apply -f` to apply the above files on your management cluster and it should start provisioning the new cluster.
+Run `kubectl apply -f` to apply the above files on your management cluster and it should start provisioning the new cluster:
+```bash
+$ cd out/prod-workload
+$ kubectl apply -f cluster.yaml
+cluster.cluster.k8s.io/prod-workload created
+$ kubectl apply -f machines.yaml
+machine.cluster.k8s.io/prod-workload-controlplane-1 created
+$ kubectl apply -f machineset.yaml
+machineset.cluster.k8s.io/prod-workload-machineset-1 created
+```
+
 Clusters that are provisioned by the management cluster that run your application workloads are called [Workload Clusters](https://github.com/kubernetes-sigs/cluster-api/blob/master/docs/book/GLOSSARY.md#workload-cluster).
 
 The `kubeconfig` file to access workload clusters should be accessible as a Kubernetes Secret on the management cluster. As of today, the

@@ -286,3 +308,7 @@ $ kubectl get secret prod-workload-kubeconfig -o=jsonpath='{.data.value}' | base
 ```
 
 Now that you have the `kubeconfig` for your Workload Cluster, you can start deploying your applications there.
+
+**NOTE**: workload clusters do not have any addons applied aside from those added by kubeadm. Nodes in your workload clusters
+will be in the `NotReady` state until you apply a CNI addon. The `addons.yaml` file generated from `make prod-yaml` has a default calico
+addon which you can use, otherwise apply custom addons based on your use-case.
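As a minimal sketch of the follow-up to that new NOTE: the commands below fetch the workload cluster kubeconfig from the `prod-workload-kubeconfig` Secret (using the same `jsonpath` query shown in the last hunk header) and then apply the generated CNI addon. The `out/prod-workload/addons.yaml` path is an assumption inferred from the other files produced by `make prod-yaml`; substitute your own addon manifests if your layout differs.

```bash
# Sketch only: fetch the workload cluster kubeconfig from the Secret on the
# management cluster, then apply a CNI addon so nodes move out of NotReady.
$ kubectl get secret prod-workload-kubeconfig -o=jsonpath='{.data.value}' | base64 -d > out/prod-workload/kubeconfig
# addons.yaml path below is assumed, not confirmed by this commit.
$ kubectl --kubeconfig=out/prod-workload/kubeconfig apply -f out/prod-workload/addons.yaml
$ kubectl --kubeconfig=out/prod-workload/kubeconfig get nodes
```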
