Commit cc1c188

Author: Sameer Naik
docs: improvements to the quickstart guides
1 parent 5a725a5 commit cc1c188

3 files changed: +51 −41 lines changed

docs/quickstart-aks.md

Lines changed: 13 additions & 7 deletions
@@ -21,8 +21,8 @@ This document walks you through setting up an Azure Kubernetes Service (AKS) clu
 * [Microsoft Azure CLI](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest)
 * [Kubernetes CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
 * [BKPR installer](install.md)
-* [kubecfg](https://github.com/ksonnet/kubecfg/releases)
-* [jq](https://stedolan.github.io/jq/)
+* [`kubecfg`](https://github.com/ksonnet/kubecfg/releases)
+* [`jq`](https://stedolan.github.io/jq/)
 
 ### DNS requirements
 

@@ -132,7 +132,7 @@ Please note, it can take a while for the DNS changes to propagate.
 
 ### Step 4: Access logging and monitoring dashboards
 
-After the DNS changes have propagated, you should be able to access the Prometheus and Kibana dashboards by visiting `https://prometheus.${BKPR_DNS_ZONE}` and `https://kibana.${BKPR_DNS_ZONE}` respectively.
+After the DNS changes have propagated, you should be able to access the Prometheus, Kibana and Grafana dashboards by visiting `https://prometheus.${BKPR_DNS_ZONE}`, `https://kibana.${BKPR_DNS_ZONE}` and `https://grafana.${BKPR_DNS_ZONE}` respectively.
 
 Congratulations! You can now deploy your applications on the Kubernetes cluster and BKPR will help you manage and monitor them effortlessly.
 

@@ -171,7 +171,13 @@ Re-run the `kubeprod install` command, from the [Deploy BKPR](#step-2-deploy-bkp
 kubecfg delete kubeprod-manifest.jsonnet
 ```
 
-### Step 2: Delete the Azure DNS zone
+### Step 2: Wait for the `kubeprod` namespace to be deleted
+
+```bash
+kubectl wait --for=delete ns/kubeprod --timeout=300s
+```
+
+### Step 3: Delete the Azure DNS zone
 
 ```bash
 az network dns zone delete \
@@ -181,7 +187,7 @@ Re-run the `kubeprod install` command, from the [Deploy BKPR](#step-2-deploy-bkp
 
 Additionally you should remove the NS entries configured at the domain registrar.
 
-### Step 3: Delete Azure app registrations
+### Step 4: Delete Azure app registrations
 
 ```bash
 az ad app delete \
@@ -192,15 +198,15 @@ Re-run the `kubeprod install` command, from the [Deploy BKPR](#step-2-deploy-bkp
   --id $(jq -r .oauthProxy.client_id kubeprod-autogen.json)
 ```
 
-### Step 4: Delete the AKS cluster
+### Step 5: Delete the AKS cluster
 
 ```bash
 az aks delete \
   --name ${AZURE_AKS_CLUSTER} \
   --resource-group ${AZURE_RESOURCE_GROUP}
 ```
 
-### Step 5: Delete the Azure resource group
+### Step 6: Delete the Azure resource group
 
 ```bash
 az group delete --name ${AZURE_RESOURCE_GROUP}
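The teardown added in this commit gates on `kubectl wait --for=delete ns/kubeprod --timeout=300s`. On old kubectl clients that predate `kubectl wait`, the same behaviour can be approximated with a polling loop. A minimal sketch, using a stub probe command in place of the real `kubectl get ns kubeprod` (the marker file, function name, and timeouts are illustrative, not part of BKPR):

```shell
# Poll until a probe command fails (resource gone), or give up after a timeout.
# Real use would be: wait_for_delete "kubectl get ns kubeprod" 300
wait_for_delete() {
  probe=$1
  timeout=$2
  elapsed=0
  while eval "$probe" >/dev/null 2>&1; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out waiting for delete" >&2
      return 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
}

# Stub probe: the "resource" exists while the marker file does.
touch /tmp/ns-kubeprod-marker
( sleep 2; rm -f /tmp/ns-kubeprod-marker ) &
wait_for_delete "test -e /tmp/ns-kubeprod-marker" 30 && echo "namespace deleted"
```

The probe is passed as a string and re-evaluated each iteration, so any command whose exit status reflects "still exists" works unchanged.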

docs/quickstart-eks.md

Lines changed: 27 additions & 21 deletions
@@ -18,12 +18,12 @@ This document walks you through setting up an Amazon Elastic Container Service f
 
 * [Amazon AWS account](https://aws.amazon.com/)
 * [Amazon CLI](https://aws.amazon.com/cli/)
-* [`eksctl`](https://aws.amazon.com/blogs/opensource/eksctl-eks-cluster-one-command/)
+* [Amazon EKS CLI](https://eksctl.io/)
 * [Kubernetes CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
-* [`aws-iam-authenticator`](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html)
+* [AWS IAM Authenticator for Kubernetes](https://docs.aws.amazon.com/eks/latest/userguide/install-aws-iam-authenticator.html)
 * [BKPR installer](install.md)
-* [kubecfg](https://github.com/ksonnet/kubecfg/releases)
-* [jq](https://stedolan.github.io/jq/)
+* [`kubecfg`](https://github.com/ksonnet/kubecfg/releases)
+* [`jq`](https://stedolan.github.io/jq/)
 
 ### DNS requirements
 
@@ -55,11 +55,13 @@ In this section, you will deploy an Amazon Elastic Container Service for Kuberne
 
 ```bash
 eksctl create cluster --name=${AWS_EKS_CLUSTER} \
-  --color=fabulous \
   --nodes=3 \
   --version=${AWS_EKS_K8S_VERSION}
 ```
-> **NOTE**: At the time of this writing, EKS clusters created with `eksctl` are affected by a [bug](https://github.com/awslabs/amazon-eks-ami/issues/193) that causes Elasticsearch to get into a crashloop. A temporary workaround consists of overriding the AMI used when creating the cluster. The AMI named `amazon-eks-node-1.10-v20190211` is known to work. You will need to find its ID that corresponds to the region and zone where you are creating the cluster. For instance:
+
+> **TIP**: The `--ssh-access` command line flag to the `eksctl create cluster` command configures SSH access to the Kubernetes nodes. This is really useful when debugging issues that require you to log in to the nodes.
+
+> **NOTE**: At the time of this writing, EKS clusters created with `eksctl` are affected by a [bug](https://github.com/awslabs/amazon-eks-ami/issues/193) that causes Elasticsearch to get into a crashloop. The workaround consists of overriding the AMI used when creating the cluster. The AMI named `amazon-eks-node-1.10-v20190211` is known to work. You will need to find its ID that corresponds to the region and zone where you are creating the cluster. For instance:
 >
 > | Region | AMI ID |
 > |:--------------:|:-----------------------:|
@@ -76,7 +78,6 @@ In this section, you will deploy an Amazon Elastic Container Service for Kuberne
 >
 > ```bash
 > eksctl create cluster --name=${AWS_EKS_CLUSTER} \
->   --color=fabulous \
 >   --nodes=3 \
 >   --version=${AWS_EKS_K8S_VERSION} \
 >   --node-ami ami-074583f8d5a05e27b
@@ -115,20 +116,19 @@ If you are new to using BKPR on EKS, or if you want to create a new User Pool in
 
 <p align="center"><img src="eks/1-new-user-pool.png" width=840/></p>
 
-4. Go to the **Policies** section and select a password strength. Be sure to also select the **Only allow administrators to create users** option as shown below, otherwise anyone could potentially create a new user and log in:
+4. Go to the **Policies** section and select the **Only allow administrators to create users** option, otherwise anyone would be able to sign up and gain access to services running in the cluster. **Save changes** before continuing to the next step:
 
 <p align="center"><img src="eks/2-policies.png" width=840/></p>
 
-5. Feel free to customize other sections, like **Tags**, to your liking. Once done, go back to the **Review** section:
+5. Feel free to customize other sections, like **Tags**, to your liking. Once done, go to the **Review** section and click on the **Create pool** button:
 
 <p align="center"><img src="eks/3-review.png" width=840/></p>
 
-6. Click the **Create pool** button.
-7. Go to **App integration -> Domain name** setting and configure the Amazon Cognito domain, which has to be unique to all users in an AWS Region. Once done, click the **Save changes** button:
+6. Go to **App integration > Domain name** setting and configure the Amazon Cognito domain, which has to be unique across all users in an AWS Region. Once done, click the **Save changes** button:
 
 <p align="center"><img src="eks/4-domain.png" width=840/></p>
 
-8. Select the **General settings** option, note the **Pool Id** and export its value:
+7. Select the **General settings** option, note the **Pool Id** and export its value:
 
 ```bash
 export AWS_COGNITO_USER_POOL_ID=eu-central-1_sHSdWT6VL
@@ -181,7 +181,7 @@ Please note that it can take a while for the DNS changes to propagate.
 
 ### Step 5: Access logging and monitoring dashboards
 
-After the DNS changes have propagated, you should be able to access the Prometheus and Kibana dashboards by visiting `https://prometheus.${BKPR_DNS_ZONE}` and `https://kibana.${BKPR_DNS_ZONE}` respectively.
+After the DNS changes have propagated, you should be able to access the Prometheus, Kibana and Grafana dashboards by visiting `https://prometheus.${BKPR_DNS_ZONE}`, `https://kibana.${BKPR_DNS_ZONE}` and `https://grafana.${BKPR_DNS_ZONE}` respectively. Log in with the credentials created in the [Create a user](#create-a-user) step.
 
 Congratulations! You can now deploy your applications on the Kubernetes cluster and BKPR will help you manage and monitor them effortlessly.
 
@@ -220,14 +220,20 @@ Re-run the `kubeprod install` command from the [Deploy BKPR](#step-3-deploy-bkpr
 kubecfg delete kubeprod-manifest.jsonnet
 ```
 
-### Step 2: Delete the Hosted Zone in Route 53
+### Step 2: Wait for the `kubeprod` namespace to be deleted
+
+```bash
+kubectl wait --for=delete ns/kubeprod --timeout=300s
+```
+
+### Step 3: Delete the Hosted Zone in Route 53
 
 ```bash
 BKPR_DNS_ZONE_ID=$(aws route53 list-hosted-zones-by-name --dns-name "${BKPR_DNS_ZONE}" \
   --max-items 1 \
   --query 'HostedZones[0].Id' \
   --output text)
-aws route53 list-resource-record-sets --hosted-zone-id \${BKPR_DNS_ZONE_ID} \
+aws route53 list-resource-record-sets --hosted-zone-id ${BKPR_DNS_ZONE_ID} \
   --query '{ChangeBatch:{Changes:ResourceRecordSets[?Type != `NS` && Type != `SOA`].{Action:`DELETE`,ResourceRecordSet:@}}}' \
   --output json > changes
 
@@ -243,26 +249,26 @@ Re-run the `kubeprod install` command from the [Deploy BKPR](#step-3-deploy-bkpr
 
 Additionally you should remove the NS entries configured at the domain registrar.
 
-### Step 3: Delete the BKPR user
+### Step 4: Delete the BKPR user
 
 ```bash
 ACCOUNT=$(aws sts get-caller-identity | jq -r .Account)
 aws iam detach-user-policy --user-name "bkpr-${BKPR_DNS_ZONE}" --policy-arn "arn:aws:iam::${ACCOUNT}:policy/bkpr-${BKPR_DNS_ZONE}"
 aws iam delete-policy --policy-arn "arn:aws:iam::${ACCOUNT}:policy/bkpr-${BKPR_DNS_ZONE}"
-ACCESS_KEY_ID=$(cat kubeprod-autogen.json | jq -r .externalDns.aws_access_key_id)
+ACCESS_KEY_ID=$(jq -r .externalDns.aws_access_key_id kubeprod-autogen.json)
 aws iam delete-access-key --user-name "bkpr-${BKPR_DNS_ZONE}" --access-key-id "${ACCESS_KEY_ID}"
 aws iam delete-user --user-name "bkpr-${BKPR_DNS_ZONE}"
 ```
 
-### Step 4: Delete the BKPR App Client
+### Step 5: Delete the BKPR App Client
 
 ```bash
-USER_POOL=$(cat kubeprod-autogen.json | jq -r .oauthProxy.aws_user_pool_id)
-CLIENT_ID=$(cat kubeprod-autogen.json | jq -r .oauthProxy.client_id)
+USER_POOL=$(jq -r .oauthProxy.aws_user_pool_id kubeprod-autogen.json)
+CLIENT_ID=$(jq -r .oauthProxy.client_id kubeprod-autogen.json)
 aws cognito-idp delete-user-pool-client --user-pool-id "${USER_POOL}" --client-id "${CLIENT_ID}"
 ```
 
-### Step 6: Delete the EKS cluster
+### Step 6: Delete the EKS cluster
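Several teardown commands above read generated credentials out of `kubeprod-autogen.json` with `jq` (the commit also tidies these from `cat file | jq` to the equivalent `jq ... file`). If `jq` is unavailable, `python3` can extract the same fields. A sketch against a placeholder file (the values, and the reduced set of keys, are invented for illustration; the real file is generated by `kubeprod install`):

```shell
# Placeholder kubeprod-autogen.json containing only the fields the
# teardown steps reference; values are fake.
cat > kubeprod-autogen.json <<'EOF'
{
  "externalDns": {"aws_access_key_id": "AKIAEXAMPLE"},
  "oauthProxy": {"aws_user_pool_id": "eu-central-1_sHSdWT6VL", "client_id": "abc123"}
}
EOF

# Equivalent of: jq -r .oauthProxy.client_id kubeprod-autogen.json
CLIENT_ID=$(python3 -c "import json; print(json.load(open('kubeprod-autogen.json'))['oauthProxy']['client_id'])")
echo "${CLIENT_ID}"
```

The `jq -r` flag prints the raw string without quotes, which is what makes the value safe to splice into `--client-id "${CLIENT_ID}"`; `print()` on a decoded JSON string gives the same raw form.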

docs/quickstart-gke.md

Lines changed: 11 additions & 13 deletions
@@ -24,8 +24,8 @@ This document walks you through setting up a Google Kubernetes Engine (GKE) clus
 * [Google Cloud SDK](https://cloud.google.com/sdk/)
 * [Kubernetes CLI](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
 * [BKPR installer](install.md)
-* [kubecfg](https://github.com/ksonnet/kubecfg/releases)
-* [jq](https://stedolan.github.io/jq/)
+* [`kubecfg`](https://github.com/ksonnet/kubecfg/releases)
+* [`jq`](https://stedolan.github.io/jq/)
 
 ### DNS and G Suite requirements
 
@@ -215,15 +215,21 @@ Re-run the `kubeprod install` command, from the [Deploy BKPR](#step-2-deploy-bkp
 kubecfg delete kubeprod-manifest.jsonnet
 ```
 
-### Step 2: Delete the Cloud DNS zone
+### Step 2: Wait for the `kubeprod` namespace to be deleted
+
+```bash
+kubectl wait --for=delete ns/kubeprod --timeout=300s
+```
+
+### Step 3: Delete the Cloud DNS zone
 
 ```bash
 BKPR_DNS_ZONE_NAME=$(gcloud dns managed-zones list --filter dnsName:${BKPR_DNS_ZONE} --format='value(name)')
 gcloud dns record-sets import /dev/null --zone ${BKPR_DNS_ZONE_NAME} --delete-all-existing
 gcloud dns managed-zones delete ${BKPR_DNS_ZONE_NAME}
 ```
 
-### Step 3: Delete service account and IAM profile
+### Step 4: Delete service account and IAM profile
 
 ```bash
 GCLOUD_SERVICE_ACCOUNT=$(gcloud iam service-accounts list --filter "displayName:${BKPR_DNS_ZONE} AND email:bkpr-edns" --format='value(email)')
@@ -233,20 +239,12 @@ Re-run the `kubeprod install` command, from the [Deploy BKPR](#step-2-deploy-bkp
 gcloud iam service-accounts delete ${GCLOUD_SERVICE_ACCOUNT}
 ```
 
-### Step 4: Delete the GKE cluster
+### Step 5: Delete the GKE cluster
 
 ```bash
 gcloud container clusters delete ${GCLOUD_K8S_CLUSTER}
 ```
 
-### Step 5: Delete any leftover GCE disks
-
-```bash
-GCLOUD_DISKS_FILTER=${GCLOUD_K8S_CLUSTER:0:18}
-gcloud compute disks delete --zone ${BKPR_DNS_ZONE} \
-  $(gcloud compute disks list --filter name:${GCLOUD_DISKS_FILTER%-} --format='value(name)')
-```
-
 ## Further reading
 
 - [BKPR FAQ](FAQ.md)
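The disk-cleanup step removed in the hunk above leaned on two bash parameter expansions that are easy to misread. A standalone illustration with a hypothetical cluster name (these are bash expansions, not POSIX sh):

```shell
# ${var:0:18} takes the first 18 characters of $var; GCE persistent-disk
# names embed a truncated copy of the cluster name, hence the filter.
# ${var%-} strips a single trailing "-" the truncation may leave behind.
GCLOUD_K8S_CLUSTER="bkpr-test-cluster-one"   # hypothetical name
GCLOUD_DISKS_FILTER=${GCLOUD_K8S_CLUSTER:0:18}
echo "${GCLOUD_DISKS_FILTER}"    # -> bkpr-test-cluster-
echo "${GCLOUD_DISKS_FILTER%-}"  # -> bkpr-test-cluster
```

Without the `%-` trim, the `gcloud compute disks list --filter name:...` pattern could end in a dangling dash and match nothing.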
