@@ -132,7 +132,7 @@ Please note, it can take a while for the DNS changes to propagate.
 
 ### Step 4: Access logging and monitoring dashboards
 
-After the DNS changes have propagated, you should be able to access the Prometheus and Kibana dashboards by visiting `https://prometheus.${BKPR_DNS_ZONE}` and `https://kibana.${BKPR_DNS_ZONE}` respectively.
+After the DNS changes have propagated, you should be able to access the Prometheus, Kibana and Grafana dashboards by visiting `https://prometheus.${BKPR_DNS_ZONE}`, `https://kibana.${BKPR_DNS_ZONE}` and `https://grafana.${BKPR_DNS_ZONE}` respectively.
 
 Congratulations! You can now deploy your applications on the Kubernetes cluster and BKPR will help you manage and monitor them effortlessly.
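As a quick sanity check, the dashboard URLs above can be derived from `BKPR_DNS_ZONE` in the shell. A minimal sketch, in which `example.com` is a placeholder zone rather than a value from the guide:

```shell
# Placeholder zone; substitute the zone passed to `kubeprod install`.
BKPR_DNS_ZONE=example.com

# Print the URL of each dashboard exposed by BKPR.
for dashboard in prometheus kibana grafana; do
  echo "https://${dashboard}.${BKPR_DNS_ZONE}"
done
```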
@@ -171,7 +171,13 @@ Re-run the `kubeprod install` command, from the [Deploy BKPR](#step-2-deploy-bkp
 kubecfg delete kubeprod-manifest.jsonnet
 ```
 
-### Step 2: Delete the Azure DNS zone
+### Step 2: Wait for the `kubeprod` namespace to be deleted
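The waiting step introduced by the new heading can be expressed as a single `kubectl` invocation. A sketch, assuming `kubectl` is pointed at the cluster being torn down; the command and namespace name are inferred from the heading, not quoted from the patch:

```shell
# Block until the kubeprod namespace (and everything in it) is gone.
# Adjust the timeout to taste; deletion can take several minutes.
kubectl wait --for=delete namespace/kubeprod --timeout=300s
```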
@@ -55,11 +55,13 @@ In this section, you will deploy an Amazon Elastic Container Service for Kuberne
 
 ```bash
 eksctl create cluster --name=${AWS_EKS_CLUSTER} \
-                      --color=fabulous \
                       --nodes=3 \
                       --version=${AWS_EKS_K8S_VERSION}
 ```
-> **NOTE**: At the time of this writing, EKS clusters created with `eksctl` are affected by a [bug](https://github.com/awslabs/amazon-eks-ami/issues/193) that causes Elasticsearch to get into a crashloop. A temporary workaround consists of overriding the AMI used when creating the cluster. The AMI named `amazon-eks-node-1.10-v20190211` is known to work. You will need to find the AMI ID that corresponds to the region and zone where you are creating the cluster. For instance:
+
+> **TIP**: The `--ssh-access` command line flag to the `eksctl create cluster` command configures SSH access to the Kubernetes nodes. This is really useful when debugging issues that require you to log in to the nodes.
+
+> **NOTE**: At the time of this writing, EKS clusters created with `eksctl` are affected by a [bug](https://github.com/awslabs/amazon-eks-ami/issues/193) that causes Elasticsearch to get into a crashloop. The workaround consists of overriding the AMI used when creating the cluster. The AMI named `amazon-eks-node-1.10-v20190211` is known to work. You will need to find the AMI ID that corresponds to the region and zone where you are creating the cluster. For instance:
 >
 > | Region | AMI ID |
 > |:--------------:|:-----------------------:|
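The AMI ID for a given region can also be looked up from the command line rather than from a table. A sketch, assuming the AWS CLI is configured; `602401143452` is Amazon's EKS AMI publisher account, the region is an example, and the name filter matches the AMI called out in the note:

```shell
# Look up the ID of the known-good EKS node AMI in a given region.
aws ec2 describe-images \
    --region us-east-1 \
    --owners 602401143452 \
    --filters "Name=name,Values=amazon-eks-node-1.10-v20190211" \
    --query 'Images[0].ImageId' \
    --output text
```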
@@ -76,7 +78,6 @@ In this section, you will deploy an Amazon Elastic Container Service for Kuberne
-4. Go to the **Policies** section and select a password strength. Be sure to also select the **Only allow administrators to create users** option as shown below, otherwise anyone could potentially create a new user and log in:
+4. Go to the **Policies** section and select the **Only allow administrators to create users** option, otherwise anyone would be able to sign up and gain access to services running in the cluster. **Save changes** before continuing to the next step:
-5. Feel free to customize other sections, like **Tags**, to your liking. Once done, go back to the **Review** section:
+5. Feel free to customize other sections, like **Tags**, to your liking. Once done, go to the **Review** section and click on the **Create pool** button:
-7. Go to **App integration -> Domain name** setting and configure the Amazon Cognito domain, which has to be unique across all users in an AWS Region. Once done, click the **Save changes** button:
+6. Go to **App integration > Domain name** setting and configure the Amazon Cognito domain, which has to be unique across all users in an AWS Region. Once done, click the **Save changes** button:
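The admin-only sign-up policy from step 4 can also be applied with the AWS CLI instead of the console. A sketch, in which `USER_POOL_ID` is a hypothetical placeholder for the pool created above:

```shell
# Restrict user creation to administrators on an existing Cognito user pool.
# USER_POOL_ID is a placeholder; list pools with `aws cognito-idp list-user-pools`.
aws cognito-idp update-user-pool \
    --user-pool-id "${USER_POOL_ID}" \
    --admin-create-user-config AllowAdminCreateUserOnly=true
```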
@@ -181,7 +181,7 @@ Please note that it can take a while for the DNS changes to propagate.
 
 ### Step 5: Access logging and monitoring dashboards
 
-After the DNS changes have propagated, you should be able to access the Prometheus and Kibana dashboards by visiting `https://prometheus.${BKPR_DNS_ZONE}` and `https://kibana.${BKPR_DNS_ZONE}` respectively.
+After the DNS changes have propagated, you should be able to access the Prometheus, Kibana and Grafana dashboards by visiting `https://prometheus.${BKPR_DNS_ZONE}`, `https://kibana.${BKPR_DNS_ZONE}` and `https://grafana.${BKPR_DNS_ZONE}` respectively. Log in with the credentials created in the [Create a user](#create-a-user) step.
 
 Congratulations! You can now deploy your applications on the Kubernetes cluster and BKPR will help you manage and monitor them effortlessly.
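Whether the DNS changes have propagated can be checked from the shell before opening a browser. A sketch, assuming `BKPR_DNS_ZONE` is still set in the environment and `curl` is available:

```shell
# Poll one of the dashboards until its DNS record resolves and the
# endpoint answers over HTTPS; interrupt with Ctrl-C if it never does.
until curl --silent --max-time 10 --output /dev/null \
    "https://prometheus.${BKPR_DNS_ZONE}"; do
  echo "Waiting for DNS changes to propagate..."
  sleep 30
done
echo "Prometheus dashboard is reachable"
```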
@@ -220,14 +220,20 @@ Re-run the `kubeprod install` command from the [Deploy BKPR](#step-3-deploy-bkpr
 kubecfg delete kubeprod-manifest.jsonnet
 ```
 
-### Step 2: Delete the Hosted Zone in Route 53
+### Step 2: Wait for the `kubeprod` namespace to be deleted