In this project, I deployed a Node.js application with MongoDB on an AWS EKS cluster and implemented a DevOps pipeline built around two Git repos. The first repo is managed by Jenkins with two pipelines: the first automates code quality analysis, builds the Docker image, pushes it to Docker Hub, and triggers the second pipeline, which updates the manifest files stored in the other repo. The second repo contains the Kubernetes manifests and is linked to ArgoCD for continuous deployment to the EKS cluster; ArgoCD automates deployment whenever the manifest files change. I used HTTPS to secure the Jenkins and SonarQube servers and the Kubernetes ingress, and deployed Prometheus and Grafana via a Helm chart on the cluster to monitor both the application and the EKS cluster itself.
- Docker
- SonarQube
- Certbot
- eksctl
- kubectl
- Kubernetes (AWS EKS)
- Jenkins
- ArgoCD
- cert-manager
- Helm
- Prometheus
- Grafana
- Configure servers
  - first server for Jenkins
  - second server for SonarQube
  - use the userData scripts attached in this repo to install Jenkins and SonarQube
  - note: I added an Nginx configuration to act as a reverse proxy; the TLS certificates will be installed on these servers later (a sample server block is sketched below)
- create two DNS A records and map them to the two servers' IPs
- install the certbot utility to obtain and apply the TLS certificates
```bash
sudo snap install core; sudo snap refresh core
sudo apt remove certbot
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
sudo certbot --nginx
# you will be prompted for the domain name, email, and other details;
# certbot then configures HTTPS/TLS in Nginx automatically
```
- verify that the TLS certificates were installed correctly
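For reference, here is a minimal sketch of the Nginx reverse-proxy configuration mentioned above. The file path, domain, and upstream port are placeholders (Jenkins listens on 8080 by default, SonarQube on 9000), and certbot rewrites this file when it installs the certificate:

```nginx
# /etc/nginx/sites-available/jenkins -- example path and names
server {
    listen 80;
    server_name jenkins.example.com;  # CHANGE ME to your DNS record

    location / {
        # forward traffic to Jenkins on its default port (use 9000 for SonarQube)
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```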
-
Configure SonarQube and Jenkins and create two pipelines
- install the required SonarQube plugin in Jenkins
- create a project in SonarQube; its name and key are used in the SonarQube stage of the Jenkinsfile
- add the domain name (or IP) of your SonarQube server to the SonarQube configuration on the Jenkins server
- create a quality gate in SonarQube to stop the pipeline if the bugs or any other metric exceed the threshold
- create the first pipeline to test the code quality, build and push the image to Docker Hub, and trigger the second pipeline (a sketch follows below)
- create the second pipeline to update the Kubernetes manifest files
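As a rough sketch of what the first pipeline can look like (the credentials ID `dockerhub`, the SonarQube server name `sonarqube`, the image name, and the downstream job name `update-manifests` are all assumptions, not taken from this repo):

```groovy
pipeline {
    agent any
    environment {
        IMAGE = 'mydockerhubuser/node-app'  // hypothetical Docker Hub repository
    }
    stages {
        stage('Code Quality Analysis') {
            steps {
                // 'sonarqube' must match the server name configured in Jenkins
                withSonarQubeEnv('sonarqube') {
                    sh 'npx sonar-scanner -Dsonar.projectKey=node-app'
                }
            }
        }
        stage('Quality Gate') {
            steps {
                // aborts the pipeline if the SonarQube quality gate fails
                timeout(time: 5, unit: 'MINUTES') {
                    waitForQualityGate abortPipeline: true
                }
            }
        }
        stage('Build and Push Image') {
            steps {
                withCredentials([usernamePassword(credentialsId: 'dockerhub',
                        usernameVariable: 'DOCKER_USER', passwordVariable: 'DOCKER_PASS')]) {
                    sh '''
                        docker build -t $IMAGE:$BUILD_NUMBER .
                        echo "$DOCKER_PASS" | docker login -u "$DOCKER_USER" --password-stdin
                        docker push $IMAGE:$BUILD_NUMBER
                    '''
                }
            }
        }
        stage('Trigger Manifest Update') {
            steps {
                // kicks off the second pipeline with the new image tag
                build job: 'update-manifests',
                      parameters: [string(name: 'IMAGE_TAG', value: "${BUILD_NUMBER}")]
            }
        }
    }
}
```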
-
Create the AWS EKS cluster using the eksctl tool
- first, add your AWS credentials so that eksctl can use them to build the EKS cluster (see the sketch after the command below)
- install kubectl to communicate with the cluster
```bash
eksctl create cluster \
  --name my-eks \
  --region your-region \
  --version your-cluster-version \
  --nodegroup-name my-eks-node-group \
  --node-type t3.small \
  --nodes 2 \
  --ssh-public-key your/public/key/path \
  --nodes-min 1 \
  --nodes-max 3 \
  --node-private-networking=false
```
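A sketch of the credentials step and a quick sanity check once the cluster comes up (the profile and cluster names are the same placeholders used above):

```bash
# make your AWS credentials available to eksctl
aws configure --profile my-profile
export AWS_PROFILE=my-profile

# after creation, point kubectl at the new cluster and verify the nodes
aws eks update-kubeconfig --name my-eks --region your-region
kubectl get nodes
```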
-
Install the CSI driver for EKS using eksctl to provision EBS volumes for the application
```bash
# We have to do some steps before installing the CSI driver
# to authorize EKS to provision EBS volumes

# variables used to create EKS
export AWS_PROFILE="my-profile"       # CHANGE ME IF YOU HAVE MULTIPLE AWS ACCOUNTS
export EKS_CLUSTER_NAME="my-cluster"  # CHANGE ME
export EKS_REGION="us-east-2"         # CHANGE ME
export EKS_VERSION="1.26"             # CHANGE ME IF YOU NEED

# variables used in automation
export ROLE_NAME="${EKS_CLUSTER_NAME}_EBS_CSI_DriverRole"
export ACCOUNT_ID=$(aws sts get-caller-identity \
  --query "Account" \
  --output text)
echo ${ACCOUNT_ID}
export ACCOUNT_ROLE_ARN="arn:aws:iam::$ACCOUNT_ID:role/$ROLE_NAME"

# Add OIDC Provider Support
eksctl utils associate-iam-oidc-provider \
  --cluster $EKS_CLUSTER_NAME \
  --region $EKS_REGION \
  --approve

# AWS managed policy for the CSI driver service account to make EBS API calls
POLICY_ARN="arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy"

# AWS IAM role bound to a Kubernetes service account
eksctl create iamserviceaccount \
  --name "ebs-csi-controller-sa" \
  --namespace "kube-system" \
  --cluster $EKS_CLUSTER_NAME \
  --region $EKS_REGION \
  --attach-policy-arn $POLICY_ARN \
  --role-only \
  --role-name $ROLE_NAME \
  --approve

# Create the addon
eksctl create addon \
  --name "aws-ebs-csi-driver" \
  --cluster $EKS_CLUSTER_NAME \
  --region=$EKS_REGION \
  --service-account-role-arn $ACCOUNT_ROLE_ARN \
  --force

# Get the status of the driver; it must be STATUS=ACTIVE
eksctl get addon \
  --name "aws-ebs-csi-driver" \
  --region $EKS_REGION \
  --cluster $EKS_CLUSTER_NAME

# You can check on the running EBS CSI pods with the following command:
kubectl get pods \
  --namespace "kube-system" \
  --selector "app.kubernetes.io/name=aws-ebs-csi-driver"

# note: there is another way to install the CSI driver, using a Helm chart :)
```
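Once the driver is ACTIVE, the application's PersistentVolumeClaims can reference a StorageClass backed by it. A minimal example, where the name `ebs-sc` and the gp3 volume type are assumptions:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc                # hypothetical name; reference it from your MongoDB PVC
provisioner: ebs.csi.aws.com  # the CSI driver installed above
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3                   # EBS volume type
```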
-
Install the Nginx ingress controller using Helm
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace
```
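To confirm the controller is up and to grab the load balancer hostname you will need for the ingress DNS record later:

```bash
kubectl get pods --namespace ingress-nginx
# the EXTERNAL-IP / hostname of this service is the target for the DNS record
kubectl get svc --namespace ingress-nginx ingress-nginx-controller
```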
-
Install cert-manager to manage the TLS certificates
```bash
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.3/cert-manager.yaml
```
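You can verify that the cert-manager components came up before moving on:

```bash
# expect the cert-manager, cainjector, and webhook pods to be Running
kubectl get pods --namespace cert-manager
```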
-
Deploy the manifest files using ArgoCD
- the second repo, which is connected to ArgoCD, is here: https://github.com/AbdelrhmanAli123/ArgoCD-Repo
- add your repo in ArgoCD
- create an application in ArgoCD and press the Sync button (or declare it as a manifest, as sketched below)
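Instead of clicking through the UI, the application can also be declared as a manifest. A sketch, assuming the manifests sit at the repo root and deploy into a `demo` namespace (both assumptions):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: node-app                    # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/AbdelrhmanAli123/ArgoCD-Repo
    targetRevision: HEAD
    path: .                         # assumes manifests at the repo root
  destination:
    server: https://kubernetes.default.svc
    namespace: demo                 # CHANGE ME to your app namespace
  syncPolicy:
    automated:                      # auto-sync on manifest changes
      prune: true
      selfHeal: true
```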
-
Add another DNS record for your ingress controller, mapped to the controller's load balancer hostname
-
Create ingress resource and apply TLS certificate
- create the following resources (ingress resource, ClusterIssuer, Certificate)
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: kubeissuer
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: [email protected] # CHANGE ME
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: kubeissuer
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            class: nginx
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: kubecert
  namespace: demo # CHANGE ME
spec:
  secretName: demo
  issuerRef:
    name: kubeissuer
    kind: ClusterIssuer
  commonName: www.myapp.balloapi.online # CHANGE ME
  dnsNames:
    - www.myapp.balloapi.online # CHANGE ME
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: kubeissuer
    kubernetes.io/ingress.class: nginx
  name: kube-certs-ingress
  namespace: demo # CHANGE ME
spec:
  rules:
    - host: www.myapp.balloapi.online # CHANGE ME
      http:
        paths:
          - backend:
              service:
                name: kube-certs # CHANGE ME
                port:
                  number: 80 # CHANGE ME
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - www.myapp.balloapi.online # CHANGE ME
      secretName: kubeissuer
```
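After applying the manifests, you can watch cert-manager solve the HTTP-01 challenge and issue the certificate (the namespace and names match the examples above):

```bash
kubectl get certificate --namespace demo
# READY should become True; describe shows the challenge progress if it does not
kubectl describe certificate kubecert --namespace demo
```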
Try to access the application to verify that the TLS certificate was applied correctly!
-
Install Prometheus and Grafana using Helm
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack

# with the release named "prometheus", the Grafana service is "prometheus-grafana"
kubectl port-forward service/prometheus-grafana 3000:80
# you can change the service type to NodePort instead of using port-forward
```
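To log in to Grafana, fetch the generated admin password; the secret name below assumes the Helm release was named `prometheus` as above:

```bash
# username is "admin"; the secret is created by the kube-prometheus-stack chart
kubectl get secret prometheus-grafana \
  --output jsonpath="{.data.admin-password}" | base64 --decode; echo
```
-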
Display the Grafana metrics
Congratulations on completing the project! By following the detailed steps outlined in this README, you've successfully deployed a sample web application using ArgoCD for continuous deployment.
