feat: Build 채윤희's service container image #83
Changes from all commits: 65a068c, 1525343, 7ebecef, 2d3ef92, 6e60940, 2dee6f9, 7fac3f0, 345b178, dc1e6be
Dockerfile (new file, +10 lines)

```dockerfile
FROM golang:alpine AS builder
WORKDIR /app
COPY main.go .
RUN CGO_ENABLED=0 go build -ldflags="-s -w" -o main main.go
# -s: strip the symbol table (used by debuggers), -w: strip debugging information

FROM scratch
COPY --from=builder /app/main /main

ENTRYPOINT ["/main"]
```
Makefile (new file, +40 lines)

```makefile
CLUSTER_NAME=my-cluster
APP_NAME=my-app
NAMESPACE=my-namespace
CHART_PATH=./charts

.PHONY: all docker-build docker-test create-cluster delete-cluster install helm-debug pod-debug test

all: docker-build create-cluster install test delete-cluster

docker-build:
	docker build -t $(APP_NAME):latest .

docker-test:
	make docker-build
	docker images $(APP_NAME):latest
	docker run -d -p "8000:8080" --rm --name $(APP_NAME) $(APP_NAME):latest
	sleep 2
	curl http://localhost:8000/healthcheck
	docker stop $(APP_NAME)

create-cluster:
	k3d cluster create ${CLUSTER_NAME} --port "30080:30080@loadbalancer"
	k3d image import ${APP_NAME}:latest -c ${CLUSTER_NAME}
	docker exec k3d-${CLUSTER_NAME}-server-0 sh -c 'echo "nameserver 8.8.8.8" > /etc/resolv.conf'

delete-cluster:
	k3d cluster delete ${CLUSTER_NAME}

install:
	helm install ${APP_NAME} ${CHART_PATH}/ --wait

helm-debug:
	helm template ${CHART_PATH}/ --debug

pod-debug:
	kubectl describe pod

test:
	curl http://localhost:30080/api/v1/dst03106
	curl http://localhost:30080/healthcheck
```
Chart.yaml (new file, +5 lines)

```yaml
apiVersion: v2
name: my-service
description: A minimal Helm chart for my app
version: 0.1.0
appVersion: "1.0"
```
Helm helper templates (new file, +7 lines)

```yaml
{{- define "my-chart.name" -}}
{{- .Chart.Name -}}
{{- end -}}

{{- define "my-chart.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```
Deployment template (new file, +34 lines)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "my-chart.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  # Existing ReplicaSets whose pods are selected by this
  # ... will be the ones affected by this deployment.
  selector:
    matchLabels:
      app: {{ include "my-chart.name" . }}
      tier: backend
    matchExpressions: # ref. https://kubernetes.io/ko/docs/concepts/overview/working-with-objects/labels/
      - {key: environment, operator: NotIn, values: [dev]}
  # Template describes the pods that will be created.
  template:
    metadata:
      labels: # labels and selector are object-typed fields
        app: {{ include "my-chart.name" . }}
        tier: backend
        environment: prod
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.name }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          env:
            - name: PORT
              value: "{{ .Values.service.targetPort }}"
          ports:
            - containerPort: {{ .Values.service.targetPort }} # informational only
              # List of ports to expose from the container.
              # Not specifying a port here DOES NOT prevent that port from being exposed.
              # Any port which is listening on the default "0.0.0.0" address inside a container will be accessible from the network.
```

Comment on lines +31 to +33 (Member):
This containerPort is injected from values.service.targetPort.

Author (Contributor):
I updated containers.env to use values.service.targetPort, the same value as ports.containerPort. (345b178)
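To double-check that wiring after the change, the chart can be rendered locally; this is just a quick verification sketch, assuming the chart lives at ./charts as in the Makefile and the default values.yaml (targetPort: 8080) is used.

```sh
# Render the chart and confirm that the PORT env var and containerPort
# both resolve to .Values.service.targetPort (8080 with the defaults).
helm template ./charts/ | grep -E 'containerPort:|name: PORT' -A 1
```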
Service template (new file, +17 lines)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ include "my-chart.fullname" . }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - name: http
      port: {{ .Values.service.port }}
      targetPort: {{ .Values.service.targetPort }}
      {{- if and (or (eq .Values.service.type "NodePort") (eq .Values.service.type "LoadBalancer")) .Values.service.nodePort }}
      nodePort: {{ .Values.service.nodePort }}
      {{- end }}
      protocol: TCP
  selector: # select which Pods receive traffic, by label
    app: {{ include "my-chart.name" . }}
    tier: backend
```
values.yaml (new file, +12 lines)

```yaml
replicaCount: 1

image:
  name: my-app:latest
  pullPolicy: IfNotPresent

service:
  type: NodePort
  port: 80         # port the Service exposes (used inside the cluster)
  nodePort: 30080  # port reachable from outside the cluster via the node IP
  targetPort: 8080 # port of the container inside the Pod
  # request to port 30080 -> Kubernetes Service (80) -> forwarded to the selected Pod's container port (8080)
```
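That port chain can be checked end to end from the host once `make create-cluster install` has run; a small sketch, assuming the k3d port mapping "30080:30080@loadbalancer" from the Makefile is in place.

```sh
# The Service should list 80:30080/TCP, and the app should answer via the NodePort.
kubectl get svc
curl http://localhost:30080/healthcheck   # expected: {"status": "ok"}
```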
main.go (new file, +38 lines)

```go
package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func healthcheckHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	fmt.Fprint(w, `{"status": "ok"}`)
}

func myHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	fmt.Fprintln(w, `{"message": "Hello world"}`)
}

func getPort() string {
	if port := os.Getenv("PORT"); port != "" {
		return port
	}
	return "8080"
}

func main() {
	http.HandleFunc("/healthcheck", healthcheckHandler)
	http.HandleFunc("/api/v1/dst03106", myHandler)

	port := getPort()
	host := "0.0.0.0"
	log.Printf("Server running on http://%s:%s\n", host, port)
	err := http.ListenAndServe(host+":"+port, nil)
	if err != nil {
		log.Fatalf("Failed to start server: %v", err)
	}
}
```
(Screenshot: external domain access failing because of the k3s container's initial DNS configuration.) In the k3s container started via k3d, a DNS issue blocked access to Docker Hub, so the Kubernetes system images could not be downloaded. As a temporary workaround I pointed the node at Google DNS (8.8.8.8), as in the docker exec line of the create-cluster target.
If there is a cleaner way to solve this kind of DNS problem, I'd appreciate any advice! (@Jack-R-lantern)
I think we may need to look into this together later; I don't use k3d much myself either.
Thank you! I didn't describe my environment in detail: I run a VM with colima and use Docker inside it.
I believe the root cause is that colima sets the DNS to an internal IP instead of Google DNS so that it can hook into the host's network stack, and Docker presumably follows that setting.
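For reference, a possibly cleaner route than rewriting /etc/resolv.conf inside the k3d node would be to set the DNS at the VM or Docker daemon level so that every container inherits it. This is only a sketch, assuming colima's --dns flag and the Docker daemon.json "dns" key behave as documented; it has not been tested against this exact setup.

```sh
# Option 1: have the colima VM itself use a public resolver
colima stop
colima start --dns 8.8.8.8

# Option 2: configure the Docker daemon inside the VM instead,
# e.g. /etc/docker/daemon.json: {"dns": ["8.8.8.8"]}
# then restart the Docker daemon and recreate the k3d cluster:
k3d cluster delete my-cluster
make create-cluster
```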