=================
== The Archive ==
=================

[Learning Kubernetes with Diagrams and Hands-On Practice] Chapter 7. Making Stateless Applications Safe


7.1 Application Health Checks

7.1.1 Readiness probe

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: httpserver
  name: httpserver-readiness
spec:
  containers:
    - name: httpserver
      image: blux2/delayfailserver:1.1
      readinessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 5

~/gitFolders/build-breaking-fixing-kubernetes master*
❯ k apply --filename chapter-07/pod-readiness.yaml
pod/httpserver-readiness created

~/gitFolders/build-breaking-fixing-kubernetes master*
❯ k get pod --watch --namespace default           
NAME                   READY   STATUS    RESTARTS   AGE
httpserver-readiness   0/1     Running   0          13s
httpserver-readiness   1/1     Running   0          13s
httpserver-readiness   0/1     Running   0          28s
^C%                                                                             
~/gitFolders/build-breaking-fixing-kubernetes master* 59s
❯ k logs httpserver-readiness --namespace default 
2025/12/11 20:11:05 Starting server...
2025/12/11 20:11:13 Health Check: OK
2025/12/11 20:11:18 Error: Service Unhealthy
2025/12/11 20:11:23 Error: Service Unhealthy
2025/12/11 20:11:28 Error: Service Unhealthy
2025/12/11 20:11:28 Error: Service Unhealthy
2025/12/11 20:11:33 Error: Service Unhealthy
2025/12/11 20:11:38 Error: Service Unhealthy
2025/12/11 20:11:43 Error: Service Unhealthy
2025/12/11 20:11:48 Error: Service Unhealthy
2025/12/11 20:11:53 Error: Service Unhealthy
2025/12/11 20:11:58 Error: Service Unhealthy
2025/12/11 20:12:03 Error: Service Unhealthy
2025/12/11 20:12:08 Error: Service Unhealthy
2025/12/11 20:12:13 Error: Service Unhealthy
2025/12/11 20:12:18 Error: Service Unhealthy
2025/12/11 20:12:23 Error: Service Unhealthy

~/gitFolders/build-breaking-fixing-kubernetes master*
❯ k delete --filename chapter-07/pod-readiness.yaml --namespace default       
pod "httpserver-readiness" deleted from default namespace

7.1.2 Liveness probe

apiVersion: v1
kind: Pod
metadata:
  labels:
    app: httpserver
  name: httpserver-liveness
spec:
  containers:
    - name: httpserver
      image: blux2/delayfailserver:1.1
      livenessProbe:
        httpGet:
          path: /healthz
          port: 8080
        initialDelaySeconds: 5
        periodSeconds: 5

readinessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 5
livenessProbe:
  httpGet:
    path: /health
    port: 8080
  initialDelaySeconds: 30
  periodSeconds: 5

~/gitFolders/build-breaking-fixing-kubernetes master*
❯ k apply --filename chapter-07/pod-liveness.yaml --namespace default  
pod/httpserver-liveness created

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get pod --watch --namespace default                              
NAME                  READY   STATUS    RESTARTS   AGE
httpserver-liveness   1/1     Running   0          5s
httpserver-liveness   1/1     Running   1 (0s ago)   25s
httpserver-liveness   1/1     Running   2 (1s ago)   51s
httpserver-liveness   1/1     Running   3 (1s ago)   76s
httpserver-liveness   1/1     Running   4 (1s ago)   101s
httpserver-liveness   0/1     CrashLoopBackOff   4 (0s ago)   2m5s
httpserver-liveness   1/1     Running            5 (52s ago)   2m57s
httpserver-liveness   0/1     CrashLoopBackOff   5 (1s ago)    3m21s
^C%                                                                             

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡ 3m 23s
❯ k describe pod httpserver-liveness --namespace default               
Name:             httpserver-liveness
Namespace:        default
Priority:         0
Service Account:  default
Node:             kind-control-plane/172.20.0.2
Start Time:       Fri, 12 Dec 2025 05:18:16 +0900
Labels:           app=httpserver
Annotations:      <none>
Status:           Running
IP:               10.244.0.6
IPs:
  IP:  10.244.0.6
Containers:
  httpserver:
    Container ID:   containerd://9ee9bc1b95e11ab96dcbf3b2b1dcee71a9f6a33d2be7b9c3667e8baa4ca96347
    Image:          blux2/delayfailserver:1.1
    Image ID:       docker.io/blux2/delayfailserver@sha256:84c46dd90117eda4f2545504e8ce9b2e595eef9fedb02aa2e0dcaa0c13cfeba0
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Fri, 12 Dec 2025 05:21:12 +0900
      Finished:     Fri, 12 Dec 2025 05:21:36 +0900
    Ready:          False
    Restart Count:  5
    Liveness:       http-get http://:8080/healthz delay=5s timeout=1s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z7srg (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-z7srg:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  3m50s                 default-scheduler  Successfully assigned default/httpserver-liveness to kind-control-plane
  Normal   Pulled     54s (x6 over 3m50s)   kubelet            Container image "blux2/delayfailserver:1.1" already present on machine
  Normal   Created    54s (x6 over 3m50s)   kubelet            Created container: httpserver
  Normal   Started    54s (x6 over 3m50s)   kubelet            Started container httpserver
  Warning  Unhealthy  30s (x18 over 3m35s)  kubelet            Liveness probe failed: HTTP probe failed with statuscode: 503
  Normal   Killing    30s (x6 over 3m25s)   kubelet            Container httpserver failed liveness probe, will be restarted
  Warning  BackOff    29s (x7 over 105s)    kubelet            Back-off restarting failed container httpserver in pod httpserver-liveness_default(792c30d7-d78d-4475-8048-f67615561f10)

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k delete --filename chapter-07/pod-liveness.yaml --namespace default 
pod "httpserver-liveness" deleted from default namespace

7.1.3 Startup probe

startupProbe:
  httpGet:
    path: /healthz
    port: liveness-port
  failureThreshold: 30
  periodSeconds: 10
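
The snippet above targets a named container port (liveness-port), which must be declared on the container, and other probes stay suppressed until the startup probe succeeds, so a slow-starting app gets up to failureThreshold x periodSeconds = 30 x 10 = 300 seconds to come up. A complete Pod sketch reusing the delayfailserver image from earlier (the Pod name and probe pairing are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: httpserver-startup
spec:
  containers:
    - name: httpserver
      image: blux2/delayfailserver:1.1
      ports:
        - name: liveness-port
          containerPort: 8080
      startupProbe:
        httpGet:
          path: /healthz
          port: liveness-port
        failureThreshold: 30
        periodSeconds: 10
      livenessProbe:
        httpGet:
          path: /healthz
          port: liveness-port
        periodSeconds: 5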

7.1.4 [Breaking It] The State Is Running, but…

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k apply --filename chapter-07/deployment-destruction.yaml --namespace default
deployment.apps/hello-server created

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get pod --namespace default                                                
NAME                            READY   STATUS              RESTARTS   AGE
hello-server-54577b6988-mmj4f   0/2     ContainerCreating   0          7s
hello-server-54577b6988-mqsw8   0/2     ContainerCreating   0          7s
hello-server-54577b6988-w6bsk   0/2     ContainerCreating   0          7s

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get pod --namespace default
NAME                            READY   STATUS              RESTARTS   AGE
hello-server-54577b6988-mmj4f   0/2     ContainerCreating   0          15s
hello-server-54577b6988-mqsw8   1/2     Running             0          15s
hello-server-54577b6988-w6bsk   1/2     Running             0          15s

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get pod --namespace default
NAME                            READY   STATUS    RESTARTS   AGE
hello-server-54577b6988-mmj4f   1/2     Running   0          20s
hello-server-54577b6988-mqsw8   1/2     Running   0          20s
hello-server-54577b6988-w6bsk   1/2     Running   0          20s

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get pod --namespace default
NAME                            READY   STATUS    RESTARTS   AGE
hello-server-54577b6988-mmj4f   1/2     Running   0          4m21s
hello-server-54577b6988-mqsw8   1/2     Running   0          4m21s
hello-server-54577b6988-w6bsk   1/2     Running   0          4m21s

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k describe pod hello-server-54577b6988-w6bsk --namespace default 
Name:             hello-server-54577b6988-w6bsk
Namespace:        default
Priority:         0
Service Account:  default
Node:             kind-control-plane/172.20.0.2
Start Time:       Fri, 12 Dec 2025 05:26:55 +0900
Labels:           app=hello-server
                  pod-template-hash=54577b6988
Annotations:      <none>
Status:           Running
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-server-54577b6988
Containers:
  hello-server:
    Container ID:   containerd://1a8eb613660f60804d619b5b3f3d8dba9008bac72b00520682276df45679c6a5
    Image:          blux2/hello-server:1.6
    Image ID:       docker.io/blux2/hello-server@sha256:035c114efa5478a148e5aedd4e2209bcc46a6d9eff3ef24e9dba9fa147a6568d
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 12 Dec 2025 05:27:00 +0900
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:8080/health delay=10s timeout=1s period=5s #success=1 #failure=3
    Readiness:      http-get http://:8081/health delay=5s timeout=1s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6v55t (ro)
  busybox:
    Container ID:  containerd://9e55e526f62a8d03870e988ee4a190b2d62674f8c073ffaff39192fba01c6fc4
    Image:         busybox:1.36.1
    Image ID:      docker.io/library/busybox@sha256:6b219909078e3fc93b81f83cb438bd7a5457984a01a478c76fe9777a8c67c39e
    Port:          <none>
    Host Port:     <none>
    Command:
      sleep
      9999
    State:          Running
      Started:      Fri, 12 Dec 2025 05:27:08 +0900
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6v55t (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-6v55t:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  5m1s                    default-scheduler  Successfully assigned default/hello-server-54577b6988-w6bsk to kind-control-plane
  Normal   Pulling    5m1s                    kubelet            Pulling image "blux2/hello-server:1.6"
  Normal   Pulled     4m56s                   kubelet            Successfully pulled image "blux2/hello-server:1.6" in 4.749s (4.749s including waiting). Image size: 3650825 bytes.
  Normal   Created    4m56s                   kubelet            Created container: hello-server
  Normal   Started    4m56s                   kubelet            Started container hello-server
  Normal   Pulling    4m56s                   kubelet            Pulling image "busybox:1.36.1"
  Normal   Pulled     4m48s                   kubelet            Successfully pulled image "busybox:1.36.1" in 4.843s (7.665s including waiting). Image size: 1909538 bytes.
  Normal   Created    4m48s                   kubelet            Created container: busybox
  Normal   Started    4m48s                   kubelet            Started container busybox
  Warning  Unhealthy  2m57s (x25 over 4m47s)  kubelet            Readiness probe failed: Get "http://10.244.0.7:8081/health": dial tcp 10.244.0.7:8081: connect: connection refused

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
  labels:
    app: hello-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
        - name: hello-server
          image: blux2/hello-server:1.6
          ports:
            - containerPort: 8080 # <----------------- container port
          readinessProbe:
            httpGet:
              path: /health
              port: 8081 # <------------------------ readiness probe port
            initialDelaySeconds: 5
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health
              port: 8080 # <------------------------ liveness probe port
            initialDelaySeconds: 10
            periodSeconds: 5
        - name: busybox
          image: busybox:1.36.1
          command:
            - sleep
            - "9999"

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k logs hello-server-54577b6988-w6bsk --namespace default 
Defaulted container "hello-server" out of: hello-server, busybox
2025/12/11 20:27:00 Starting server on port 8080
2025/12/11 20:27:10 Health Status OK
2025/12/11 20:27:15 Health Status OK
2025/12/11 20:27:20 Health Status OK
# ...

package main

import (
    "fmt"
    "github.com/prometheus/client_golang/prometheus/promhttp"
    "log"
    "net/http"
    "os"
)

func main() {
    port := os.Getenv("PORT")
    if port == "" {
        port = "8080" // <------------- 환경 변수 PORT 가 없으면 8080 을 사용하도록 설정
    }

    http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
        if r.URL.Path != "/" {
            http.NotFound(w, r)
            return
        }
        fmt.Fprintf(w, "Hello, world! Let's learn Kubernetes!")
    })

    http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
        if r.URL.Path != "/healthz" {
            http.NotFound(w, r)
            return
        }
        w.WriteHeader(http.StatusOK)
        fmt.Fprintf(w, "OK")
        log.Printf("Health Status OK")
    })

    http.Handle("/metrics", promhttp.Handler())

    log.Printf("Starting server on port %s\n", port)
    err := http.ListenAndServe(":"+port, nil)
    if err != nil {
        log.Fatal(err)
    }

}

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k edit deployment --namespace default
deployment.apps/hello-server edited

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡ 20s
❯ k get pod --namespace default
NAME                            READY   STATUS        RESTARTS   AGE
hello-server-54577b6988-mmj4f   1/2     Running       0          8m25s
hello-server-54577b6988-mqsw8   1/2     Running       0          8m25s
hello-server-54577b6988-w6bsk   1/2     Terminating   0          8m25s
hello-server-5fd8bd6855-txf6k   1/2     Running       0          5s
hello-server-5fd8bd6855-zjl54   2/2     Running       0          12s

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get pod --namespace default
NAME                            READY   STATUS    RESTARTS   AGE
hello-server-5fd8bd6855-rrsh5   2/2     Running   0          55s
hello-server-5fd8bd6855-txf6k   2/2     Running   0          62s
hello-server-5fd8bd6855-zjl54   2/2     Running   0          69s

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k delete --filename chapter-07/deployment-destruction.yaml --namespace default
deployment.apps "hello-server" deleted from default namespace

7.2 Assigning Appropriate Resources to the Application

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
  labels:
    app: hello-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
        - name: hello-server
          image: blux2/hello-server:1.6
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "5Gi"
              cpu: "10m"
            limits:
              memory: "5Gi"
              cpu: "10m"
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5

7.2.1 Requesting Container Resource Usage with Resource Requests

resources:
  requests:
    memory: "64Mi"
    cpu: "10m"

7.2.2 Controlling Container Resource Usage with Resource Limits

resources:
  limits:
    memory: "64Mi"
    cpu: "10m"

7.2.3 Resource Units
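
CPU is specified in cores: 1 means one full core, and the m suffix means thousandths, so 100m is 0.1 core. Memory is specified in bytes, either plain or with binary suffixes Ki/Mi/Gi (1Mi = 1024 * 1024 bytes) or decimal suffixes k/M/G. A small illustrative fragment:

resources:
  requests:
    cpu: "250m"      # 0.25 CPU core
    memory: "128Mi"  # 128 * 2^20 bytes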

7.2.4 Pod Quality of Service (QoS) Classes

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k apply --filename chapter-07/pod-resource-handson.yaml --namespace default 
pod/hello-server created

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get pod --namespace default                                              
NAME           READY   STATUS              RESTARTS   AGE
hello-server   0/1     ContainerCreating   0          18s 

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get pod --namespace default
NAME           READY   STATUS    RESTARTS   AGE
hello-server   1/1     Running   0          38s

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get pod hello-server --output jsonpath='{.status.qosClass}' --namespace default
Guaranteed%                                                                     

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k delete --filename chapter-07/pod-resource-handson.yaml --namespace default 
pod "hello-server" deleted from default namespace

7.2.5 [Breaking It] The Pod Broke Again

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡ 16s
❯ k describe node --namespace default         
Name:               kind-control-plane
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=kind-control-plane
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 12 Dec 2025 06:17:42 +0900
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  kind-control-plane
  AcquireTime:     <unset>
  RenewTime:       Fri, 12 Dec 2025 06:19:07 +0900
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 12 Dec 2025 06:18:25 +0900   Fri, 12 Dec 2025 06:17:39 +0900   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 12 Dec 2025 06:18:25 +0900   Fri, 12 Dec 2025 06:17:39 +0900   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 12 Dec 2025 06:18:25 +0900   Fri, 12 Dec 2025 06:17:39 +0900   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 12 Dec 2025 06:18:25 +0900   Fri, 12 Dec 2025 06:18:04 +0900   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.20.0.2
  Hostname:    kind-control-plane
Capacity:
  cpu:                2
  ephemeral-storage:  22268480Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8113864Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  22268480Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8113864Ki
  pods:               110
System Info:
  Machine ID:                 226c3bc6cc9c40d09051f97a535d8602
  System UUID:                b724560d-a167-4eef-b257-3df8ad207aa4
  Boot ID:                    dc3726cf-bad7-454c-99be-cea850fc4b5b
  Kernel Version:             6.8.0-50-generic
  OS Image:                   Debian GNU/Linux 12 (bookworm)
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://2.1.3
  Kubelet Version:            v1.34.0
  Kube-Proxy Version:         
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
ProviderID:                   kind://docker/kind/kind-control-plane
Non-terminated Pods:          (9 in total)
  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
  kube-system                 coredns-66bc5c9577-5xz7n                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     80s
  kube-system                 coredns-66bc5c9577-888sn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     80s
  kube-system                 etcd-kind-control-plane                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         87s
  kube-system                 kindnet-qsl4f                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      80s
  kube-system                 kube-apiserver-kind-control-plane             250m (12%)    0 (0%)      0 (0%)           0 (0%)         87s
  kube-system                 kube-controller-manager-kind-control-plane    200m (10%)    0 (0%)      0 (0%)           0 (0%)         88s
  kube-system                 kube-proxy-jzpd4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
  kube-system                 kube-scheduler-kind-control-plane             100m (5%)     0 (0%)      0 (0%)           0 (0%)         87s
  local-path-storage          local-path-provisioner-7b8c8ddbd6-rccsx       0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                950m (47%)  100m (5%)
  memory             290Mi (3%)  390Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:
  Type    Reason                   Age                From             Message
  ----    ------                   ----               ----             -------
  Normal  Starting                 78s                kube-proxy       
  Normal  Starting                 95s                kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  95s (x8 over 95s)  kubelet          Node kind-control-plane status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    95s (x8 over 95s)  kubelet          Node kind-control-plane status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     95s (x7 over 95s)  kubelet          Node kind-control-plane status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  95s                kubelet          Updated Node Allocatable limit across pods
  Normal  Starting                 87s                kubelet          Starting kubelet.
  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  87s                kubelet          Node kind-control-plane status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    87s                kubelet          Node kind-control-plane status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     87s                kubelet          Node kind-control-plane status is now: NodeHasSufficientPID
  Normal  RegisteredNode           81s                node-controller  Node kind-control-plane event: Registered Node kind-control-plane in Controller
  Normal  NodeReady                68s                kubelet          Node kind-control-plane status is now: NodeReady

~/gitFolders/build-breaking-fixing-kubernetes master ⇡
❯ k apply --filename chapter-07/deployment-resource-handson.yaml --namespace default 
deployment.apps/hello-server created

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get pod --namespace default
NAME                            READY   STATUS    RESTARTS   AGE
hello-server-554cb47c88-dccps   0/1     Pending   0          51s
hello-server-554cb47c88-qn5cc   0/1     Pending   0          51s
hello-server-554cb47c88-shng6   1/1     Running   0          51s

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k describe pod hello-server-554cb47c88-dccps --namespace default         
Name:             hello-server-554cb47c88-dccps
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=hello-server
                  pod-template-hash=554cb47c88
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-server-554cb47c88
Containers:
  hello-server:
    Image:      blux2/hello-server:1.6
    Port:       8080/TCP
    Host Port:  0/TCP
    Limits:
      cpu:     10m
      memory:  5Gi
    Requests:
      cpu:        10m
      memory:     5Gi
    Liveness:     http-get http://:8080/health delay=10s timeout=1s period=5s #success=1 #failure=3
    Readiness:    http-get http://:8080/health delay=5s timeout=1s period=5s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-fjk7c (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-fjk7c:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  95s   default-scheduler  0/1 nodes are available: 1 Insufficient memory. no new claims to deallocate, preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k describe node --namespace default                             
Name:               kind-control-plane
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=kind-control-plane
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=
Annotations:        node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 12 Dec 2025 06:17:42 +0900
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  kind-control-plane
  AcquireTime:     <unset>
  RenewTime:       Fri, 12 Dec 2025 06:21:30 +0900
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 12 Dec 2025 06:21:29 +0900   Fri, 12 Dec 2025 06:17:39 +0900   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 12 Dec 2025 06:21:29 +0900   Fri, 12 Dec 2025 06:17:39 +0900   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 12 Dec 2025 06:21:29 +0900   Fri, 12 Dec 2025 06:17:39 +0900   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Fri, 12 Dec 2025 06:21:29 +0900   Fri, 12 Dec 2025 06:18:04 +0900   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  172.20.0.2
  Hostname:    kind-control-plane
Capacity:
  cpu:                2
  ephemeral-storage:  22268480Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8113864Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  22268480Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8113864Ki
  pods:               110
System Info:
  Machine ID:                 226c3bc6cc9c40d09051f97a535d8602
  System UUID:                b724560d-a167-4eef-b257-3df8ad207aa4
  Boot ID:                    dc3726cf-bad7-454c-99be-cea850fc4b5b
  Kernel Version:             6.8.0-50-generic
  OS Image:                   Debian GNU/Linux 12 (bookworm)
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  containerd://2.1.3
  Kubelet Version:            v1.34.0
  Kube-Proxy Version:         
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
ProviderID:                   kind://docker/kind/kind-control-plane
Non-terminated Pods:          (10 in total)
  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
  default                     hello-server-554cb47c88-shng6                 10m (0%)      10m (0%)    5Gi (64%)        5Gi (64%)      2m5s
  kube-system                 coredns-66bc5c9577-5xz7n                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m45s
  kube-system                 coredns-66bc5c9577-888sn                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m45s
  kube-system                 etcd-kind-control-plane                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m52s
  kube-system                 kindnet-qsl4f                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3m45s
  kube-system                 kube-apiserver-kind-control-plane             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m52s
  kube-system                 kube-controller-manager-kind-control-plane    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m53s
  kube-system                 kube-proxy-jzpd4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
  kube-system                 kube-scheduler-kind-control-plane             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m52s
  local-path-storage          local-path-provisioner-7b8c8ddbd6-rccsx       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m45s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests      Limits
  --------           --------      ------
  cpu                960m (48%)    110m (5%)
  memory             5410Mi (68%)  5510Mi (69%)
  ephemeral-storage  0 (0%)        0 (0%)
  hugepages-1Gi      0 (0%)        0 (0%)
  hugepages-2Mi      0 (0%)        0 (0%)
  hugepages-32Mi     0 (0%)        0 (0%)
  hugepages-64Ki     0 (0%)        0 (0%)
Events:
  Type    Reason                   Age              From             Message
  ----    ------                   ----             ----             -------
  Normal  Starting                 3m43s            kube-proxy       
  Normal  Starting                 4m               kubelet          Starting kubelet.
  Normal  NodeHasSufficientMemory  4m (x8 over 4m)  kubelet          Node kind-control-plane status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    4m (x8 over 4m)  kubelet          Node kind-control-plane status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     4m (x7 over 4m)  kubelet          Node kind-control-plane status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  4m               kubelet          Updated Node Allocatable limit across pods
  Normal  Starting                 3m52s            kubelet          Starting kubelet.
  Normal  NodeAllocatableEnforced  3m52s            kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  3m52s            kubelet          Node kind-control-plane status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    3m52s            kubelet          Node kind-control-plane status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     3m52s            kubelet          Node kind-control-plane status is now: NodeHasSufficientPID
  Normal  RegisteredNode           3m46s            node-controller  Node kind-control-plane event: Registered Node kind-control-plane in Controller
  Normal  NodeReady                3m33s            kubelet          Node kind-control-plane status is now: NodeReady

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get deployment hello-server -o=jsonpath='{.spec.template.spec.containers[0].resources.requests}' --namespace default
{"cpu":"10m","memory":"5Gi"}%                                                   

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k edit deployment hello-server --namespace default
deployment.apps/hello-server edited

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡ 41s
❯ k get deployment hello-server -o=jsonpath='{.spec.template.spec.containers[0].resources.requests}' --namespace default
{"cpu":"10m","memory":"64Mi"}%                                                  

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get pod --namespace default
NAME                           READY   STATUS    RESTARTS   AGE
hello-server-b54f97688-4pdpj   1/1     Running   0          113s
hello-server-b54f97688-c6kk7   1/1     Running   0          59s
hello-server-b54f97688-skn6d   1/1     Running   0          86s

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k delete --filename chapter-07/deployment-resource-handson.yaml --namespace default 
deployment.apps "hello-server" deleted from default namespace

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k apply --filename chapter-07/deployment-memory-leak.yaml --namespace default
deployment.apps/hello-server created

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get pod --namespace default                                                
NAME                           READY   STATUS    RESTARTS   AGE
hello-server-585469975-8cwf6   1/1     Running   0          66s
hello-server-585469975-c4fq4   1/1     Running   0          66s
hello-server-585469975-vrksr   1/1     Running   0          66s

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k port-forward deployment/hello-server 8080:8080 --namespace default
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Handling connection for 8080

# Terminal 2
~
❯ curl localhost:8080
curl: (52) Empty reply from server

~
❯ k get pod --watch --namespace default                               
NAME                           READY   STATUS    RESTARTS   AGE
hello-server-585469975-8cwf6   1/1     Running   0          2m17s
hello-server-585469975-c4fq4   1/1     Running   0          2m17s
hello-server-585469975-vrksr   1/1     Running   0          2m17s
hello-server-585469975-8cwf6   0/1     OOMKilled   0          2m37s
^C%                                                                             

~ 30s
❯ k describe pod hello-server-585469975-8cwf6 --namespace default 
Name:             hello-server-585469975-8cwf6
Namespace:        default
Priority:         0
Service Account:  default
Node:             kind-control-plane/172.20.0.2
Start Time:       Fri, 12 Dec 2025 06:30:26 +0900
Labels:           app=hello-server
                  pod-template-hash=585469975
Annotations:      <none>
Status:           Running
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-server-585469975
Containers:
  hello-server:
    Container ID:   containerd://4b7eb58c1deb814b2b9cb2c3f7349582520ffa4fb4bca672e4fb9ea484bd1ea8
    Image:          blux2/hello-server:1.7
    Image ID:       docker.io/blux2/hello-server@sha256:e34bb060e65c7f5cc58001c7e373e781e481b8875426227c3e1e4ac7709059af
    Port:           8080/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 12 Dec 2025 06:33:17 +0900
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Fri, 12 Dec 2025 06:30:49 +0900
      Finished:     Fri, 12 Dec 2025 06:33:02 +0900
    Ready:          True
    Restart Count:  1
    Limits:
      cpu:     10m
      memory:  100Mi
    Requests:
      cpu:        10m
      memory:     100Mi
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xrvmq (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  kube-api-access-xrvmq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   Guaranteed
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age                  From               Message
  ----    ------     ----                 ----               -------
  Normal  Scheduled  3m8s                 default-scheduler  Successfully assigned default/hello-server-585469975-8cwf6 to kind-control-plane
  Normal  Pulling    3m5s                 kubelet            Pulling image "blux2/hello-server:1.7"
  Normal  Pulled     2m59s                kubelet            Successfully pulled image "blux2/hello-server:1.7" in 5.72s (5.721s including waiting). Image size: 3650985 bytes.
  Normal  Created    31s (x2 over 2m59s)  kubelet            Created container: hello-server
  Normal  Pulled     31s                  kubelet            Container image "blux2/hello-server:1.7" already present on machine
  Normal  Started    17s (x2 over 2m45s)  kubelet            Started container hello-server

~
❯ k get pod hello-server-585469975-8cwf6 --output=jsonpath="{.status.containerStatuses[0].lastState}" --namespace default | jaq .
{
  "terminated": {
    "containerID": "containerd://5837d3c9a51b5aa00903cbcb884a654a8a634c7bd95714ac7085302ac486490e",
    "exitCode": 137,
    "finishedAt": "2025-12-11T21:33:02Z",
    "reason": "OOMKilled",
    "startedAt": "2025-12-11T21:30:49Z"
  }
}

~
❯ k edit deployment/hello-server --namespace default 
deployment.apps/hello-server edited

~
❯ k get pod --namespace default
NAME                            READY   STATUS    RESTARTS   AGE
hello-server-86dff7b688-4ccgg   1/1     Running   0          107s
hello-server-86dff7b688-8pbds   1/1     Running   0          83s
hello-server-86dff7b688-rmtqh   1/1     Running   0          63s

# Terminal 1
~/gitFolders/build-breaking-fixing-kubernetes master* ⇡ 4m 49s
❯ k port-forward deployment/hello-server 8080:8080 --namespace default
Forwarding from 127.0.0.1:8080 -> 8080
Forwarding from [::1]:8080 -> 8080
Handling connection for 8080
^C%                                                                             

# Terminal 2
~
❯ curl localhost:8080
Hello, world! Let's learn Kubernetes!%                                          

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡ 16s
❯ k delete --filename chapter-07/deployment-memory-leak.yaml --namespace default
deployment.apps "hello-server" deleted from default namespace

7.3 Understanding Convenient Pod Scheduling Features

7.3.1 Specifying Nodes with a Node Selector

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
  nodeSelector:
    disktype: ssd
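
This Pod can only be scheduled onto a node that carries the disktype=ssd label, which would be attached with something like (the node name is a placeholder):

k label nodes <node-name> disktype=ssd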

7.3.2 Flexible Pod Scheduling with Affinity and Anti-affinity

Node affinity

apiVersion: v1
kind: Pod
metadata:
  name: node-affinity-pod
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          preference:
            matchExpressions:
              - key: disktype
                operator: In
                values:
                  - ssd
  containers:
    - name: node-affinity-pod
      image: nginx:latest

Pod affinity and Pod anti-affinity

apiVersion: v1
kind: Pod
metadata:
  name: pod-anti-affinity
  labels:
    app: nginx
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
                - key: app
                  operator: In
                  values:
                    - nginx
            topologyKey: kubernetes.io/hostname
  containers:
    - name: nginx
      image: nginx:latest

7.3.3 Configuring Pod Topology Spread Constraints to Distribute Pods

kind: Pod
apiVersion: v1
metadata:
  name: mypod
  labels:
    app: nginx
spec:
  topologySpreadConstraints:
    - maxSkew: 1
      topologyKey: zone
      whenUnsatisfiable: DoNotSchedule
      labelSelector:
        matchLabels:
          app: nginx
  containers:
    - name: nginx
      image: nginx:latest

flowchart TB
%% -----------------------------
%% (1) Current state
%% -----------------------------
    subgraph S1["(1) Current state: per-Node difference in scheduled Pod counts"]
        direction LR
        A["Node A"]
        B["Node B"]
        C["Node C"]
    end
%% Skew computed against the current state
    A <-->|" skew 0 "| B
    B <-->|" skew 1 "| C
    A <-->|" skew 1 "| C

flowchart TB
%% -----------------------------
%% (2) Evaluating candidates against maxSkew
%% -----------------------------
    subgraph S2["(2) With maxSkew: 1, schedule so the skew does not exceed 1"]
        direction TB
        P["New Pod"]
        A["Node A"]
        B["Node B"]
        C["Node C"]
    end
    P -->|X skew would reach 2| A
    P -->|X skew would reach 2| B
    P -->|O skew would be 0| C
    C --> R["Conclusion: schedule on Node C"]

7.3.4 Taints and Tolerations

            Taint                               Toleration
What        A setting applied to a node         A setting applied to a Pod
Meaning     "Contamination" placed on a node    "Tolerance": whether a Pod can accept the taints a node carries

k taint nodes <target node name> <label name>=<label value>:<taint effect>

For example:

k taint nodes node1 disktype=ssd:NoSchedule

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest
      imagePullPolicy: IfNotPresent
  tolerations:
    - key: "disktype"
      value: "ssd"
      operator: "Equal"
      effect: "NoSchedule"
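
For reference, a taint is removed by repeating the same spec with a trailing minus; a sketch against the node tainted above:

k taint nodes node1 disktype=ssd:NoSchedule-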

7.3.5 Tips: Pod Priority and Preemption

~
❯ k describe priorityClasses --namespace default
Name:              system-cluster-critical
Value:             2000000000
GlobalDefault:     false
PreemptionPolicy:  PreemptLowerPriority
Description:       Used for system critical pods that must run in the cluster, but can be moved to another node if necessary.
Annotations:       <none>
Events:            <none>

Name:              system-node-critical
Value:             2000001000
GlobalDefault:     false
PreemptionPolicy:  PreemptLowerPriority
Description:       Used for system critical pods that must not be moved from their current node.
Annotations:       <none>
Events:            <none>
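
Beyond the two built-in system classes, custom classes can be created and referenced from a Pod via spec.priorityClassName; a sketch with an illustrative name and value (user-defined values must not exceed 1000000000):

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
preemptionPolicy: PreemptLowerPriority
description: "A hypothetical class for latency-critical workloads."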

~
❯ k get pods --all-namespaces -o jsonpath="{range.items[?(@.spec.priorityClassName=='system-node-critical')]}{.metadata.name}{'\t'}{.metadata.namespace}{'\n'}{end}"
etcd-kind-control-plane    kube-system
kindnet-qsl4f    kube-system
kube-apiserver-kind-control-plane    kube-system
kube-controller-manager-kind-control-plane    kube-system
kube-proxy-jzpd4    kube-system
kube-scheduler-kind-control-plane    kube-system

7.3.6 [Breaking It] Pod Scheduling Failure

Preparation: using kind

First, prepare a kind cluster for this exercise. (kind/multinode-config.yaml)

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker

~/gitFolders/build-breaking-fixing-kubernetes master ⇡
❯ kind delete cluster
Deleting cluster "kind" ...
Deleted nodes: ["kind-control-plane"]

~/gitFolders/build-breaking-fixing-kubernetes master ⇡
❯ kind create cluster -n kind-multinode --config kind/multinode-config.yaml --image=kindest/node:v1.29.0
Creating cluster "kind-multinode" ...
 ✓ Ensuring node image (kindest/node:v1.29.0) 🖼 
 ✓ Preparing nodes 📦 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-kind-multinode"
You can now use your cluster with:

kubectl cluster-info --context kind-kind-multinode

Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡ 1m 13s
❯ k get node
NAME                           STATUS   ROLES           AGE   VERSION
kind-multinode-control-plane   Ready    control-plane   40s   v1.29.0
kind-multinode-worker          Ready    <none>          22s   v1.29.0
kind-multinode-worker2         Ready    <none>          16s   v1.29.0

Hands-on: a Pod that cannot be scheduled

chapter-07/deployment-schedule-handson.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
  labels:
    app: hello-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    values:
                      - hello-server
                    operator: In
              topologyKey: kubernetes.io/hostname
      containers:
        - name: hello-server
          image: blux2/hello-server:1.8
          ports:
            - containerPort: 8080

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k apply --filename chapter-07/deployment-schedule-handson.yaml --namespace default 
deployment.apps/hello-server created

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get pod --namespace default
NAME                           READY   STATUS    RESTARTS   AGE
hello-server-9c5ff67bd-kdt4s   0/1     Pending   0          31s
hello-server-9c5ff67bd-kg9xk   1/1     Running   0          31s
hello-server-9c5ff67bd-vtgcj   1/1     Running   0          31s

#
# One Pod is stuck in Pending.
# Inspect the details of the Pending Pod.
#

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k describe pod hello-server-9c5ff67bd-kdt4s --namespace default 
Name:             hello-server-9c5ff67bd-kdt4s
Namespace:        default
Priority:         0
Service Account:  default
Node:             <none>
Labels:           app=hello-server
                  pod-template-hash=9c5ff67bd
Annotations:      <none>
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    ReplicaSet/hello-server-9c5ff67bd
Containers:
  hello-server:
    Image:        blux2/hello-server:1.8
    Port:         8080/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2mppr (ro)
Conditions:
  Type           Status
  PodScheduled   False 
Volumes:
  kube-api-access-2mppr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  49s   default-scheduler  0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 node(s) didn't match pod anti-affinity rules. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get deployment hello-server --output=jsonpath="{.spec.template.spec.tolerations}" --namespace default | jaq  

# nothing is printed: no tolerations are set

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get deployment hello-server --output=jsonpath="{.spec.template.spec.affinity}" --namespace default | jaq
{
  "podAntiAffinity": {
    "requiredDuringSchedulingIgnoredDuringExecution": [
      {
        "labelSelector": {
          "matchExpressions": [
            {
              "key": "app",
              "operator": "In",
              "values": [
                "hello-server"
              ]
            }
          ]
        },
        "topologyKey": "kubernetes.io/hostname"
      }
    ]
  }
}

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get nodes -o custom-columns='NAME:.metadata.name,TAINTS-KEY:.spec.taints[*].key'
NAME                           TAINTS-KEY
kind-multinode-control-plane   node-role.kubernetes.io/control-plane
kind-multinode-worker          <none>
kind-multinode-worker2         <none>
The control-plane node is tainted, and the required anti-affinity on kubernetes.io/hostname allows at most one hello-server Pod per node, so only the two workers can each host one replica and the third stays Pending. The options compare as follows:

Option  Test environment                                  Production environment
1       Fine to choose                                    Not recommended: scheduling application servers onto a control-plane node is inappropriate
2       May be hard, depending on the lab environment     Not recommended unless unavoidable, because extra nodes cost money
3       Fine to choose                                    Common practice
4       Fine to choose                                    Common practice
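For reference, option 1 (running the workload on the control-plane node) amounts to adding a toleration for the control-plane taint to the Pod template. A minimal sketch, acceptable in a kind lab but not in production:

tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule

The hands-on below takes a different route and simply scales the Deployment down to match the two schedulable workers.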
~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k scale deployment hello-server --replicas=2 --namespace default 
deployment.apps/hello-server scaled

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get pod --namespace default
NAME                           READY   STATUS    RESTARTS   AGE
hello-server-9c5ff67bd-kg9xk   1/1     Running   0          63m
hello-server-9c5ff67bd-vtgcj   1/1     Running   0          63m

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k delete --filename chapter-07/deployment-schedule-handson.yaml --namespace default 
deployment.apps "hello-server" deleted from default namespace
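
Another common fix, instead of reducing replicas, is to relax the anti-affinity from required to preferred: the scheduler then spreads Pods across nodes when it can, but still schedules them when it cannot. A minimal sketch of the changed affinity stanza (an assumption for illustration, not the book's code):

affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - hello-server
          topologyKey: kubernetes.io/hostname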

7.4 Scaling Applications

          Horizontal scaling                                   Vertical scaling
Concept   Run more application instances concurrently         Give each instance more resources
Example   Spread one server's load across several servers     Raise available memory as the application's memory needs grow
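In kubectl terms, a quick illustration of the two directions (hypothetical values, not from the book):

# Horizontal: run more Pods
k scale deployment hello-server --replicas=5 --namespace default

# Vertical: give each Pod more resources
k set resources deployment hello-server \
    --requests=cpu=100m,memory=128Mi \
    --limits=cpu=200m,memory=256Mi \
    --namespace default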

7.4.1 Horizontal Scaling

Horizontal Pod Autoscaler

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ kind delete cluster --name kind-multinode
Deleting cluster "kind-multinode" ...
Deleted nodes: ["kind-multinode-worker" "kind-multinode-control-plane" "kind-multinode-worker2"]

~/gitFolders/build-breaking-fixing-kubernetes master ⇡
❯ kind create cluster
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.34.0) 🖼
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/

#
# Delete the previous kind-multinode cluster, recreate a default cluster, and continue from there
#

~/gitFolders/build-breaking-fixing-kubernetes master ⇡ 14s
❯ k apply --filename https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.4/components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

~/gitFolders/build-breaking-fixing-kubernetes master ⇡
❯ k patch --namespace kube-system deployment metrics-server --type=json --patch="[{'op': 'add', 'path':'/spec/template/spec/containers/0/args/-', 'value':'--kubelet-insecure-tls'}]"
deployment.apps/metrics-server patched
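# kind's kubelets serve their metrics endpoint with self-signed certificates,
# so metrics-server needs --kubelet-insecure-tls to scrape them
# (acceptable for a local lab, not for production)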

~/gitFolders/build-breaking-fixing-kubernetes master ⇡
❯ k get deployment metrics-server --namespace kube-system
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server   1/1     1            1           2m44s

# Healthy when READY is 1/1 and AVAILABLE is 1
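
Once metrics-server is available, kubectl top should return numbers instead of an error. A quick sanity check:

k top nodes
k top pods --namespace kube-system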
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-handson
  labels:
    app: hello-server
spec:
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
        - name: hello-server
          image: blux2/hello-server:1.8
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "10Mi"
              cpu: "5m"
            limits:
              memory: "10Mi"
              cpu: "5m"
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-server-hpa
spec:
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - resource:
        name: cpu
        target:
          averageUtilization: 50
          type: Utilization
      type: Resource
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-handson
---
apiVersion: v1
kind: Service
metadata:
  name: hello-server-service
spec:
  selector:
    app: hello-server
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
minReplicas, maxReplicas     Minimum and maximum number of Pods the autoscaler may run
metrics                      The metric used as the scaling criterion
target.averageUtilization    Desired average CPU utilization of the application (set to 50, the Pod count is adjusted so that utilization stays at or below 50%)
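
The replica count follows the documented HPA algorithm: desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue). With one replica at 120% utilization and a 50% target, ceil(1 × 120 / 50) = 3, which is exactly the jump from 1 to 3 replicas visible in the watch output below.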
# Terminal 1
~/gitFolders/build-breaking-fixing-kubernetes master ⇡
❯ k apply --filename chapter-07/hpa-hello-server.yaml --namespace default 
deployment.apps/hpa-handson created
horizontalpodautoscaler.autoscaling/hello-server-hpa created
service/hello-server-service created

~/gitFolders/build-breaking-fixing-kubernetes master* ⇡
❯ k get hpa --watch --namespace default                                  
NAME               REFERENCE                TARGETS              MINPODS   MAXPODS   REPLICAS   AGE
hello-server-hpa   Deployment/hpa-handson   cpu: <unknown>/50%   1         10        1          16s
hello-server-hpa   Deployment/hpa-handson   cpu: 0%/50%          1         10        1          61s
hello-server-hpa   Deployment/hpa-handson   cpu: 20%/50%         1         10        1          106s
hello-server-hpa   Deployment/hpa-handson   cpu: 0%/50%          1         10        1          2m1s
#...
# Terminal 2
~ 7s
❯ k --namespace default run --stdin --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://hello-server-service.default.svc.cluster.local:8080; done" 
All commands and output from this session will be recorded in container logs, including credentials and sensitive information passed through the command prompt.
If you don't see a command prompt, try pressing enter.
Hello, world! Let's learn Kubernetes!Hello, world! Let's learn Kubernetes!Hello, world! Let's learn Kubernetes!Hello, world! Let's learn Kubernetes!Hello, world! Let's learn Kubernetes!Hello, world! Let's learn Kubernetes!Hello, world! Let's learn Kubernetes!Hello, world! Let's learn Kubernetes!Hello, world! Let's learn Kubernetes!Hello, world! Let's learn Kubernetes!Hello, world! Let's learn Kubernetes!
#...
# Terminal 1
hello-server-hpa   Deployment/hpa-handson   cpu: 120%/50%        1         10        1          8m1s
hello-server-hpa   Deployment/hpa-handson   cpu: 120%/50%        1         10        3          8m16s
hello-server-hpa   Deployment/hpa-handson   cpu: 220%/50%        1         10        3          8m31s
hello-server-hpa   Deployment/hpa-handson   cpu: 140%/50%        1         10        5          8m46s
hello-server-hpa   Deployment/hpa-handson   cpu: 86%/50%         1         10        9          9m1s
hello-server-hpa   Deployment/hpa-handson   cpu: 15%/50%         1         10        9          9m16s
hello-server-hpa   Deployment/hpa-handson   cpu: 0%/50%          1         10        9          9m31s
hello-server-hpa   Deployment/hpa-handson   cpu: 2%/50%          1         10        9          10m
~/gitFolders/build-breaking-fixing-kubernetes master* ⇡ 13m 32s
❯ k delete --filename chapter-07/hpa-hello-server.yaml --namespace default 
deployment.apps "hpa-handson" deleted from default namespace
horizontalpodautoscaler.autoscaling "hello-server-hpa" deleted from default namespace
service "hello-server-service" deleted from default namespace

7.4.2 Vertical Scaling

Vertical Pod Autoscaler
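
The VPA is a separate component installed from the kubernetes/autoscaler repository, not part of a default cluster. As a hedged sketch, a minimal manifest targeting the earlier Deployment could look like this (assumes the VPA components are installed; "Auto" lets the VPA evict and recreate Pods with updated resource requests):

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: hello-server-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-server
  updatePolicy:
    updateMode: "Auto"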

7.5 Preparing for Node Shutdown

7.5.1 PodDisruptionBudget (PDB): Guaranteeing Application Availability

minAvailable      Minimum number of Pods that must remain healthy
maxUnavailable    Maximum number of Pods that may be unavailable at once
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
  labels:
    app: hello-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
        - name: hello-server
          image: blux2/hello-server:1.8
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: hello-server-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: hello-server

apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
  labels:
    app: hello-server
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    values:
                      - hello-server
                    operator: In
              topologyKey: kubernetes.io/hostname
      containers:
        - name: hello-server
          image: blux2/hello-server:1.8
          ports:
            - containerPort: 8080
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: hello-server-pdb
spec:
  maxUnavailable: 10%
  selector:
    matchLabels:
      app: hello-server
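
To see the PDB do its job, drain a node: evictions that would violate the budget are blocked and retried, so the guaranteed number of Pods stays up throughout. A sketch (the node name is a placeholder):

# Cordon the node and evict its Pods, respecting PodDisruptionBudgets
k drain <node-name> --ignore-daemonsets --delete-emptydir-data

# Allow scheduling on the node again afterwards
k uncordon <node-name>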
