Deploying Services with Helm

[Table of Contents]
  1. Installing Helm
  2. Installing Tiller, the Helm Server Role
  3. Error Notes
    1. Running helm init without the Helm RBAC in place
    2. initialize Helm on both client and server
    3. GCP Insufficient CPU
  4. Inspecting Tiller
  5. Helm Install Example: Consul Service
    1. Installing the online stable/consul chart
    2. Exposing the Consul UI Service on GKE
    3. Adding the loadBalancerSourceRanges parameter to the Helm chart
  6. Other Commands You May Need

Installing Helm

  • macOS:
    brew install kubernetes-helm

  • Linux: from the install script
    $ sudo curl https://raw.githubusercontent.com/helm/helm/master/scripts/get | bash
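
Either way, a quick sanity check that the client binary is on the PATH (Helm v2 syntax, matching the rest of this post):

$ helm version --client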

Installing Tiller, the Helm Server Role

  • Create the RBAC config for Helm
    $ vi helm-rbac-config.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
  • Apply the RBAC config

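    # Note: `kc` here and throughout is assumed to be a shell alias for kubectl
    # (e.g. `alias kc=kubectl`); the post itself never defines it.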
    $ kc apply -f helm-rbac-config.yaml
    serviceaccount/tiller created
    clusterrolebinding.rbac.authorization.k8s.io/tiller created
  • Initialize Helm: helm init

    $ helm init --service-account tiller
    Creating /Users/afu/.helm
    Creating /Users/afu/.helm/repository
    Creating /Users/afu/.helm/repository/cache
    Creating /Users/afu/.helm/repository/local
    Creating /Users/afu/.helm/plugins
    Creating /Users/afu/.helm/starters
    Creating /Users/afu/.helm/cache/archive
    Creating /Users/afu/.helm/repository/repositories.yaml
    Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
    Adding local repo with URL: http://127.0.0.1:8879/charts
    $HELM_HOME has been configured at /Users/afu/.helm.

    Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

    Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
    To prevent this, run `helm init` with the --tiller-tls-verify flag.
    For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
    Happy Helming!
  • To upgrade the Tiller pod later:
    helm init --service-account tiller --upgrade
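
After initialization or an upgrade, one way to confirm that the client and the Tiller pod really run matching versions (the jsonpath query below is just one option for reading the deployed image tag):

$ helm version
$ kc get -n kube-system deploy tiller-deploy \
    -o jsonpath='{.spec.template.spec.containers[0].image}'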

Error Notes

Running helm init without the Helm RBAC in place

Helm gives the feedback below; the cluster administrator needs to grant a suitable permission policy.

$ helm init
$HELM_HOME has been configured at /home/afu/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!

initialize Helm on both client and server

$ helm list
Error: configmaps is forbidden: User "system:serviceaccount:kube-system:default" cannot list resource "configmaps" in API group "" in the namespace "kube-system"

This error occurs because Tiller has insufficient permissions inside the Kubernetes cluster.
An additional ServiceAccount has to be configured, as described in the official documentation.
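
Besides re-running helm init with --service-account after the ServiceAccount and ClusterRoleBinding from the earlier section exist, an already-deployed Tiller can also be pointed at that ServiceAccount by patching its Deployment. A minimal sketch, assuming those RBAC objects are already in place:

$ kc patch deploy -n kube-system tiller-deploy \
    -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'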

GCP Insufficient CPU

# Check pod status after deploying Consul
$ kc get pod
NAME READY STATUS RESTARTS AGE
consul-qpp4r 0/1 Pending 0 0s
consul-server-0 0/1 Pending 0 1m
consul-server-1 0/1 Pending 0 1m
consul-server-2 0/1 Pending 0 1m

# Check the consul-server-0 events from the GKE console
# Stateful Set: consul-server:
0/1 nodes are available: 1 Insufficient cpu.

Cause: the GKE nodes do not have enough CPU available; the node resources need to be adjusted.
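
Two quick ways to confirm and work around this, sketched under the assumption of a GKE cluster; the cluster and node-pool names below are placeholders:

# See how much CPU each node still has available for scheduling
$ kc describe nodes | grep -A 5 Allocatable

# Grow the existing node pool (placeholder names)
$ gcloud container clusters resize my-cluster --node-pool default-pool --num-nodes 3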

Inspecting Tiller

$ helm list

$ kc get -n kube-system pod -l app=helm
NAME READY STATUS RESTARTS AGE
tiller-deploy-6f8d4f6c9c-tj8dd 1/1 Running 0 3m23s

$ kc logs -n kube-system tiller-deploy-6f8d4f6c9c-tj8dd
[main] 2019/01/19 06:13:00 Starting Tiller v2.12.1 (tls=false)
[main] 2019/01/19 06:13:00 GRPC listening on :44134
[main] 2019/01/19 06:13:00 Probes listening on :44135
[main] 2019/01/19 06:13:00 Storage driver is ConfigMap
[main] 2019/01/19 06:13:00 Max history per release is 0
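
The last log line says the storage driver is ConfigMap, so Tiller keeps every release revision as a ConfigMap in kube-system. The label selector below is the one Tiller is commonly documented to use (stated here as an assumption):

$ kc get configmap -n kube-system -l OWNER=TILLER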

Helm Install Example: Consul Service

# Clone the chart repo
$ git clone https://github.com/hashicorp/consul-helm.git
$ cd consul-helm

# Check the Consul version tags
$ git log
$ git tag
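
# (Optional, not in the original) pin to a released chart tag instead of master;
# the v0.5.0 tag name is illustrative, matching the consul-0.5.0 chart shown below
$ git checkout v0.5.0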

# Run Helm
$ helm install --dry-run ./
$ helm install --name consul ./
NAME: consul
LAST DEPLOYED: Sat Jan 19 17:36:07 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul-dns ClusterIP 10.106.118.143 <none> 53/TCP,53/UDP 1s
consul-server ClusterIP None <none> 8500/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 1s
consul-ui ClusterIP 10.96.148.23 <none> 80/TCP 0s

==> v1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
consul 3 3 0 3 0 <none> 0s

==> v1/StatefulSet
NAME DESIRED CURRENT AGE
consul-server 3 3 0s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
consul-dnlpp 0/1 ContainerCreating 0 0s
consul-fwvvs 0/1 ContainerCreating 0 0s
consul-hscvv 0/1 ContainerCreating 0 0s
consul-server-0 0/1 Pending 0 0s
consul-server-1 0/1 Pending 0 0s
consul-server-2 0/1 Pending 0 0s

==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
consul-server N/A 1 0 1s

==> v1/ConfigMap
NAME DATA AGE
consul-client-config 1 1s
consul-server-config 1 1s

Delete the release

$ helm delete --dry-run consul
release "consul" deleted
$ helm ls --all consul
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
consul 1 Sat Jan 19 17:36:07 2019 DELETED consul-0.5.0 default
$ helm del --purge consul
release "consul" deleted


Installing the online stable/consul chart

helm install stable/consul
NAME: youthful-stoat
LAST DEPLOYED: Sat Jan 19 18:07:33 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/ConfigMap
NAME DATA AGE
youthful-stoat-consul-tests 1 0s

==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
youthful-stoat-consul ClusterIP None <none> 8500/TCP,8400/TCP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP,8600/TCP,8600/UDP 0s
youthful-stoat-consul-ui NodePort 10.103.14.242 <none> 8500:30481/TCP 0s

==> v1beta1/StatefulSet
NAME DESIRED CURRENT AGE
youthful-stoat-consul 3 1 0s

==> v1/Pod(related)
NAME READY STATUS RESTARTS AGE
youthful-stoat-consul-0 0/1 Pending 0 0s

==> v1beta1/PodDisruptionBudget
NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE
youthful-stoat-consul-pdb N/A 1 0 0s

==> v1/Secret
NAME TYPE DATA AGE
youthful-stoat-consul-gossip-key Opaque 1 0s


NOTES:
1. Watch all cluster members come up.
$ kubectl get pods --namespace=default -w
2. Test cluster health using Helm test.
$ helm test youthful-stoat-consul
3. (Optional) Manually confirm consul cluster is healthy.
$ CONSUL_POD=$(kubectl get pods -l='release=youthful-stoat-consul' --output=jsonpath={.items[0].metadata.name})
$ kubectl exec $CONSUL_POD consul members --namespace=default | grep server
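
Before exposing the UI externally (next section), it can already be reached from a workstation with a port-forward; a quick sketch using the UI service created by the release above:

$ kc port-forward svc/youthful-stoat-consul-ui 8500:8500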

Exposing the Consul UI Service on GKE

There are two approaches:

  1. Change the consul-ui service type (a quick verification sketch follows after this list)
# vi values.yaml
ui:
  service:
    enabled: true
    type: LoadBalancer

# Apply the change
$ helm upgrade consul ./
  2. Create a consul Ingress (not yet working)
# ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: consul-ui
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: consul-ui
          servicePort: 80
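
For the first approach, GKE provisions an external load balancer once the upgrade is applied; one way to watch for the external IP to show up (service name as rendered by this chart):

$ kc get svc consul-ui -w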

Adding the loadBalancerSourceRanges parameter to the Helm chart

In this case, loadBalancerSourceRanges was not originally defined in the chart.

  • First, add the values to values.yaml
# fragment of values.yaml, under the existing ui: key
service:
  enabled: true
  type: LoadBalancer
  loadBalancerSourceRanges:
  - 61.216.133.43/32
  • Then add the parameter to the corresponding templates/ui-service.yaml
spec:
  ports:
    - name: http
      port: 80
      targetPort: 8500
  {{- if .Values.ui.service.type }}
  type: {{ .Values.ui.service.type }}
  {{- end }}
  {{- if .Values.ui.service.loadBalancerSourceRanges }}
  loadBalancerSourceRanges: {{ .Values.ui.service.loadBalancerSourceRanges }}
  {{- end }}
  • Upgrade the Helm release

helm upgrade consul ./
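
After the upgrade, one way to confirm that the rendered Service actually picked up the restriction (release and service names as used above):

$ kc get svc consul-ui -o jsonpath='{.spec.loadBalancerSourceRanges}'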


Other Commands You May Need

# list releases
helm list

# install a chart archive
helm install

# inspect a chart
helm inspect

# download a named release
helm get

# given a release name, delete the release from Kubernetes
helm delete

# fetch release history
helm history

# displays the status of the named release
helm status [flags] RELEASE-NAME

# test a release
helm test [RELEASE] [flags]

# print the client/server version information
helm version
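
A few more commands that are not in the original list but often come up alongside the ones above (same Helm v2 syntax):

# refresh the locally cached chart index from the configured repos
helm repo update

# search the configured repos for a chart by keyword
helm search consul

# roll a release back to a previous revision
helm rollback RELEASE-NAME REVISION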