Quick notes on deploying Helm on Kubernetes
Kubernetes version environment:
[root@m1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:13:03Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.4", GitCommit:"5ca598b4ba5abb89bb773071ce452e33fb66339d", GitTreeState:"clean", BuildDate:"2018-06-06T08:00:59Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
1. Install the Helm client and the server-side Tiller
Client
[root@k8s1 bin]# curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7001  100  7001    0     0   2035      0  0:00:03  0:00:03 --:--:--  2036
[root@k8s1 bin]# chmod 700 get_helm.sh
[root@k8s1 bin]# ./get_helm.sh
Downloading https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
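If the install script cannot reach get.helm.sh, the client can also be installed by hand from the same release tarball (a sketch; version and install path match what the script used above):
wget https://get.helm.sh/helm-v2.14.1-linux-amd64.tar.gz
tar -zxvf helm-v2.14.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm
helm version --client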
Server (Tiller)
Before installing the server side, install socat on every node (the helm client talks to Tiller through kubectl port-forward, which needs socat on the node):
[root@k8s3 ~]# ansible k8snodes -m yum -a "name=socat state=installed" -o -f 6
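A quick check that socat actually landed on every node before continuing (same k8snodes inventory group as above):
ansible k8snodes -m shell -a "socat -V | head -1" -o -f 6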
[root@k8s1 /]# helm init
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
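helm init deploys Tiller as a Deployment named tiller-deploy in kube-system, so its rollout can be watched with plain kubectl (the app=helm label is set by helm init):
kubectl get deploy,pod -n kube-system -l app=helm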
The default Tiller image, hosted on gcr.io, cannot be pulled from here:
pod/tiller-deploy-595f59d579-99xwg 0/1 ImagePullBackOff 0 2m
[root@k8s1 /]# kubectl get pod tiller-deploy-595f59d579-99xwg -o yaml -n kube-system
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: 2019-07-04T01:36:55Z
  generateName: tiller-deploy-595f59d579-
  labels:
    app: helm
    name: tiller
    pod-template-hash: "1519158135"
  name: tiller-deploy-595f59d579-99xwg
  namespace: kube-system
  ownerReferences:
  - apiVersion: extensions/v1beta1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: tiller-deploy-595f59d579
    uid: 34036cfc-9dfc-11e9-a946-000c2992f6b9
  resourceVersion: "2213371"
  selfLink: /api/v1/namespaces/kube-system/pods/tiller-deploy-595f59d579-99xwg
  uid: 34acb835-9dfc-11e9-a946-000c2992f6b9
spec:
  automountServiceAccountToken: true
  containers:
  - env:
    - name: TILLER_NAMESPACE
      value: kube-system
    - name: TILLER_HISTORY_MAX
      value: "0"
    image: gcr.io/kubernetes-helm/tiller:v2.14.1
    imagePullPolicy: IfNotPresent
The fix is to pull the image from a reachable mirror and push it into the private registry. Aliyun hosts a mirror:
[root@k8s2 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1
v2.14.1: Pulling from google_containers/tiller
e7c96db7181b: Pull complete
def2a4ea1207: Pull complete
eba3e5d4aab0: Pull complete
94e8118fc9e9: Pull complete
Digest: sha256:f8002b91997fdc2c15a9c2aa994bea117b5b1683933f3144369862f0883c3c42
Status: Downloaded newer image for registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.14.1
Push it to the private registry (the first push below fails because the tag was typed without the leading "v"):
[root@k8s2 ~]# docker tag ac22eb1f780e re.bcdgptv.com.cn/tiller:v2.14.1
[root@k8s2 ~]# docker push re.bcdgptv.com.cn/tiller:2.14.1
The push refers to repository [re.bcdgptv.com.cn/tiller]
tag does not exist: re.bcdgptv.com.cn/tiller:2.14.1
[root@k8s2 ~]# docker push re.bcdgptv.com.cn/tiller:v2.14.1
The push refers to repository [re.bcdgptv.com.cn/tiller]
44ab9d246256: Pushed
e30f43dc76fd: Pushed
80f3b3bae304: Pushed
f1b5933fe4b5: Pushed
v2.14.1: digest: sha256:f8002b91997fdc2c15a9c2aa994bea117b5b1683933f3144369862f0883c3c42 size: 1163
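If the private registry exposes the standard Docker Registry v2 API, the pushed tag can be verified remotely (a sketch; add credentials or -k as your registry setup requires):
curl -s https://re.bcdgptv.com.cn/v2/tiller/tags/list
The expected response is roughly {"name":"tiller","tags":["v2.14.1"]}.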
Re-run helm init, pointing it at the Tiller image in the private registry (and at an Aliyun mirror of the stable chart repo):
[root@k8s1 helm]# helm init --upgrade --tiller-image re.bcdgptv.com.cn/tiller:v2.14.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/chart
$HELM_HOME has been configured at /root/.helm.
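An equivalent fix, when Tiller is already deployed, is to swap the image on the existing Deployment with plain kubectl (the container inside tiller-deploy is named tiller):
kubectl set image -n kube-system deployment/tiller-deploy tiller=re.bcdgptv.com.cn/tiller:v2.14.1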
The Helm client and the Tiller server must run the same version, otherwise commands fail:
[root@k8s1 helm]# helm list
Error: incompatible versions client[v2.14.1] server[v2.9.1]
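helm version prints both sides for a quick check; if they drift, re-running helm init --upgrade (as above) brings Tiller up to the client version, and recent v2 clients accept --force-upgrade to permit downgrades as well:
helm version --short
helm init --upgrade --tiller-image re.bcdgptv.com.cn/tiller:v2.14.1 --force-upgrade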
Configure a service account and role binding for Tiller:
[root@k8s1 helm]# cat accountrole.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
[root@k8s1 helm]# kubectl create -f accountrole.yaml
serviceaccount "tiller" created
clusterrolebinding.rbac.authorization.k8s.io "tiller" created
[root@k8s1 helm]# kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
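Confirm the patch took effect and that the Tiller pod was re-created with the new service account (the jsonpath expression is just one way to read it back):
kubectl get deploy tiller-deploy -n kube-system -o jsonpath='{.spec.template.spec.serviceAccount}'
kubectl get pod -n kube-system -l app=helm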
Run a command to test the deployment:
[root@k8s1 helm]# helm list
The release list is empty at this point.
2. Create a chart for testing
[root@k8s1 test]# helm create hello-svc
Creating hello-svc
[root@k8s1 test]# ls
hello-svc
[root@k8s1 test]# tree --charset ASCII hello-svc/
hello-svc/
|-- charts
|-- Chart.yaml
|-- templates
| |-- deployment.yaml
| |-- _helpers.tpl
| |-- ingress.yaml
| |-- NOTES.txt
| |-- service.yaml
| `-- tests
| `-- test-connection.yaml
`-- values.yaml
3 directories, 8 files
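Before installing anything, helm lint catches template and values errors early (a standard helm v2 subcommand):
helm lint hello-svc/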
For a quick test deployment, change the image settings. templates/deployment.yaml takes the image name and tag from values.yaml:
[root@k8s1 hello-svc]# cat templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "hello-svc.fullname" . }}
  labels:
{{ include "hello-svc.labels" . | indent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "hello-svc.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "hello-svc.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
    {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
    {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}" # these keys in values.yaml set the image name and tag
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
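To see exactly what this template renders against the current values.yaml, without touching the cluster, either of these standard helm v2 commands works (run from inside the chart directory, as above):
helm template ./
helm install --dry-run --debug ./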
Point the image in values.yaml at the local private registry:
[root@k8s1 hello-svc]# cat values.yaml
# Default values for hello-svc.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: re.bcdgptv.com.cn/nginx
  tag: v1.79
  pullPolicy: IfNotPresent

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths: []
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}
[root@k8s1 hello-svc]# helm install ./
NAME: yummy-kitten
LAST DEPLOYED: Thu Jul 4 16:49:58 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/Deployment
NAME READY UP-TO-DATE AVAILABLE AGE
yummy-kitten-hello-svc 0/1 0 0 0s
==> v1/Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
yummy-kitten-hello-svc ClusterIP 10.254.183.143 <none> 80/TCP 0s
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=hello-svc,app.kubernetes.io/instance=yummy-kitten" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl port-forward $POD_NAME 8080:80
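Individual values can also be overridden at install time instead of editing values.yaml (a sketch; the release name is illustrative):
helm install ./ --name hello-test --set replicaCount=2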
Looking at the cluster, the Service and Pod are up:
[root@k8s1 hello-svc]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/yummy-kitten-hello-svc-77c6ff5b55-6kpt9 1/1 Running 0 4m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.254.0.1 <none> 443/TCP 91d
service/yummy-kitten-hello-svc ClusterIP 10.254.183.143 <none> 80/TCP 4m
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.apps/yummy-kitten-hello-svc 1 1 1 1 4m
NAME DESIRED CURRENT READY AGE
replicaset.apps/yummy-kitten-hello-svc-77c6ff5b55 1 1 1 4m
[root@k8s1 hello-svc]# kubectl describe pod/yummy-kitten-hello-svc-77c6ff5b55-6kpt9
Name: yummy-kitten-hello-svc-77c6ff5b55-6kpt9
Namespace: default
Node: k8s1/192.168.137.71
Start Time: Thu, 04 Jul 2019 16:49:58 +0800
Labels: app.kubernetes.io/instance=yummy-kitten
app.kubernetes.io/name=hello-svc
pod-template-hash=3372991611
Annotations: <none>
Status: Running
IP: 172.30.99.4
Controlled By: ReplicaSet/yummy-kitten-hello-svc-77c6ff5b55
Containers:
hello-svc:
Container ID: docker://e7f54513eca295eba24a2b500db21b6515a475dbdf47fdbf2e1ce0838d82f60d
Image: re.bcdgptv.com.cn/nginx:v1.79
Image ID: docker-pullable://re.bcdgptv.com.cn/nginx@sha256:b1f5935eb2e9e2ae89c0b3e2e148c19068d91ca502e857052f14db230443e4c2
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 04 Jul 2019 16:53:46 +0800
Ready: True
Restart Count: 0
Liveness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7xs9j (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-7xs9j:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-7xs9j
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m default-scheduler Successfully assigned yummy-kitten-hello-svc-77c6ff5b55-6kpt9 to k8s1
Normal SuccessfulMountVolume 4m kubelet, k8s1 MountVolume.SetUp succeeded for volume "default-token-7xs9j"
Normal Pulling 3m kubelet, k8s1 pulling image "re.bcdgptv.com.cn/nginx:v1.79"
Normal Pulled 57s kubelet, k8s1 Successfully pulled image "re.bcdgptv.com.cn/nginx:v1.79"
Normal Created 34s kubelet, k8s1 Created container
Normal Started 23s kubelet, k8s1 Started container
[root@k8s1 hello-svc]# curl http://10.254.183.143
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
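When the test is done, the release can be removed by name; --purge also frees the release name for reuse (standard helm v2 behavior):
helm delete yummy-kitten --purge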