Setting Up Helm and Swift on Kubernetes
Background
The previous article gave a quick introduction to setting up and configuring minikube, the single-node Kubernetes environment for development and testing. I hadn't run it for quite a while; today I downloaded the latest version and found it wouldn't start, so I dutifully rolled minikube back to 0.25.2. Since I've recently been using Helm and Swift to deploy and release projects, this post is a brief record of the setup steps.
Concepts
Helm
Helm is a package manager for deploying and managing applications on a Kubernetes cluster. It describes a project with Chart files, hides some of Kubernetes' underlying concepts, and offers features such as version rollback and easy upgrades. Helm is split into a client and a server: the server, Tiller, runs inside the Kubernetes cluster and manages, deploys, and updates the applications in it; the client is the kubernetes-helm command-line tool, which connects to and communicates with Tiller.
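As a quick illustration of that versioned-release workflow (the chart and release names here are hypothetical examples, not from the original post):

$ helm install stable/nginx-ingress --name my-ingress    # deploy; creates revision 1
$ helm upgrade my-ingress stable/nginx-ingress --set controller.replicaCount=2    # revision 2
$ helm rollback my-ingress 1    # roll back to revision 1
$ helm history my-ingress      # list all revisions of the release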
Swift
Calling Helm's interfaces directly from application code is awkward, for example because Tiller only speaks the gRPC protocol. So the community built Swift, a proxy that wraps Tiller in a RESTful HTTP API, making it easy for any language to communicate with and operate Tiller.
Setup
Environment used in this post:
kubernetes 1.9.4
minikube 0.25.2
helm 2.12.1
swift 0.10.0
The Swift version must match the Helm version; see https://github.com/appscode/swift for the compatibility table.
Start the minikube environment
minikube start --kubernetes-version v1.9.4 --extra-config=apiserver.Authorization.Mode=RBAC
- RBAC mode must be enabled, otherwise the later installation steps run into all sorts of problems; for example, the default mode lacks the cluster-admin cluster role, so Swift cannot be installed (a quick RBAC sanity check is sketched right after this list).
- If installing Swift via Helm fails with an error like the following, this is the cause:
Error: release my-searchlight failed: clusterroles.rbac.authorization.k8s.io "my-searchlight" is forbidden: attempt to grant extra privileges: [PolicyRule{APIGroups:["apiextensions.k8s.io"], Resources:["customresourcedefinitions"], Verbs:["*"]} PolicyRule{APIGroups:["extensions"], Resources:["thirdpartyresources"], Verbs:["*"]} PolicyRule{APIGroups:["monitoring.appscode.com"], Resources:["*"], Verbs:["*"]} PolicyRule{APIGroups:["storage.k8s.io"], Resources:["*"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["secrets"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["secrets"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["componentstatuses"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["componentstatuses"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["persistentvolumes"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["persistentvolumes"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["persistentvolumeclaims"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["persistentvolumeclaims"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["pods"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["pods"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["pods"], Verbs:["patch"]} PolicyRule{APIGroups:[""], Resources:["pods"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources:["nodes"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["nodes"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["nodes"], Verbs:["patch"]} PolicyRule{APIGroups:[""], Resources:["nodes"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources:["namespaces"], Verbs:["get"]} PolicyRule{APIGroups:[""], Resources:["namespaces"], Verbs:["list"]} PolicyRule{APIGroups:[""], Resources:["namespaces"], Verbs:["patch"]} PolicyRule{APIGroups:[""], Resources:["namespaces"], Verbs:["watch"]} PolicyRule{APIGroups:[""], Resources:["pods/exec"], Verbs:["create"]} PolicyRule{APIGroups:[""], Resources:["events"], Verbs:["create"]} PolicyRule{APIGroups:[""], Resources:["events"], Verbs:["list"]}] user=&{system:serviceaccount:kube-system:tilleruser a61ce1ed-0a6d-11e9-babc-0800274b952b [system:serviceaccounts system:serviceaccounts:kube-system system:authenticated] map[]} ownerrules=[] ruleResolutionErrors=[clusterroles.rbac.authorization.k8s.io "cluster-admin" not found]
- Kubernetes 1.9.4 is used because 1.10.0 no longer ships the kube-dns module by default, which is inconvenient here: Swift relies on DNS to reach Tiller.
- As in the previous article, configure a proxy so you can pull resources hosted outside the firewall.
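Before moving on, a quick sanity check that RBAC is actually enabled (my addition, not from the original post): the RBAC API group should be served.

$ kubectl api-versions | grep rbac.authorization.k8s.io
rbac.authorization.k8s.io/v1
rbac.authorization.k8s.io/v1beta1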
Install the Helm client
brew install kubernetes-helm
helm version
Install kube-dns
- Wait for kube-dns to finish installing:
kubectl get deployment -n kube-system --watch
- Create the serviceaccount:
kubectl create serviceaccount kube-dns -n kube-system
- Bind the kube-dns deployment to that serviceaccount:
kubectl patch deployment kube-dns -n kube-system -p '{"spec":{"template":{"spec":{"serviceAccountName":"kube-dns"}}}}'
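To confirm the patch took effect, you can read the serviceAccountName back out of the deployment (a verification step I've added, not in the original):

$ kubectl get deployment kube-dns -n kube-system -o jsonpath='{.spec.template.spec.serviceAccountName}'
kube-dns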
- Wait for the DNS pods to start:
kubectl get pods -n kube-system --watch
(screenshot: dns pods)
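To check that cluster DNS actually resolves, the official debugging guide linked at the end of this post runs nslookup from a busybox pod (busybox:1.28 specifically, since nslookup is broken in newer busybox images); output along these lines means DNS is working:

$ kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local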
Deploy Tiller
- Tiller is Helm's server side; its version should match the client's.
- Before deploying, Tiller needs permission to operate on the cluster; create it as follows:
$ kubectl create serviceaccount tiller --namespace kube-system
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
$ helm version --short
Client: v2.12.1+g02a47c7
Server: v2.12.1+g02a47c7
- Wait for the Tiller deployment to complete.
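The original doesn't give a command for this step; a watch like the earlier ones should work, assuming the app=helm,name=tiller labels that helm init applies to the Tiller pod by default:

$ kubectl get pods -n kube-system -l app=helm,name=tiller --watch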
(screenshot: tiller pod)
Deploy Swift
- First add the appscode repository to Helm:
$ helm repo add appscode https://charts.appscode.com/stable/
$ helm repo update
$ helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://127.0.0.1:8879/charts
appscode https://charts.appscode.com/stable/
- Search for swift in Helm:
$ helm search swift
NAME CHART VERSION APP VERSION DESCRIPTION
appscode/swift 0.10.0 0.10.0 Swift by AppsCode - Ajax friendly Helm Tiller Proxy
stable/swift 0.6.3 0.7.3 DEPRECATED swift by AppsCode - Ajax friendly Helm Tiller ...
- Install version 0.10.0:
helm install appscode/swift --name my-swift
- Check that the installation finished:
kubectl get pods --all-namespaces -l app=swift --watch
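You can also ask Helm itself what the release deployed; helm status lists the resources the chart created and their readiness:

$ helm status my-swift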
(screenshot: swift pods)
Test
- Get the service to check its IP and ports:
$ kubectl get svc swift-my-swift
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
swift-my-swift NodePort 10.107.55.194 <none> 9855:31743/TCP,50055:32523/TCP,56790:30092/TCP 3h58m
- Run minikube ssh, then access the service from inside the minikube VM; a successful response means everything is OK:
$ curl http://10.107.55.194:9855/tiller/v2/version/json
{"Version":{"sem_ver":"v2.12.1","git_commit":"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e","git_tree_state":"clean"}}
- The service's default Type is actually ClusterIP; to make it reachable from the host, I changed it to NodePort:
kubectl patch svc swift-my-swift -p '{"spec":{"type":"NodePort"}}'
- Try it from the host:
$ minikube ip
192.168.99.100
$ curl http://192.168.99.100:31743/tiller/v2/version/json
{"Version":{"sem_ver":"v2.12.1","git_commit":"02a47c7249b1fc6d8fd3b94e6b4babf9d818144e","git_tree_state":"clean"}}%
- Check in the dashboard:
(screenshot: dashboard)
Done!
PS: At first I started minikube with a bare minikube start and wasted half a day because of it...
To test whether kube-dns is working, see the official guide: https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
Related errors
- Swift cannot resolve the internal DNS name (from the Swift pod logs):
I1228 09:25:08.241821 1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, TRANSIENT_FAILURE
I1228 09:25:08.243483 1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, CONNECTING
W1228 09:25:28.242336 1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy.kube-system.svc:44134 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I1228 09:25:28.242368 1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, TRANSIENT_FAILURE
I1228 09:25:28.242629 1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, CONNECTING
W1228 09:25:53.349620 1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy.kube-system.svc:44134 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I1228 09:25:53.349990 1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, TRANSIENT_FAILURE
I1228 09:25:53.350133 1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, CONNECTING
W1228 09:26:32.635786 1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy.kube-system.svc:44134 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: i/o timeout". Reconnecting...
I1228 09:26:32.635949 1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, TRANSIENT_FAILURE
I1228 09:26:32.636553 1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, CONNECTING
W1228 09:27:12.647474 1 clientconn.go:1158] grpc: addrConn.createTransport failed to connect to {tiller-deploy.kube-system.svc:44134 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp: lookup tiller-deploy.kube-system.svc on 10.96.0.10:53: read udp 172.17.0.4:58290->10.96.0.10:53: i/o timeout". Reconnecting...
I1228 09:27:12.647527 1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, TRANSIENT_FAILURE
I1228 09:27:44.000042 1 pickfirst.go:71] pickfirstBalancer: HandleSubConnStateChange: 0xc00043da40, CONNECTING
W1228 09:28:08.235280 1 clientconn.go:830] Failed to dial tiller-deploy.kube-system.svc:44134: grpc: the connection is closing; please retry.
- After minikube started, there was no kube-dns or kubernetes-dashboard at all, even though minikube addons list showed both as enabled. See https://github.com/kubernetes/minikube/issues/2619 for the workaround: manually install the kube-dns components to get them working.
References
[1] https://github.com/appscode/swift/blob/master/docs/setup/rbac.md
[2] https://appscode.com/products/swift/0.10.0/guides/api/
Author: candyleer
Original (Chinese): https://www.jianshu.com/p/8d40fd9422e2