One-Click Kubernetes Deployment with Vagrant and CoreOS
Local environment requirements
vagrant 1.7
virtualbox 5.0
Deploying a supported kubernetes cluster
One-click deployment with an integrated DNS service
Layout:
Host | Role | Services |
---|---|---|
e1 | etcd server | etcd2 |
c1 | kubernetes controller | kubelet, plus (proxy, apiserver, kube-scheduler, podMaster) containers |
w1 | kubernetes worker node | kubelet, plus a (proxy) container |
Configuring kubectl on the local host
Download the kubernetes client:
```shell
ARCH=darwin; wget https://storage.googleapis.com/kubernetes-release/release/v1.0.6/bin/$ARCH/amd64/kubectl
```
Set the ARCH environment variable according to your host operating system; valid values are "linux" or "darwin".
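Instead of hard-coding ARCH, it can be derived from the current OS. This is a small sketch, not part of the original guide: `uname -s` prints "Linux" or "Darwin", and lowercasing it yields exactly the values the download URL expects.

```shell
# Derive ARCH ("linux" or "darwin") from the running OS.
ARCH=$(uname -s | tr '[:upper:]' '[:lower:]')
echo "$ARCH"
```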
After the download completes, run:
```shell
$ chmod +x kubectl
$ mv kubectl /usr/local/bin/kubectl
```
Configure kubectl:
```shell
$ kubectl config set-cluster vagrant --server=https://172.17.4.101:443 --certificate-authority=${PWD}/ssl/ca.pem
$ kubectl config set-credentials vagrant-admin --certificate-authority=${PWD}/ssl/ca.pem --client-key=${PWD}/ssl/admin-key.pem --client-certificate=${PWD}/ssl/admin.pem
$ kubectl config set-context vagrant --cluster=vagrant --user=vagrant-admin
$ kubectl config use-context vagrant
```
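For reference, the four commands above write roughly the following entry into your kubeconfig file (a sketch; `/path/to` stands in for whatever `${PWD}` expands to on your machine):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: vagrant
  cluster:
    server: https://172.17.4.101:443
    certificate-authority: /path/to/ssl/ca.pem
users:
- name: vagrant-admin
  user:
    client-certificate: /path/to/ssl/admin.pem
    client-key: /path/to/ssl/admin-key.pem
contexts:
- name: vagrant
  context:
    cluster: vagrant
    user: vagrant-admin
current-context: vagrant
```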
The client is now ready; once the kubernetes cluster has finished starting, you can use kubectl directly from this host.
Download the kubernetes cluster installer
Fetch the Vagrant-based configuration from GitHub.
Official CoreOS version (requires that your local machine can reach gcr.io directly):
```shell
$ git clone https://github.com/coreos/coreos-kubernetes.git
$ cd coreos-kubernetes/multi-node/
```
China-adapted fork (works around blocked registries by pulling all images from hub.docker.com instead):
```shell
$ git clone https://github.com/shenshouer/coreos-kubernetes.git
$ cd coreos-kubernetes/multi-node/
```
It also adds support for kubernetes v1.2.0-alpha.2.
Start the machines
Copy config.rb.sample to config.rb and adjust the following values:
```ruby
#$update_channel="alpha"
#$controller_count=1
#$controller_vm_memory=512
#$worker_count=1
#$worker_vm_memory=512
#$etcd_count=1
#$etcd_vm_memory=512
```
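Each setting takes effect once you uncomment it. A minimal sketch of editing one value from the shell (the here-doc stands in for your real config.rb; variable names follow the sample above):

```shell
# Work in a scratch directory with a stand-in config.rb.
cd "$(mktemp -d)"
cat > config.rb <<'EOF'
#$update_channel="alpha"
#$worker_count=1
EOF

# Uncomment $worker_count and raise it to 2 worker VMs.
sed -i.bak 's/^#\$worker_count=1/$worker_count=2/' config.rb
cat config.rb
```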
Run vagrant up to start the cluster:
```
Bringing machine 'c1' up with 'virtualbox' provider...
Bringing machine 'w1' up with 'virtualbox' provider...
==> e1: Box 'coreos-alpha' could not be found. Attempting to find and install...
    e1: Box Provider: virtualbox
    e1: Box Version: >= 766.0.0
==> e1: Loading metadata for box 'http://alpha.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json'
    e1: URL: http://alpha.release.core-os.net/amd64-usr/current/coreos_production_vagrant.json
==> e1: Adding box 'coreos-alpha' (v808.0.0) for provider: virtualbox
    e1: Downloading: http://alpha.release.core-os.net/amd64-usr/808.0.0/coreos_production_vagrant.box
```
Note: *during startup Vagrant downloads coreos_production_vagrant.box from the CoreOS site. If the download is slow, fetch the box with another tool first, then run vagrant box add coreos-alpha coreos_production_vagrant.box to add it to the local vagrant box list, and run vagrant up in the current directory again to start the cluster.*
With the default configuration, the state after startup looks like this:
```
sope:vagrant goyoo$ vagrant status
Current machine states:

e1                        running (virtualbox)
c1                        running (virtualbox)
w1                        running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
sope:vagrant goyoo$
```
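A quick sanity check is to count how many machines report "running". Sketched here against a here-doc copy of the sample output; in a real run you would pipe `vagrant status` into the same awk filter.

```shell
# Stand-in for `vagrant status` output.
vagrant_status() {
cat <<'EOF'
Current machine states:

e1                        running (virtualbox)
c1                        running (virtualbox)
w1                        running (virtualbox)
EOF
}

# Count machines whose state column reads "running".
vagrant_status | awk '$2 == "running" {n++} END {print n}'
# → 3 (e1, c1, w1)
```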
Check the services
Use vagrant ssh c1 to log in to the kubernetes controller machine.
The system journal (journalctl -f --system) shows the machine slowly pulling the official CoreOS flannel image along with a few other dependency images. Be patient!
```
Sep 23 04:10:49 c1 sdnotify-proxy[1370]: Unable to find image 'quay.io/coreos/flannel:0.5.3' locally
Sep 23 04:10:49 c1 dockerd[1225]: time="2015-09-23T04:10:49.419376085Z" level=info msg="POST /v1.20/images/create?fromImage=quay.io%2Fcoreos%2Fflannel&tag=0.5.3"
Sep 23 04:10:58 c1 sdnotify-proxy[1370]: Pulling repository quay.io/coreos/flannel
Sep 23 04:11:02 c1 systemd[1]: Started OpenSSH per-connection server daemon (10.0.2.2:65423).
Sep 23 04:11:02 c1 sshd[1384]: Accepted publickey for core from 10.0.2.2 port 65423 ssh2: RSA SHA256:1M4RzhMyWuFS/86uPY/ce2prh/dVTHW7iD2RhpquOZA
Sep 23 04:11:04 c1 sdnotify-proxy[1370]: ac385fc755d9: Layer already being pulled by another client. Waiting.
Sep 23 04:11:07 c1 systemd[1]: Starting Generate /run/coreos/motd...
Sep 23 04:11:07 c1 systemd[1]: Started Generate /run/coreos/motd.
Sep 23 04:12:07 c1 systemd[1]: Starting Generate /run/coreos/motd...
Sep 23 04:12:07 c1 systemd[1]: Started Generate /run/coreos/motd.
Sep 23 04:12:19 c1 systemd[1]: flanneld.service: Start operation timed out. Terminating.
```
Once all the required containers have been pulled, the cluster automatically starts its base services. From then on you can operate the cluster from the host with kubectl, for example:
```
sope:~ goyoo$ kubectl --namespace=kube-system get po -o wide
NAME                                   READY     STATUS    RESTARTS   AGE       NODE
kube-apiserver-172.17.4.101            1/1       Running   0          2h        172.17.4.101
kube-controller-manager-172.17.4.101   1/1       Running   0          2h        172.17.4.101
kube-dns-v9-xr2h8                      4/4       Running   1          2h        172.17.4.201
kube-podmaster-172.17.4.101            2/2       Running   2          2h        172.17.4.101
kube-proxy-172.17.4.101                1/1       Running   0          2h        172.17.4.101
kube-proxy-172.17.4.201                1/1       Running   0          2h        172.17.4.201
kube-scheduler-172.17.4.101            1/1       Running   0          2h        172.17.4.101
```
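The same kind of awk filter works for a pod readiness check. A sketch against a shortened here-doc copy of the listing above; in practice you would pipe the real `kubectl --namespace=kube-system get po` output in instead.

```shell
# Stand-in for `kubectl ... get po` output (abridged).
pods() {
cat <<'EOF'
NAME                        READY  STATUS    RESTARTS  AGE  NODE
kube-apiserver-172.17.4.101 1/1    Running   0         2h   172.17.4.101
kube-dns-v9-xr2h8           4/4    Running   1         2h   172.17.4.201
EOF
}

# Count pods whose STATUS column is "Running" (skip the header row).
pods | awk 'NR > 1 && $3 == "Running" {n++} END {print n}'
# → 2
```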
Next step: deploy the Tectonic services to the cluster.