Deploying k8s with kubespray, version 1.0
一、Deployment principle
Based on vagrant and VirtualBox, the kubespray project drives ansible to deploy a highly available k8s cluster.
二、Deployment environment
1. Deployment topology
2. Hardware and software
Hardware:
Physical machine CPU: dual-socket Intel(R) Xeon(R) Gold 5120 CPU @ 2.20GHz, 14 cores per socket, 2 threads per core
Memory: 64 GB
Software:
CentOS 7: kernel 3.10.0-327.22.2.el7.x86_64
vagrant: vagrant_2.1.0_x86_64.rpm
virtualbox: 5.2
kubespray: master branch
ansible: 2.5.2
k8s: 1.9.5
三、Deployment steps
1. Install vagrant
1) Download the package matching your system and install it
sudo curl https://releases.hashicorp.com/vagrant/2.1.0/vagrant_2.1.0_x86_64.rpm -o vagrant_2.1.0_x86_64.rpm --progress
sudo rpm -ivh vagrant_2.1.0_x86_64.rpm
2) Use virtualbox as the provider
Install the virtualbox repo and dependencies:
cd /etc/yum.repos.d/
wget http://download.virtualbox.org/virtualbox/rpm/rhel/virtualbox.repo
yum install VirtualBox-5.2
sudo /sbin/rcvboxdrv setup
2. Clone the ansible deployment code with git
sudo git clone https://github.com/kubernetes-sigs/kubespray.git
3. Create the virtual machines with vagrant
In the same directory as the Vagrantfile, run:
sudo vagrant up  # bring up the environment
4. Install ansible and other required tools via pip
sudo pip install ansible netaddr Jinja2
5. Deploy k8s with the ansible playbook
sudo ansible-playbook -vv -i inventory/mycluster/hosts.ini cluster.yml --become  # deploy k8s
-vv: print verbose logs
-i: specify the inventory (host configuration) file
--become: run tasks with privilege escalation, since many steps in the process require elevated privileges
If you run with sudo and no ssh login method is specified in hosts.ini, ansible logs in with root's user name and private key, and the private key must be named id_rsa;
If you run without sudo and no ssh login method is specified in hosts.ini, ansible logs in with the current user name and private key, and the private key must be named id_rsa;
If you run without sudo and hosts.ini specifies the ssh login method and the private key location, ansible logs in with the specified user name and private key.
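For the third case, the per-host ssh settings can be declared directly in the inventory. A minimal sketch follows; the hostnames, IP addresses, and key path are illustrative placeholders, not values taken from this deployment:

```ini
; inventory/mycluster/hosts.ini (illustrative values)
[all]
node1 ansible_host=172.17.8.101 ansible_user=vagrant ansible_ssh_private_key_file=/path/to/key
node2 ansible_host=172.17.8.102 ansible_user=vagrant ansible_ssh_private_key_file=/path/to/key

[kube-master]
node1

[kube-node]
node1
node2

[etcd]
node1
```

With ansible_user and ansible_ssh_private_key_file set per host, no sudo or id_rsa naming convention is needed for the ssh login itself.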
6. Verify the deployment
# log into a virtual machine
sudo vagrant ssh node2
Note:
1. VirtualBox can only be started once: if you abort vagrant with Ctrl+C while it is running, VirtualBox itself does not stop, and the next run will fail to create the VirtualBox host-only network interface.
In that case, simply restart VirtualBox.
四、Analysis of the automated ansible-playbook deployment
During deployment, ansible completes the tasks in the following roles, roughly in the order below; some tasks are reused along the way:
0. Playbook initialization
The playbook starts, statically imports the relevant configuration, and orchestrates the deployment starting from cluster.yml;
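The orchestration above can be pictured as a playbook that maps host groups to roles. The following is a simplified structural sketch, not the literal kubespray cluster.yml; the group and role names are illustrative:

```yaml
# Simplified sketch of how a cluster.yml-style playbook sequences roles
- hosts: k8s-cluster:etcd
  roles:
    - { role: kubespray-default }
    - { role: bootstrap-os }

- hosts: etcd
  roles:
    - { role: etcd }

- hosts: k8s-cluster
  roles:
    - { role: kubernetes/node }
    - { role: network_plugin }
```

Each play targets a host group, so roles such as etcd only run on the hosts that need them.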
1. download
Run the download role to fetch the required software;
2. kubespray-default
Run the kubespray-default role, which checks kubespray's main.yml and applies some default settings while kubespray runs;
3. bastion-ssh-config
Run the bastion-ssh-config role to configure bastion ssh access;
4. bootstrap-os
Run the bootstrap-os role to detect the operating system type (e.g. CentOS, Ubuntu, CoreOS), check package requirements, assign inventory names to hosts without a configured hostname, and so on;
5. adduser
Run the adduser role to create the user kube on every node in parallel: uid 997, gid 995, home directory /home/kube, shell /sbin/nologin
adduser : User | Create User
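The user creation described above corresponds to Ansible tasks roughly like the following, a hedged sketch using the standard group and user modules rather than the literal kubespray tasks:

```yaml
# Illustrative sketch: create the kube system user described above
- name: kube | Create group
  group:
    name: kube
    gid: 995

- name: User | Create User
  user:
    name: kube
    uid: 997
    group: kube
    home: /home/kube
    shell: /sbin/nologin
    system: yes
```

The /sbin/nologin shell prevents interactive logins, which is the usual choice for service accounts.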
6. kubernetes
Run the kubernetes role, which nests several sub-roles:
Among them:
1) client
Sets the external kube-apiserver endpoint, gathers certs for the admin kubeconfig, and so on
2) kubeadm
Uses the official cluster deployment tool;
3) master
Deploys the masters.
4) node
Deploys the nodes.
5) preinstall
Before installing kubernetes, checks each deployment requirement:
- the ansible version is not too old
- the operating system type is supported
- the network plugin is supported
- the network plugin and cloud provider are compatible
- boolean values are not set as strings
- the master has enough memory
- the number of etcd hosts is not even
- RBAC is enabled when the dashboard is enabled
- RBAC or anonymous auth is enabled when the insecure port is disabled
- the kernel version is not too old
- if both the old and new credential directories exist, data is moved from the old directory to the new one
- CoreOS forces containers to use the binary directory
- whether the binary directory exists
- whether the host is an Atomic host
- set_fact
- checking resolvconf
- checking whether kubelet is configured
- checking whether this is the early DNS configuration stage
- locating resolv.conf
- locating the temporary resolvconf cloud-init file
- checking whether /etc/dhclient.conf exists
- and so on.
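Checks like those above are typically implemented with Ansible's assert module. A hedged sketch of what the minimum-version check might look like follows; the task name, message text, and version bound are illustrative, not copied from kubespray:

```yaml
# Illustrative preinstall-style check: fail early if ansible is too old
- name: Stop if ansible version is too old
  assert:
    that:
      - ansible_version.full is version('2.4.0', '>=')
    msg: "Ansible must be v2.4 or newer"
```

Failing fast in preinstall keeps a misconfigured environment from producing a half-deployed cluster.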
6) secret
Handles authorization and authentication, related to the CA.
7. docker
Run the docker role to configure and install docker, including installing and configuring docker storage;
8. rkt
Run the rkt role, including gather os specific variables for rkt and rkt : install rkt pkg;
9. vault
Run the vault role to manage kubernetes secrets;
10. etcd
Run the etcd role to configure, install, and run etcd;
11. kubernetes-apps
Run the kubernetes-apps role to manage kubernetes apps, including helm, kpm, policy_controller, etc.
12. network_plugin
Run the network_plugin role to configure and run one of several networking options, including flanneld, canal, weave, etc.;
13. dnsmasq
Run the dnsmasq role to configure and run dnsmasq;
See the attachment for the detailed flow.
五、Additional information
1. Supported Linux distributions
- Container Linux by CoreOS
- Debian Jessie, Stretch, Wheezy
- Ubuntu 16.04
- CentOS/RHEL 7
- Fedora/CentOS Atomic
- openSUSE Leap 42.3/Tumbleweed
Note: Upstart/SysV init based OS types are not supported.
2. Supported component versions
- kubernetes v1.9.5
- etcd v3.2.4
- flanneld v0.10.0
- calico v2.6.8
- canal (given calico/flannel versions)
- cilium v1.0.0-rc8
- contiv v1.1.7
- weave v2.3.0
- docker v17.03 (see note)
- rkt v1.21.0 (see Note 2)
Note: kubernetes doesn't support newer docker versions. Among other things, kubelet currently breaks on docker's non-standard version numbering (it no longer uses semantic versioning). To ensure auto-updates don't break your cluster, look into e.g. the yum versionlock plugin or apt pinning.
Note 2: rkt support as docker alternative is limited to control plane (etcd and kubelet). Docker is still used for Kubernetes cluster workloads and network plugins' related OS services. Also note, only one of the supported network plugins can be deployed for a given single cluster.
3. Requirements
- Ansible v2.4 (or newer) and python-netaddr is installed on the machine that will run Ansible commands
- Jinja 2.9 (or newer) is required to run the Ansible Playbooks
- The target servers must have access to the Internet in order to pull docker images.
- The target servers are configured to allow IPv4 forwarding.
- Your ssh key must be copied to all the servers that are part of your inventory.
- The firewalls are not managed; you'll need to implement your own rules as you usually do. To avoid any issues during deployment, you should disable your firewall.
- If kubespray is run from a non-root user account, the correct privilege escalation method should be configured on the target servers. Then the ansible_become flag or the command parameters --become or -b should be specified.
4. Network plugins
You can choose between 6 network plugins. (default: calico, except Vagrant uses flannel)
- flannel: gre/vxlan (layer 2) networking.
- calico: bgp (layer 3) networking.
- canal: a composition of calico and flannel plugins.
- cilium: layer 3/4 networking (as well as layer 7 to protect and secure application protocols), supports dynamic insertion of BPF bytecode into the Linux kernel to implement security services, networking and visibility logic.
- contiv: supports vlan, vxlan, bgp and Cisco SDN networking. This plugin is able to apply firewall policies, segregate containers in multiple network and bridging pods onto physical networks.
- weave: Weave is a lightweight container overlay network that doesn't require an external K/V database cluster. (Please refer to weave troubleshooting documentation).
The choice is defined with the variable kube_network_plugin. There is also an option to leverage built-in cloud provider networking instead. See also Network checker.
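The kube_network_plugin variable is set in the cluster's group variables. A minimal illustration follows; the file path shown is the conventional kubespray inventory layout and the value is just an example:

```yaml
# inventory/mycluster/group_vars/k8s-cluster.yml (illustrative path)
kube_network_plugin: calico
```

Changing this value before running cluster.yml selects which plugin's role tasks are executed.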