Deploying k8s with ansible + kubeadm
We mainly use ansible to manage the required hosts in batches; those hosts receive the playbook files we write and use them to deploy the k8s cluster, giving us automated deployment.
ansible is a newer automation tool, written in Python, that combines the strengths of many earlier ops tools (puppet, cfengine, chef, func, fabric) to provide batch system configuration, batch software deployment, batch command execution, and more. ansible works through modules and has no batch-deployment capability of its own: the modules it runs do the actual work, while ansible itself only provides the framework. k8s is an open-source container cluster management system that automates the deployment, scaling, and maintenance of container clusters.
I. Prepare the server nodes
ansible:192.168.184.134
master:192.168.184.102
node1: 192.168.184.103
node2: 192.168.184.104
II. Configure Ansible
1. Install ansible
Point yum at the Aliyun mirror
# cd /etc/yum.repos.d/
Back up the stock repo file
# mv CentOS-Base.repo CentOS-Base.repo.backup
Download the Aliyun repo file
# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
# yum clean all
# yum -y install ansible    ## or install from a downloaded rpm: yum -y install ansible-2.9.7-1.el7.ans.noarch.rpm
2. Set up passwordless SSH
[root@ansible ~]# ssh-keygen
[root@ansible ~]# ssh-copy-id root@192.168.184.102
[root@ansible ~]# ssh-copy-id root@192.168.184.103
[root@ansible ~]# ssh-copy-id root@192.168.184.104
3. Add the k8s node entries to /etc/hosts on the Ansible server
[root@ansible ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.184.102 master
192.168.184.103 node1
192.168.184.104 node2
[root@ansible ~]# cat /etc/redhat-release
CentOS Linux release 7.7.1908 (Core)
4. Add the k8s nodes to /etc/ansible/hosts on the Ansible server
[root@ansible ~]# vim /etc/ansible/hosts
[k8s-all]
192.168.184.102
192.168.184.103
192.168.184.104
[master]
192.168.184.102
[nodes]
192.168.184.103
192.168.184.104
// Test whether the member hosts of the group are reachable
[root@ansible ~]# ansible k8s-all -m ping
III. Update /etc/hosts on the k8s cluster nodes
1. Create the playbook file and run it
[root@ansible ~]# cat hosts_playbook.yml
- hosts: nodes
  remote_user: root
  tasks:
    - name: backup /etc/hosts
      shell: mv /etc/hosts /etc/host_bak
    - name: copy local hosts file to remote
      copy: src=/etc/hosts dest=/etc/ owner=root group=root mode=0644
[root@ansible ~]# ansible-playbook hosts_playbook.yml
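The shell/mv backup above is not idempotent: a second run of the playbook backs up the file that was just copied in. As an alternative, here is a sketch using Ansible's blockinfile module (the marker text is an arbitrary choice for this example, not from the original setup), which keeps re-runs safe:

```yaml
- hosts: nodes
  remote_user: root
  tasks:
    # blockinfile rewrites only the block between its markers, so running
    # the play repeatedly leaves the rest of /etc/hosts untouched
    - name: ensure k8s name-resolution entries exist
      blockinfile:
        path: /etc/hosts
        marker: "# {mark} k8s cluster hosts (ansible managed)"
        block: |
          192.168.184.102 master
          192.168.184.103 node1
          192.168.184.104 node2
```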
IV. Install Docker
1. Create the playbook file and run it
· Install docker on all nodes
[root@ansible ~]# cat install_docker_playbook.yml
- hosts: k8s-all
  remote_user: root
  vars:
    docker_version: 18.09.2
  tasks:
    - name: install dependencies
      shell: yum install -y yum-utils device-mapper-persistent-data lvm2
    - name: docker-repo
      shell: yum-config-manager --add-repo https://mirrors.ustc.edu.cn/docker-ce/linux/centos/docker-ce.repo
    - name: install docker
      yum: name=docker-ce-{{ docker_version }} state=present
    - name: start docker
      shell: systemctl start docker && systemctl enable docker
[root@ansible ~]# vim /etc/ansible/ansible.cfg
deprecation_warnings = False   ## around line 179; defaults to True
[root@ansible ~]# ansible-playbook install_docker_playbook.yml
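One common follow-up, not part of the original playbook: kubeadm's preflight checks warn when docker runs with the cgroupfs cgroup driver while kubelet expects systemd. A hedged sketch of an extra play that aligns the two (/etc/docker/daemon.json is docker's standard daemon config path):

```yaml
- hosts: k8s-all
  remote_user: root
  tasks:
    # make docker use the systemd cgroup driver so it matches kubelet
    - name: write docker daemon.json
      copy:
        dest: /etc/docker/daemon.json
        content: |
          {
            "exec-opts": ["native.cgroupdriver=systemd"]
          }
    # the daemon must be restarted for the new driver to take effect
    - name: restart docker
      shell: systemctl daemon-reload && systemctl restart docker
```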
V. Deploy the k8s master
1. Before deploying, some initialization is required: disable the firewall, disable selinux, turn off swap, configure the Aliyun k8s yum repo, and so on. All of it is collected in the script before.sh, which the playbook in step 2 runs through the script module.
[root@ansible ~]# cat before.sh
#!/bin/bash
# firewall
systemctl disable firewalld
systemctl stop firewalld
# selinux (setenforce only covers the running session; the sed keeps it off after a reboot)
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
# disable swap (comment out the swap line in fstab so it stays off after a reboot)
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
# kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# reload the sysctl configuration
sysctl --system
# configure the Aliyun k8s yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# refresh the cache
yum clean all && yum makecache && yum repolist
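The same initialization could also be written with native Ansible modules instead of a shell script, which reports each step's changed/ok status separately. A sketch under that assumption (the sysctl settings assume the br_netfilter kernel module is available, as on stock CentOS 7):

```yaml
- hosts: k8s-all
  remote_user: root
  tasks:
    - name: stop and disable firewalld
      service: name=firewalld state=stopped enabled=no
    - name: set selinux permissive for the running session
      shell: setenforce 0
      ignore_errors: yes   # exits non-zero if selinux is already disabled
    - name: turn off swap
      shell: swapoff -a
    - name: set bridge netfilter sysctls
      sysctl:
        name: "{{ item }}"
        value: "1"
        sysctl_file: /etc/sysctl.d/k8s.conf
      with_items:
        - net.bridge.bridge-nf-call-iptables
        - net.bridge.bridge-nf-call-ip6tables
```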
2. Create a playbook targeting only the master node; it installs kubectl, kubeadm and kubelet and deploys flannel. Fetch the flannel manifest first, and make sure it ends up at /root/kube-flannel.yml, the path the playbook's copy task expects:
yum -y install git
git clone https://gitee.com/lm_py/kube-flannel.yaml.git
[root@ansible ~]# cat deploy_master_playbook.yml
- hosts: master
  remote_user: root
  vars:
    kube_version: 1.18.0-0
    k8s_version: v1.18.0
    k8s_master: 192.168.184.102
  tasks:
    - name: before
      script: ./before.sh
    - name: install kube***
      yum: name={{ item }} state=present
      with_items:
        - kubectl-{{ kube_version }}
        - kubeadm-{{ kube_version }}
        - kubelet-{{ kube_version }}
    - name: init k8s
      shell: kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version {{ k8s_version }} --apiserver-advertise-address {{ k8s_master }} --pod-network-cidr=10.244.0.0/16 --token-ttl 0
    - name: config kube
      shell: mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config
    - name: copy flannel yaml file
      copy: src=/root/kube-flannel.yml dest=/tmp/kube-flannel.yml
    - name: install flannel
      shell: kubectl apply -f /tmp/kube-flannel.yml
    - name: get join command
      shell: kubeadm token create --print-join-command
      register: join_command
    - name: show join command
      debug: var=join_command verbosity=0
[root@ansible ~]# vim /etc/ansible/ansible.cfg
command_warnings = False   ### line 187
[root@ansible ~]# ansible-playbook deploy_master_playbook.yml
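The join command shown by the debug task still has to be copied into the node playbook by hand. One way around that, sketched here (join_command.sh is a name chosen for this example, not from the original): add a task after "get join command" that writes the registered output to a file on the ansible server.

```yaml
    # extra task for the master play above: save the registered join
    # command on the ansible control node (local_action runs it locally)
    - name: save join command locally
      local_action:
        module: copy
        content: "{{ join_command.stdout }}"
        dest: ./join_command.sh
```

The node playbook could then run the saved file via the script module instead of hard-coding the token and hash.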
VI. Deploy the k8s nodes
1. As with the master, some initialization is needed before deploying; it is all in the script before.sh, executed from the playbook via the script module.
[root@ansible ~]# cat deploy_nodes_playbook.yml
- hosts: nodes
  remote_user: root
  vars:
    kube_version: 1.18.0-0
  tasks:
    - name: before
      script: ./before.sh
    - name: install kube***
      yum: name={{ item }} state=present
      with_items:
        - kubeadm-{{ kube_version }}
        - kubelet-{{ kube_version }}
    - name: start kubelet
      shell: systemctl enable kubelet && systemctl start kubelet
    # token and hash come from the join command printed by the master playbook
    - name: join cluster
      shell: kubeadm join 192.168.184.102:6443 --token hbjg13.hx2hsjibb9a02wl9 --discovery-token-ca-cert-hash sha256:ecb6cab37a34d2513700d637c27eb2868d6b35a568afd1cb331e03ab6a839e33
[root@ansible ~]# ansible-playbook deploy_nodes_playbook.yml
2. On the master node, kubectl get nodes now shows the nodes that have joined the cluster, all with STATUS Ready:
[root@localhost ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   21m   v1.18.0
node1    Ready    <none>   16m   v1.18.0
node2    Ready    <none>   16m   v1.18.0
[root@localhost ~]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-2pcvt         1/1     Running   0          21m
coredns-7ff77c879f-lfgml         1/1     Running   0          21m
etcd-master                      1/1     Running   0          21m
kube-apiserver-master            1/1     Running   0          21m
kube-controller-manager-master   1/1     Running   0          21m
kube-flannel-ds-amd64-jdr7v      1/1     Running   0          17m
kube-flannel-ds-amd64-mv79j      1/1     Running   0          18m
kube-flannel-ds-amd64-pdhfz      1/1     Running   0          17m
kube-proxy-gcm8t                 1/1     Running   0          21m
kube-proxy-ngsfj                 1/1     Running   0          17m
kube-proxy-rhgh2                 1/1     Running   0          17m
kube-scheduler-master            1/1     Running   0          21m