Hands-On: One-Click Binary Deployment of a K8s Cluster
This walkthrough uses Ansible to complete the deployment in one click.
Official repo: https://github.com/lizhenliang/ansible-install-k8s
Prepare three servers:
IP | Role | Services
---|---|---
192.168.106.102 | K8S-master | kube-apiserver, kube-controller-manager, kube-scheduler, etcd
192.168.106.103 | K8S-node1 | kubelet, kube-proxy, docker, etcd
192.168.106.104 | K8S-node2 | kubelet, kube-proxy, docker, etcd
1.0 Initialize the system
1.1 Synchronize server time
yum -y install ntpdate git && ntpdate ntp.aliyun.com
1.2 Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
1.3 Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config # permanent, takes effect after reboot
setenforce 0 # temporary, current session only
1.4 Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
1.5 Add hosts entries and set the hostname
cat >> /etc/hosts << EOF
192.168.106.102 k8s-master
192.168.106.103 k8s-node1
192.168.106.104 k8s-node2
EOF
hostnamectl set-hostname $1 # $1 is the hostname to assign to this machine
1.6 Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply the settings
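Before moving on, the two bridge keys written above can be sanity-checked. A minimal sketch (not part of the deployment; the parser and sample content are illustrative):

```python
# Parse sysctl.d-style content and confirm both bridge keys are set to 1.
def parse_sysctl(text):
    """Parse 'key = value' lines, skipping blanks and comments."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        conf[key.strip()] = value.strip()
    return conf

# Sample content mirroring /etc/sysctl.d/k8s.conf written above
K8S_CONF = """\
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
"""

REQUIRED = {
    "net.bridge.bridge-nf-call-ip6tables": "1",
    "net.bridge.bridge-nf-call-iptables": "1",
}

conf = parse_sysctl(K8S_CONF)
missing = {k for k, v in REQUIRED.items() if conf.get(k) != v}
assert not missing, f"sysctl keys not applied: {missing}"
print("bridge sysctl keys OK")
```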
2.0 Automated deployment
2.1 Install Ansible on the master node
yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum -y install ansible
2.2 Download the Ansible playbook and required files
On K8S-master, clone the playbook repository:
git clone https://github.com/lizhenliang/ansible-install-k8s
Binary packages: https://pan.baidu.com/s/1EWnJoJjAD3GNqghOwgodWQ (extraction code: tlvz)
mkdir /root/binary_pkg && mv binary_pkg.tar.gz /root/binary_pkg
cd /root/binary_pkg && tar zxf binary_pkg.tar.gz
2.3 Edit the configuration files
2.3.1 Edit hosts
vim ansible-install-k8s/hosts # trim the three-node example to a single-master cluster
[master]
# For a single-master deployment, keep only one master node
# By default the master node also runs the node components
192.168.106.102 node_name=k8s-master1
#192.168.31.62 node_name=k8s-master2
[node]
192.168.106.103 node_name=k8s-node1
192.168.106.104 node_name=k8s-node2
[etcd]
192.168.106.102 etcd_name=etcd-1
192.168.106.103 etcd_name=etcd-2
192.168.106.104 etcd_name=etcd-3
[lb]
# Ignore this section for a single-master deployment
192.168.31.63 lb_name=lb-master
192.168.31.71 lb_name=lb-backup
[k8s:children]
master
node
[newnode]
#192.168.31.91 node_name=k8s-node3
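The inventory above can be sanity-checked before running the playbook: a single-master layout should list exactly one host under [master], and etcd needs an odd member count to keep quorum. A minimal sketch (the parser and embedded sample are illustrative, not part of the playbook):

```python
# Parse an INI-style Ansible inventory into {group: [host, ...]}.
def parse_inventory(text):
    groups, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if line.startswith("[") and line.endswith("]"):
            current = line[1:-1]
            groups[current] = []
        elif current is not None:
            groups[current].append(line.split()[0])  # keep the IP, drop host vars
    return groups

# Sample reproducing the single-master layout configured above
INVENTORY = """\
[master]
192.168.106.102 node_name=k8s-master1
[node]
192.168.106.103 node_name=k8s-node1
192.168.106.104 node_name=k8s-node2
[etcd]
192.168.106.102 etcd_name=etcd-1
192.168.106.103 etcd_name=etcd-2
192.168.106.104 etcd_name=etcd-3
"""

groups = parse_inventory(INVENTORY)
assert len(groups["master"]) == 1, "single-master layout expects exactly one master"
assert len(groups["etcd"]) % 2 == 1, "etcd needs an odd member count for quorum"
print(groups["etcd"])  # ['192.168.106.102', '192.168.106.103', '192.168.106.104']
```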
2.3.2 Edit the global variables file
vim ansible-install-k8s/group_vars/all.yml
# Install directories
software_dir: '/root/binary_pkg'
k8s_work_dir: '/opt/kubernetes'
etcd_work_dir: '/opt/etcd'
tmp_dir: '/tmp/k8s'
# Cluster networking
service_cidr: '10.0.0.0/24'
cluster_dns: '10.0.0.2' # must match the IP in roles/addons/files/coredns.yaml and be an IP within service_cidr
pod_cidr: '10.244.0.0/16' # must match the network in roles/addons/files/kube-flannel.yaml
service_nodeport_range: '30000-32767'
cluster_domain: 'cluster.local'
# High availability; ignore for a single-master deployment
vip: '192.168.31.88'
nic: 'ens33'
# Trusted IP list for the self-signed certificates; add spare IPs to ease future scaling
cert_hosts:
# all LB, VIP, and master IPs, plus the first IP of service_cidr
k8s:
- 10.0.0.1
- 192.168.106.102
- 192.168.106.103
- 192.168.106.104
- 192.168.106.105
- 192.168.106.106
# all etcd node IPs
etcd:
- 192.168.106.102
- 192.168.106.103
- 192.168.106.104
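The networking values above must agree with each other: cluster_dns has to fall inside service_cidr, and the service and pod ranges must not overlap. A minimal standard-library check, with the values copied from the config above:

```python
import ipaddress

service_cidr = ipaddress.ip_network("10.0.0.0/24")
pod_cidr = ipaddress.ip_network("10.244.0.0/16")
cluster_dns = ipaddress.ip_address("10.0.0.2")
first_service_ip = ipaddress.ip_address("10.0.0.1")  # kept in cert_hosts above

# cluster_dns must be a service IP so CoreDNS is reachable in-cluster
assert cluster_dns in service_cidr
assert first_service_ip in service_cidr
# the service and pod ranges must not overlap, or routing breaks
assert not service_cidr.overlaps(pod_cidr)
print("network configuration is consistent")
```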
2.4 One-click deployment
Single-master (the -k flag prompts for the root SSH password):
ansible-playbook -i hosts single-master-deploy.yml -uroot -k
Multi-master: ansible-playbook -i hosts multi-master-deploy.yml -uroot -k
2.5 Verify the deployment
Once the playbook finishes, open a browser and go to the dashboard login page:
https://192.168.106.102:30001/#/login
Paste the token generated during the run.
A successful login confirms the deployment, and the cluster status checks all report healthy.
3.0 Deploy a simple application as a test
# create a Deployment running one nginx pod
kubectl create deployment web --image=nginx
# expose it as a NodePort Service
kubectl expose deployment web --port=80 --target-port=80 --name=web --type=NodePort
kubectl get pods # check that the pod is Running
kubectl get svc # note the NodePort assigned to the web service
Visiting http://<node IP>:<NodePort> should return the nginx welcome page.
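The NodePort is assigned at random from the 30000-32767 range configured earlier, so it must be read out of `kubectl get svc`. A small illustrative helper (the sample output line below is made up for demonstration; real values will differ):

```python
import re

def node_port(svc_line):
    """Return the node port from a PORT(S) column such as '80:31234/TCP'."""
    m = re.search(r":(\d+)/TCP", svc_line)
    if m is None:
        raise ValueError("no NodePort in line")
    return int(m.group(1))

# Hypothetical `kubectl get svc web` output line
sample = "web   NodePort   10.0.0.133   <none>   80:31234/TCP   5s"
port = node_port(sample)
assert 30000 <= port <= 32767  # default service_nodeport_range
print(f"try: curl http://192.168.106.103:{port}")
```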
# follow the nginx access log (the pod name suffix will differ in your cluster)
kubectl logs web-5dcb957ccc-km4qn -f