Initial Setup

Architecture

(architecture diagram)

Planning

  • Role planning
Role | Count | Description
Management node | 1 | Runs the ansible/easzctl scripts; a dedicated node is recommended (1 CPU / 1 GB is enough; if you only plan to manage a single cluster, simply reuse a master node)
etcd node | 3 | An etcd cluster needs an odd number of members (1, 3, 5, 7, ...); usually reuses the master nodes
master node | 2 | A highly available cluster needs at least 2 master nodes
node (worker) | 2 | Runs the application workloads; raise the machine specs / add nodes as needed
  • Server planning
IP | Role | Description
10.25.78.60 | management / deployment node | Runs the ansible/easzctl scripts; the machine the deployment is driven from
10.25.78.61 | etcd, master node | etcd node 1, master node 1
10.25.78.62 | etcd, master node | etcd node 2, master node 2
10.25.78.63 | etcd, node | etcd node 3, worker node 1
10.25.78.64 | node | worker node 2 (only 2 worker nodes due to limited resources; raise the machine specs / add nodes as needed)
  • Other information
    • Server OS: CentOS 7.2 minimal

Preparation

All nodes

  • Configure IP addresses and gateways so that all nodes can reach one another and the public internet (a static-IP sketch follows this list)
  • Install Python (if needed) and update the system
yum update -y && yum install python -y
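
A minimal sketch of a static IP configuration on CentOS 7; the interface name eth0 and the gateway/DNS values below are assumptions and must be replaced with the values for your own network:

# /etc/sysconfig/network-scripts/ifcfg-eth0  (eth0 is an assumed interface name)
TYPE=Ethernet
BOOTPROTO=static
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=10.25.78.61          # this node's own address
PREFIX=24                   # adjust to your subnet
GATEWAY=10.25.78.1          # assumed gateway
DNS1=114.114.114.114        # any reachable DNS server

# apply and verify connectivity to the other nodes and the public mirrors
systemctl restart network
ping -c 3 10.25.78.60
ping -c 3 mirrors.aliyun.com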

On the deployment node (10.25.78.60)

  • Install wget, download the Aliyun repo files, and install the required packages
yum install wget -y
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum update -y 
yum install git python-pip vim expect libselinux-python python-netaddr -y 
  • Install the required Python packages with pip
pip install pip --upgrade -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
pip install ansible==2.6.12 -i http://mirrors.aliyun.com/pypi/simple/ --trusted-host mirrors.aliyun.com
  • Fetch the kubeasz code
git clone -b 2.0.3 https://github.com/easzlab/kubeasz.git /etc/ansible
  • Configure passwordless SSH login to all nodes, including the deployment node itself (a manual equivalent of the helper script is sketched after the excerpt)
[root@localhost ansible]# pwd
/etc/ansible
[root@localhost ansible]# cat host-list
localhost
10.25.78.61
10.25.78.62
10.25.78.63
10.25.78.64
[root@localhost ansible]# ./tools/yc-ssh-key-copy.sh ./host-list root password
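
Roughly speaking, the helper script distributes the deployment node's SSH key to every host in the list; a manual equivalent with standard OpenSSH tools, entering the root password interactively once per node, would be:

# generate a key pair on the deployment node (skip if ~/.ssh/id_rsa already exists)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# push the public key to every node, including localhost
for host in localhost 10.25.78.61 10.25.78.62 10.25.78.63 10.25.78.64; do
  ssh-copy-id root@$host
done

# spot-check that login no longer prompts for a password
ssh root@10.25.78.61 hostname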
  • Configure the cluster inventory (only the parts you may need to modify are shown; an optional parse check follows the excerpt)
[root@localhost ansible]# cp example/hosts.multi-node hosts
[root@localhost ansible]# cat hosts
# 'etcd' cluster should have odd member(s) (1,3,5,...)
# variable 'NODE_NAME' is the distinct name of a member in 'etcd' cluster
[etcd]
10.25.78.61 NODE_NAME=etcd1
10.25.78.62 NODE_NAME=etcd2
10.25.78.63 NODE_NAME=etcd3

# master node(s)
[kube-master]
10.25.78.61
10.25.78.62

# work node(s)
[kube-node]
10.25.78.63
10.25.78.64

# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="flannel"
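
Before the ping test below, you can optionally confirm that the inventory parsed into the expected groups; ansible-inventory ships with Ansible 2.6 and should print the group tree:

ansible-inventory -i hosts --graph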

  • On the deployment node, verify that Ansible can log in to every node without a password
[root@localhost ansible]# ansible all -m ping
10.25.78.63 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
10.25.78.62 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
10.25.78.64 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
10.25.78.61 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
  • Next, set the Kubernetes version to install by editing K8S_BIN_VER in ./tools/easzup; this walkthrough installs v1.12
[root@localhost ansible]# cat ./tools/easzup | grep K8S_BIN_VER=v
export K8S_BIN_VER=v1.12.10

If you use an older release, pick the last patch version of that minor series, e.g. v1.12.10; otherwise the download may fail. One way to set the value is sketched below.
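
A quick way to pin the version, assuming K8S_BIN_VER is defined on a single export line as shown above, is an in-place sed edit followed by the same grep check:

sed -i 's/^export K8S_BIN_VER=.*/export K8S_BIN_VER=v1.12.10/' ./tools/easzup
grep 'K8S_BIN_VER=v' ./tools/easzup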

  • Use the easzup tool to download the Kubernetes binaries and images (a quick check of the result follows the excerpt)
[root@localhost ansible]# cd tools
[root@localhost tools]# chmod +x easzup
[root@localhost tools]# ./easzup -D
...
Status: Downloaded newer image for easzlab/kubeasz:2.0.3
[INFO] Action successed : download_all
[root@localhost tools]# 
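Assuming the default kubeasz 2.x layout, the downloaded binaries land under /etc/ansible/bin and the offline images are pulled into the local Docker daemon, so both can be checked with:

ls /etc/ansible/bin
docker images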

Deployment

Run the installation

  • Running the playbooks one by one, in order, is recommended: it makes debugging easier if something fails (the error usually says which package is missing), and each command can safely be re-run (see the tip after this list)
ansible-playbook 01.prepare.yml
ansible-playbook 02.etcd.yml
ansible-playbook 03.docker.yml
ansible-playbook 04.kube-master.yml
ansible-playbook 05.kube-node.yml
ansible-playbook 06.network.yml
ansible-playbook 07.cluster-addon.yml
  • Alternatively, the single command below is equivalent to running all of the above in sequence
ansible-playbook 90.setup.yml
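
If a step fails, fix the reported problem and re-run that playbook as is; the standard ansible-playbook options also help narrow things down, for example verbose output or limiting the run to the node that failed (both flags are plain Ansible, not kubeasz-specific):

# re-run one step with verbose output
ansible-playbook 05.kube-node.yml -v

# re-run one step against a single node only
ansible-playbook 05.kube-node.yml --limit 10.25.78.64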

Verification

[root@localhost ansible]# kubectl cluster-info
Kubernetes master is running at https://10.25.78.61:6443
CoreDNS is running at https://10.25.78.61:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
kubernetes-dashboard is running at https://10.25.78.61:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy
Metrics-server is running at https://10.25.78.61:6443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@localhost ansible]# 
[root@localhost ansible]# 
[root@localhost ansible]# kubectl get svc,pods --all-namespaces
NAMESPACE     NAME                              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
default       service/kubernetes                ClusterIP   10.68.0.1       <none>        443/TCP                       6m40s
kube-system   service/heapster                  ClusterIP   10.68.239.179   <none>        80/TCP                        3m5s
kube-system   service/kube-dns                  ClusterIP   10.68.0.2       <none>        53/UDP,53/TCP,9153/TCP        3m24s
kube-system   service/kubernetes-dashboard      NodePort    10.68.175.174   <none>        443:21714/TCP                 3m5s
kube-system   service/metrics-server            ClusterIP   10.68.220.169   <none>        443/TCP                       3m19s
kube-system   service/traefik-ingress-service   NodePort    10.68.70.50     <none>        80:23456/TCP,8080:30373/TCP   2m58s

NAMESPACE     NAME                                              READY   STATUS    RESTARTS   AGE
kube-system   pod/coredns-66db97b58c-6gd5z                      1/1     Running   0          3m24s
kube-system   pod/coredns-66db97b58c-n8ghf                      1/1     Running   0          3m24s
kube-system   pod/heapster-6bfd8c7d4b-kgh4n                     1/1     Running   0          3m5s
kube-system   pod/kube-flannel-ds-amd64-5fnmz                   1/1     Running   0          4m3s
kube-system   pod/kube-flannel-ds-amd64-796r4                   1/1     Running   0          4m3s
kube-system   pod/kube-flannel-ds-amd64-9r8zj                   1/1     Running   0          4m2s
kube-system   pod/kube-flannel-ds-amd64-v5n7s                   1/1     Running   0          4m2s
kube-system   pod/kubernetes-dashboard-57dfd5b8df-xzn8g         1/1     Running   0          3m5s
kube-system   pod/metrics-server-7777cb45bf-4bqz8               1/1     Running   0          3m18s
kube-system   pod/traefik-ingress-controller-64b5f8f9cf-szlnp   1/1     Running   0          2m57s
[root@localhost ansible]# 
[root@localhost ansible]# 
[root@localhost ansible]# kubectl get nodes
NAME          STATUS                     ROLES    AGE     VERSION
10.25.78.61   Ready,SchedulingDisabled   master   6m23s   v1.12.10
10.25.78.62   Ready,SchedulingDisabled   master   6m23s   v1.12.10
10.25.78.63   Ready                      node     5m1s    v1.12.10
10.25.78.64   Ready                      node     5m1s    v1.12.10
[root@localhost ansible]# 
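
As an optional smoke test beyond the checks above, a throwaway nginx deployment exposed via NodePort exercises scheduling, networking and the service layer end to end; the commands are a sketch, and the assigned NodePort will differ in your cluster:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get deployment,svc nginx -o wide
# curl any node IP on the NodePort shown by the service, e.g.
# curl http://10.25.78.63:<nodeport>
# clean up when done
kubectl delete deployment,svc nginx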
