Install a Kubernetes cluster with kubespray, using the v2.4.0 branch.

https://github.com/kubernetes-incubator/kubespray/tree/v2.4.0

The versions installed are:

kubernetes v1.9.2

etcd v3.2.4

calico v2.5.0

docker v1.13

Prerequisites:

Ansible v2.4 (or newer) and python-netaddr

Jinja 2.9 (or newer)

target servers must have access to the Internet

target servers are configured to allow IPv4 forwarding.

ssh key must be copied to all the servers

disable your firewall
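The IPv4-forwarding prerequisite is easy to overlook. A small check, runnable on each target server, can catch it early (a sketch; `check_ip_forward` is a name I'm introducing here, not part of kubespray):

```shell
# Verify that IPv4 forwarding is enabled on this host; kube-proxy and the
# CNI plugin need it to route pod traffic. (Sketch: run on each target server.)
check_ip_forward() {
  val=$(cat /proc/sys/net/ipv4/ip_forward 2>/dev/null)
  if [ "$val" = "1" ]; then
    echo "ip_forward: enabled"
  else
    # Print the commands to enable it now and persist it across reboots.
    echo "ip_forward: disabled -- fix with:"
    echo "  sysctl -w net.ipv4.ip_forward=1"
    echo "  echo 'net.ipv4.ip_forward = 1' >> /etc/sysctl.conf"
  fi
}
check_ip_forward
```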

Three pay-as-you-go cloud servers were purchased on Alibaba Cloud, running CentOS 7.4.

yum update

3.10.0-693.17.1.el7.x86_64

CentOS Linux release 7.4.1708 (Core)

47.91.217.39 (public) 172.31.19.31 (private)

47.75.4.12 (public) 172.31.19.29 (private) (ansible host)

47.91.213.83 (public) 172.31.19.30 (private)

1. Log in to each of the three machines, disable the firewall (Alibaba Cloud instances ship with the firewall disabled by default, relying on security groups instead), and turn off swap:

hostnamectl set-hostname localhost

systemctl stop firewalld

systemctl disable firewalld

swapoff -a
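`swapoff -a` only lasts until the next reboot, and kubelet refuses to start with swap enabled, so it is worth commenting the swap entries out of /etc/fstab as well. A sketch (the helper name is mine; pass `/etc/fstab` on the real hosts, and note it edits the file in place after taking a backup):

```shell
# Make "swapoff -a" permanent by commenting out swap entries in fstab,
# so kubelet still starts after a reboot. (disable_swap_in_fstab is a
# hypothetical helper, not part of kubespray.)
disable_swap_in_fstab() {
  fstab="$1"
  cp "$fstab" "$fstab.bak"   # keep a backup before editing in place
  # Comment out every uncommented line that mounts a swap filesystem.
  sed -i 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/#\1/' "$fstab"
}
```

Usage on a node: `disable_swap_in_fstab /etc/fstab`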

2. Log in to the ansible host:

yum install -y epel-release

yum install -y python-pip python-netaddr ansible git

pip install --upgrade Jinja2

ssh-keygen

ssh-copy-id root@172.31.19.29

ssh-copy-id root@172.31.19.30

ssh-copy-id root@172.31.19.31
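Before cloning kubespray, it is worth confirming that passwordless SSH now works to all three nodes; Ansible will fail later if any of them still prompts for a password. A sketch using the private IPs from this setup (`check_ssh_all` is a name I'm introducing; BatchMode makes ssh fail instead of prompting):

```shell
# Confirm passwordless root SSH to every node in this cluster.
check_ssh_all() {
  for ip in 172.31.19.29 172.31.19.30 172.31.19.31; do
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$ip" true 2>/dev/null; then
      echo "$ip: ok"
    else
      echo "$ip: ssh failed"
    fi
  done
}
check_ssh_all
```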

git clone https://github.com/kubernetes-incubator/kubespray.git

cd kubespray

git checkout v2.4.0 -b myv2.4.0

cp inventory/inventory.example inventory/inventory.cfg

Edit the inventory:

vi inventory/inventory.cfg

[all]

node1 ansible_ssh_host=172.31.19.29 ip=172.31.19.29

node2 ansible_ssh_host=172.31.19.30 ip=172.31.19.30

node3 ansible_ssh_host=172.31.19.31 ip=172.31.19.31

[kube-master]

node1

node2

node3

[etcd]

node1

node2

node3

[kube-node]

node1

node2

node3

[k8s-cluster:children]

kube-node

kube-master
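With the inventory in place, a quick connectivity check can save a long failed playbook run. A sketch (the wrapper name is mine; `-m ping` is Ansible's connectivity-test module, not ICMP ping):

```shell
# Check that Ansible can reach every node in the inventory before the
# long cluster.yml run; each node should answer with "pong".
check_inventory() {
  ansible -i inventory/inventory.cfg all -m ping
}
# Run from the kubespray checkout:
# check_inventory
```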

Back up the configuration files, then edit them:

cp inventory/group_vars/all.yml inventory/group_vars/all.yml.bak

cp inventory/group_vars/k8s-cluster.yml inventory/group_vars/k8s-cluster.yml.bak

vi inventory/group_vars/all.yml

bootstrap_os: centos

vi inventory/group_vars/k8s-cluster.yml

dashboard_enabled: false

kube_api_pwd: "hello-world8888"

### Replace the image repositories: the default overseas registries download slowly, so switch them to Alibaba Cloud mirrors

vi roles/download/defaults/main.yml

etcd_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/etcd"

calicoctl_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/ctl"

calico_node_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/node"

calico_cni_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/cni"

calico_policy_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/kube-policy-controller"

calico_rr_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/routereflector"

hyperkube_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/hyper-kube"

pod_infra_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/pause-amd64"

nginx_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/nginx"

kubedns_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/k8s-dns-kube-dns-amd64"

dnsmasq_nanny_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/k8s-dns-dnsmasq-nanny-amd64"

dnsmasq_sidecar_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/k8s-dns-sidecar-amd64"

kubednsautoscaler_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/cluster-proportional-autoscaler-amd64"

vi roles/kubernetes-apps/ansible/defaults/main.yml

kubedns_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/k8s-dns-kube-dns-amd64"

dnsmasq_nanny_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/k8s-dns-dnsmasq-nanny-amd64"

dnsmasq_sidecar_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/k8s-dns-sidecar-amd64"

kubednsautoscaler_image_repo: "registry.cn-hangzhou.aliyuncs.com/linkcloud/cluster-proportional-autoscaler-amd64"

Deploy:

ansible-playbook -b -i inventory/inventory.cfg cluster.yml --flush-cache
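Once the playbook completes, the result can be checked from a master node, where kubectl is set up by the deploy. A minimal sketch (`verify_cluster` is a name I'm introducing):

```shell
# Basic post-deploy checks, run on any master node: all three nodes
# should report STATUS "Ready", and kube-system pods should be Running.
verify_cluster() {
  kubectl get nodes
  kubectl get pods -n kube-system
}
# verify_cluster
```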

