Deploying a Kubernetes Cluster on CentOS 7 VMs
Set up Kubernetes + Flannel on three OpenStack VMs.
1. Environment Preparation
1.1 Servers:
IP            HostName    Role
172.28.43.13  k8s-master  Master, etcd, registry
172.28.43.16  k8s-node1   Node-1
172.28.43.18  k8s-node2   Node-2
1.2 On all three servers, edit the /etc/hosts file:
172.28.43.16 k8s-node1
172.28.43.18 k8s-node2
172.28.43.13 k8s-master
172.28.43.13 etcd
172.28.43.13 registry
1.3 On all three servers, disable the firewall
Run the following commands:
systemctl disable firewalld.service
systemctl stop firewalld.service
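A quick way to confirm firewalld is really off (this should print "not running"):
firewall-cmd --state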
2. Install and Deploy etcd
2.1 Install etcd
Run on each of the three servers:
yum install etcd -y
2.2 Configure etcd
On each server, edit /etc/etcd/etcd.conf. On the master the modified entries look like this (the per-node differences are shown after the block):
[root@k8s-master etcd]# egrep -v "^#" /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
ETCD_NAME="k8s-master"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://k8s-master:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://k8s-master:2379,http://k8s-master:4001"
ETCD_INITIAL_CLUSTER="k8s-master=http://k8s-master:2380,k8s-node1=http://k8s-node1:2380,k8s-node2=http://k8s-node2:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
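The file above is for k8s-master. On each node, ETCD_NAME and the advertise URLs must point at that node itself; a sketch of the entries that differ on k8s-node1 (k8s-node2 is analogous):
ETCD_NAME="k8s-node1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://k8s-node1:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://k8s-node1:2379,http://k8s-node1:4001"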
Once all three servers are configured, start the etcd service:
systemctl restart etcd
2.3 Verify the etcd cluster:
[root@k8s-master ~]# etcdctl member list
6c74f3fd7534bb5: name=k8s-node1 peerURLs=http://k8s-node1:2380 clientURLs=http://k8s-node1:2379,http://k8s-node1:4001 isLeader=true
a57a17f287dbe7bb: name=k8s-node2 peerURLs=http://k8s-node2:2380 clientURLs=http://k8s-node2:2379,http://k8s-node2:4001 isLeader=false
ffe21a7812eb7c5f: name=k8s-master peerURLs=http://k8s-master:2380 clientURLs=http://k8s-master:2379,http://k8s-master:4001 isLeader=false
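Overall cluster health can be double-checked with the same v2 etcdctl the commands above use:
etcdctl cluster-health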
3. Install and Configure Docker
3.1 Install Docker
Run on all three servers:
yum install docker -y
3.2 Configure Docker
Edit /etc/sysconfig/docker and add the following line:
OPTIONS='--insecure-registry registry:5000'
3.3 Start Docker
Enable the service at boot and start it:
chkconfig docker on
service docker start
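To confirm the insecure-registry option took effect (Docker 1.12+, as shipped with CentOS 7, lists it in docker info):
docker info | grep -A 2 -i 'insecure registries'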
4. Install and Configure Kubernetes
4.1 Install Kubernetes
Run on all three servers:
yum install kubernetes -y
4.2 Configure Kubernetes:
Configuration is split between the master and node roles:
4.2.1 Configure the Kubernetes master server as follows:
The Kubernetes master needs to run the following components:
* Kubernetes API Server
* Kubernetes Controller Manager
* Kubernetes Scheduler
Edit /etc/kubernetes/apiserver and change the following lines to:
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
Edit /etc/kubernetes/config and change the following line to:
KUBE_MASTER="--master=http://k8s-master:8080"
4.2.2 Enable the services at boot and start them:
systemctl enable kube-apiserver.service
systemctl start kube-apiserver.service
systemctl enable kube-controller-manager.service
systemctl start kube-controller-manager.service
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service
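Before configuring the nodes, a quick sanity check against the API server's insecure port (it should return ok):
curl http://k8s-master:8080/healthz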
4.2.3 Configure the Kubernetes nodes as follows:
Each Kubernetes node needs to run the following components:
* Kubelet
* Kubernetes Proxy
Edit /etc/kubernetes/config and change the following line:
KUBE_MASTER="--master=http://k8s-master:8080"
Edit /etc/kubernetes/kubelet and change the following lines (shown for k8s-node1; the k8s-node2 difference is noted after the block):
KUBELET_ADDRESS="--address=0.0.0.0"
# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=k8s-node1"
# location of the api-server
KUBELET_API_SERVER="--api-servers=http://k8s-master:8080"
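On k8s-node2 the only line that differs is the hostname override:
KUBELET_HOSTNAME="--hostname-override=k8s-node2"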
4.2.4 Enable the services at boot and start them:
systemctl enable kubelet.service
systemctl start kubelet.service
systemctl enable kube-proxy.service
systemctl start kube-proxy.service
4.3 Verify the Kubernetes cluster
Run the following commands on k8s-master; output like the following means the Kubernetes cluster is up and running.
[root@k8s-master ~]# kubectl -s http://k8s-master:8080 get node
NAME STATUS AGE
k8s-node1 Ready 36s
k8s-node2 Ready 21s
[root@k8s-master ~]# kubectl get nodes
NAME STATUS AGE
k8s-node1 Ready 47s
k8s-node2 Ready 32s
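Control-plane component health can optionally be checked as well (output omitted here):
[root@k8s-master ~]# kubectl get componentstatuses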
5. Install and Configure Flannel
5.1 Install Flannel
Run on the master and both nodes:
yum install -y flannel
5.2 Configure Flannel
On the master and both nodes, edit /etc/sysconfig/flanneld and change the following line:
FLANNEL_ETCD_ENDPOINTS=http://etcd:2379
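For reference, the prefix setting in the same file, which the note below refers to (this is the package default; change it only together with the etcd key):
FLANNEL_ETCD_PREFIX="/atomic.io/network"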
Note: Flannel stores its configuration in etcd so that all Flannel instances stay consistent, which means the configuration key must be created in etcd first. (The '/atomic.io/network/config' key corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if the two do not match, flanneld will fail to start.)
5.3 Add the network:
Run the following command:
etcdctl mk /atomic.io/network/config '{ "Network": "192.168.0.0/16" }'
Note: the network you add must not overlap with the network that eth0 is on.
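To confirm the key was written:
etcdctl get /atomic.io/network/config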
5.4 Start Flannel:
After starting Flannel, Docker and the Kubernetes services need to be restarted in order.
5.4.1 On the master, run:
systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service
5.4.2 On each node, run:
systemctl enable flanneld.service
systemctl start flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
5.5 Verify
If the flannel0 and docker0 interfaces both appear with addresses inside the flannel network, the setup is almost certainly working.
5.5.1 On the Kubernetes master:
[root@k8s-master ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether fa:16:3e:bf:15:1c brd ff:ff:ff:ff:ff:ff
inet 172.28.43.13/24 brd 172.28.43.255 scope global dynamic eth0
valid_lft 77879sec preferred_lft 77879sec
inet6 fe80::f816:3eff:febf:151c/64 scope link
valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:b3:1a:55:dc brd ff:ff:ff:ff:ff:ff
inet 192.168.37.1/24 scope global docker0
valid_lft forever preferred_lft forever
6: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
link/none
inet 192.168.37.0/16 scope global flannel0
valid_lft forever preferred_lft forever
5.5.2 On a Kubernetes node:
[root@k8s-node1 ~]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether fa:16:3e:ab:df:76 brd ff:ff:ff:ff:ff:ff
inet 172.28.43.16/24 brd 172.28.43.255 scope global dynamic eth0
valid_lft 77699sec preferred_lft 77699sec
inet6 fe80::f816:3eff:feab:df76/64 scope link
valid_lft forever preferred_lft forever
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
link/none
inet 192.168.24.0/16 scope global flannel0
valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
link/ether 02:42:e3:54:9f:bb brd ff:ff:ff:ff:ff:ff
inet 192.168.24.1/24 scope global docker0
valid_lft forever preferred_lft forever
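As a final cross-host check, the flannel overlay can be exercised by pinging one node's docker0 address from the master (addresses taken from the output above):
[root@k8s-master ~]# ping -c 3 192.168.24.1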
[End]