RHEL7/CentOS7: Building a Kubernetes Cluster from Binaries
Feel free to follow my WeChat official account 《pencil带你玩转Linux》 and reply "Linux学习资料" to get the video tutorials.
I. Deployment nodes
| Node | IP address | Hostname |
| --- | --- | --- |
| Master | 192.168.191.131 | k8s-masters01 |
| Node1 | 192.168.191.130 | k8s-node-01 |
| Node2 | 192.168.191.129 | k8s-node-02 |
II. Cluster components
1. Components to install on the Master node:
apiserver, scheduler, controller manager, etcd
Component descriptions:
| apiserver | The API server exposes the Kubernetes RESTful API and is the single entry point for all management commands. Every create, read, update or delete of a resource goes through the API server before being persisted to etcd. kubectl, the client tool shipped with Kubernetes, is essentially a wrapper around this API and talks directly to the API server. |
| scheduler | The scheduler assigns Pods to suitable Nodes. Viewed as a black box, its input is a Pod plus a list of Nodes and its output is a binding of that Pod to one Node. Kubernetes ships with a default scheduling algorithm and also exposes an interface so users can plug in their own. |
| controller manager | If the API server handles the front-of-house work, the controller manager takes care of the background. Each resource type has a corresponding controller, and the controller manager runs all of them. For example, once a Pod created through the API server exists, the API server's job is done; the controllers then keep driving the resource toward its desired state. |
| etcd | etcd is a highly available key-value store. Kubernetes uses it to persist the state of every resource exposed through the RESTful API. |
2. Components to install on the worker (Node) nodes:
kube-proxy, kubelet, flanneld, docker, etcd
Component descriptions:
| kube-proxy | Implements service discovery and reverse proxying in Kubernetes. kube-proxy forwards TCP and UDP connections and by default distributes client traffic across a Service's backend Pods using round robin. For service discovery it relies on etcd's watch mechanism to track changes to Service and Endpoint objects and maintains the Service-to-Endpoint mapping, so changes to backend Pod IPs are transparent to clients. It also supports session affinity. |
| kubelet | The kubelet is the master's agent on every Node and the most important component there. It maintains and manages all containers on the Node (containers not created through Kubernetes are ignored). In essence, it keeps each Pod's actual state in line with its desired state. |
| flanneld | Flannel is a commonly used layer-3 (network layer) overlay network for Kubernetes. It runs an agent called flanneld on every host, which allocates a subnet to that host from a preconfigured address space. Flannel stores the network configuration, the allocated subnets and auxiliary data (such as each host's public IP) either directly through the Kubernetes API or in etcd. Packets are forwarded using one of several backends, including VXLAN and various cloud integrations. |
| docker | Docker is the container runtime in which Kubernetes Pods run. |
III. Disable the firewall and SELinux on every node
1. Stop the firewall and disable it at boot:
[root@ k8s-masters01 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@ k8s-node01 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@ k8s-node02 ~]# systemctl stop firewalld && systemctl disable firewalld
2. Temporarily switch SELinux from enforcing to permissive mode:
[root@ k8s-masters01 ~]# setenforce 0
[root@ k8s-node01 ~]# setenforce 0
[root@ k8s-node02 ~]# setenforce 0
3. Edit /etc/selinux/config to disable SELinux permanently:
[root@ k8s-masters01 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@ k8s-node01 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@ k8s-node02 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
4. Turn off swap:
[root@ k8s-masters01 ~]#swapoff -a && sysctl -w vm.swappiness=0
[root@ k8s-node01 ~]#swapoff -a && sysctl -w vm.swappiness=0
[root@ k8s-node02 ~]#swapoff -a && sysctl -w vm.swappiness=0
5. Disable automatic mounting of the swap partition on every node:
[root@ k8s-masters01 ~]#vim /etc/fstab    # comment out the swap entry:
#UUID=a11cdd2a-eeed-4ed7-af6b-779094652766 swap swap defaults 0 0
6. Reload the fstab entries so the change takes effect:
mount -a
IV. Install Docker on node01 and node02
1. Install the required system tools:
[root@ k8s-node01 ~]#yum install -y yum-utils device-mapper-persistent-data lvm2
[root@ k8s-node02 ~]#yum install -y yum-utils device-mapper-persistent-data lvm2
2. Add the Docker yum repository:
[root@ k8s-node01 ~]#yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@ k8s-node02 ~]#yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
3. Refresh the yum cache:
[root@ k8s-node01 ~]#yum makecache fast
[root@ k8s-node02 ~]#yum makecache fast
4. Install docker-ce:
[root@ k8s-node01 ~]#yum -y install docker-ce
[root@ k8s-node02 ~]#yum -y install docker-ce
5. Start Docker and enable it at boot:
[root@ k8s-node01 ~]#systemctl start docker && systemctl enable docker
[root@ k8s-node02 ~]#systemctl start docker && systemctl enable docker
6. Configure a domestic registry mirror to speed up image pulls:
[root@ k8s-node01 ~]#vim /etc/docker/daemon.json
Add:
{
"registry-mirrors": ["http://hub-mirror.c.163.com"]
}
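Restart Docker on each node afterwards so the new daemon.json is picked up:
[root@ k8s-node01 ~]#systemctl restart docker
[root@ k8s-node02 ~]#systemctl restart docker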
V. Deploy the k8s cluster: directories and certificates
1. On every node, create the installation directories for etcd and kubernetes, e.g.:
[root@ k8s-masters01 ~]# mkdir /k8s/etcd/{bin,cfg,ssl} -p
[root@ k8s-masters01 ~]# mkdir /k8s/kubernetes/{bin,cfg,ssl} -p
2. On the master node, install the certificate generation tool cfssl:
[root@ k8s-masters01 ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
[root@ k8s-masters01 ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
[root@ k8s-masters01 ~]#wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
3. On the master node, move the downloaded binaries into the PATH and make them executable:
[root@ k8s-masters01 ~]#mv cfssl_linux-amd64 /usr/local/bin/cfssl
[root@ k8s-masters01 ~]#mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
[root@ k8s-masters01 ~]#mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
[root@ k8s-masters01 ~]#chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/bin/cfssl-certinfo
4. On the master node, export the certificate configuration templates.
Change into the etcd certificate directory:
[root@ k8s-masters01 ~]#cd /k8s/etcd/ssl
5. Export the default signing config template:
[root@ k8s-masters01 ssl]#cfssl print-defaults config > ca-config.json
6. Edit ca-config.json so that it reads:
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"www": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
-------------------------------------------------------------------------------------------------------
7. Export the default certificate signing request (CSR) template:
[root@ k8s-masters01ssl]#cfssl print-defaults csr > ca-csr.json
Edit ca-csr.json so that it reads:
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Guangdon",
"ST": "Shantou"
}
]
}
-------------------------------------------------------------------------------------------------------
8. Generate the CA certificate:
[root@ k8s-masters01 ssl]#cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
9. List /k8s/etcd/ssl; two new files have been generated:
ca-key.pem and ca.pem
10. Create the server certificate request (used to encrypt etcd client/peer traffic):
[root@ k8s-masters01 ssl]#vim server-csr.json
{
"CN": "etcd",
"hosts": [
"192.168.191.131",
"192.168.191.129",
"192.168.191.130"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Guangdon",
"ST": "Shantou"
}
]
}
-------------------------------------------------------------------------------------------------------
# hosts must list the IPs of all three nodes: master01, node01 and node02
11. Generate the server certificate and key:
[root@ k8s-masters01 ssl]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
12. Generate the kube-proxy certificate request:
[root@ k8s-masters01 ~]#vim kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
-------------------------------------------------------------------------------------------------------
[root@ k8s-masters01 ssl]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www kube-proxy-csr.json | cfssljson -bare kube-proxy
13. Copy the generated ca, server and kube-proxy pem files to the other two nodes:
[root@ k8s-masters01 ssl]#scp server*pem ca*pem kube-proxy*pem 192.168.191.129:/k8s/etcd/ssl/
[root@ k8s-masters01 ssl]#scp server*pem ca*pem kube-proxy*pem 192.168.191.130:/k8s/etcd/ssl/
VI. Deploy etcd on every node
1. Download the binary package:
[root@ k8s-masters01 ~]#wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
2. Unpack the archive:
[root@ k8s-masters01 ~]#tar -xzvf etcd-v3.3.10-linux-amd64.tar.gz
3. Copy the extracted etcd and etcdctl binaries to /k8s/etcd/bin/:
[root@ k8s-masters01 ~]#cd etcd-v3.3.10-linux-amd64 && cp etcd etcdctl /k8s/etcd/bin/
4. Copy the etcd and etcdctl binaries to node01 and node02 as well:
[root@ k8s-masters01 ~]#scp etcd etcdctl 192.168.191.130:/k8s/etcd/bin/
[root@ k8s-masters01 ~]#scp etcd etcdctl 192.168.191.129:/k8s/etcd/bin/
5. Create the etcd configuration file:
[root@ k8s-masters01 ~]#vim /k8s/kubernetes/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.191.131:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.191.131:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.191.131:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.191.131:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.191.131:2380,etcd02=https://192.168.191.129:2380,etcd03=https://192.168.191.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
-------------------------------------------------------------------------------------------------------
Parameter descriptions:
| ETCD_NAME | node name |
| ETCD_DATA_DIR | data directory |
| ETCD_LISTEN_PEER_URLS | listen address for cluster (peer) traffic |
| ETCD_LISTEN_CLIENT_URLS | listen address for client traffic |
| ETCD_INITIAL_ADVERTISE_PEER_URLS | peer address advertised to the cluster |
| ETCD_ADVERTISE_CLIENT_URLS | client address advertised to the cluster |
| ETCD_INITIAL_CLUSTER | addresses of all cluster members |
| ETCD_INITIAL_CLUSTER_TOKEN | cluster token |
| ETCD_INITIAL_CLUSTER_STATE | state when joining the cluster: "new" for a new cluster, "existing" to join an existing one |
6. Copy /k8s/kubernetes/cfg/etcd to node01 and node02 and change the IPs in the ETCD_LISTEN_PEER_URLS, ETCD_LISTEN_CLIENT_URLS, ETCD_INITIAL_ADVERTISE_PEER_URLS and ETCD_ADVERTISE_CLIENT_URLS fields.
Note: in each node's /k8s/kubernetes/cfg/etcd, ETCD_NAME must be a unique node name, and ETCD_LISTEN_PEER_URLS, ETCD_LISTEN_CLIENT_URLS, ETCD_INITIAL_ADVERTISE_PEER_URLS and ETCD_ADVERTISE_CLIENT_URLS must use that node's own IP, as in the example below.
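For example (the per-node files are not shown in the original text), copying the file to node02 (192.168.191.129, which is etcd02 in ETCD_INITIAL_CLUSTER above) and adjusting it would give:
[root@ k8s-masters01 ~]#scp /k8s/kubernetes/cfg/etcd 192.168.191.129:/k8s/kubernetes/cfg/
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.191.129:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.191.129:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.191.129:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.191.129:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.191.131:2380,etcd02=https://192.168.191.129:2380,etcd03=https://192.168.191.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
-------------------------------------------------------------------------------------------------------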
7. Create the systemd unit for etcd:
[root@ k8s-masters01 ~]#vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/etcd
ExecStart=/k8s/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/k8s/etcd/ssl/server.pem \
--key-file=/k8s/etcd/ssl/server-key.pem \
--peer-cert-file=/k8s/etcd/ssl/server.pem \
--peer-key-file=/k8s/etcd/ssl/server-key.pem \
--trusted-ca-file=/k8s/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/k8s/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
-------------------------------------------------------------------------------------------------------
8. Copy /usr/lib/systemd/system/etcd.service to the other nodes:
[root@ k8s-masters01 ~]#scp /usr/lib/systemd/system/etcd.service 192.168.191.130:/usr/lib/systemd/system/
[root@ k8s-masters01 ~]#scp /usr/lib/systemd/system/etcd.service 192.168.191.129:/usr/lib/systemd/system/
9. Once the above configuration is in place on every node, start etcd:
[root@ k8s-masters01 ~]# systemctl daemon-reload
[root@ k8s-masters01 ~]#systemctl start etcd
[root@ k8s-masters01 ~]#systemctl enable etcd
10. After deployment, check the etcd cluster health on each node.
Change into the etcd certificate directory:
[root@ k8s-masters01 ~]# cd /k8s/etcd/ssl
[root@ k8s-masters01 ssl]# /k8s/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.191.131:2379,https://192.168.191.130:2379,https://192.168.191.129:2379" cluster-health
VII. Deploy the Flannel network on node01 and node02
1. Flannel stores its own subnet information in etcd, so it must be able to reach the etcd cluster. Write the predefined subnet range into etcd with the following command (shown here on node01; the key is shared by the whole cluster):
[root@ k8s-node01 ssl]# /k8s/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.191.129:2379,https://192.168.191.130:2379,https://192.168.191.131:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
Note: this sets a key/value pair in etcd. The key is /coreos.com/network/config and the value is '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'. The Network range is the overlay address space from which flannel allocates each node's subnet; docker0 on each node will later be given an address from that range.
2. On node01 and node02, create the systemd unit for flanneld (the options file it reads is sketched after the unit):
[root@ k8s-node01 ~]#vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
-------------------------------------------------------------------------------------------------------
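The unit above reads $FLANNEL_OPTIONS from /k8s/kubernetes/cfg/flanneld and expects the flanneld and mk-docker-opts.sh programs (shipped in the flannel release tarball, e.g. flannel-v0.10.0-linux-amd64.tar.gz) to be present in /k8s/kubernetes/bin. The options file is not listed in the original text; a minimal sketch that reuses the etcd certificates generated earlier would be:
[root@ k8s-node01 ~]#vim /k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.191.131:2379,https://192.168.191.130:2379,https://192.168.191.129:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
-------------------------------------------------------------------------------------------------------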
3. Start flanneld.service:
[root@ k8s-node01 ~]# systemctl daemon-reload
[root@ k8s- node01 ~]#systemctl start flanneld.service
[root@ k8s- node01 ~]#systemctl enable flanneld.service
4. After it has started, check the network configuration:
Run ifconfig or ip addr show to inspect the network configured by flannel; you will see that a flannel.1 virtual interface has been created.
Example:
[root@ k8s- node01 ~]#ifconfig
5. Look at /run/flannel/subnet.env; it now contains the generated subnet variables:
You can see that flannel has allocated this node a subnet that is unique within the cluster.
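For example (the variable names are the ones flanneld writes; the actual values differ per node and are shown here only as an illustration):
[root@ k8s-node01 ~]#cat /run/flannel/subnet.env
FLANNEL_NETWORK=172.17.0.0/16
FLANNEL_SUBNET=172.17.38.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true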
6. Retrieve the network configuration that flannel stored in etcd.
Change into the /k8s/etcd/ssl certificate directory:
[root@ k8s-node01 ~]# cd /k8s/etcd/ssl
[root@ k8s-node01 ssl]# /k8s/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.191.129:2379,https://192.168.191.130:2379,https://192.168.191.131:2379" get /coreos.com/network/config
7. Retrieve the flannel subnet allocations (routing information).
Change into the /k8s/etcd/ssl certificate directory:
[root@ k8s-node01 ~]# cd /k8s/etcd/ssl
[root@ k8s-node01 ssl]# /k8s/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.191.129:2379,https://192.168.191.130:2379,https://192.168.191.131:2379" ls /coreos.com/network/subnets
8. Inspect a specific subnet key, for example 172.17.38.0-24:
[root@ k8s-node01 ~]# /k8s/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.191.129:2379,https://192.168.191.130:2379,https://192.168.191.131:2379" get /coreos.com/network/subnets/172.17.38.0-24
9. On node01 and node02, modify docker.service so Docker uses the flannel network:
[root@ k8s-node01 ~]# vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
-------------------------------------------------------------------------------------------------------
10. Reload the unit files:
[root@ k8s-node01 ~]# systemctl daemon-reload
11. Restart Docker:
[root@ k8s-node01 ~]# systemctl restart docker
VIII. Deploy the components on the master node
Note: before deploying the Kubernetes components, make sure etcd, flannel and docker are all working correctly; fix any problems first, then continue.
The following steps are carried out on the master node:
1. Change into /k8s/kubernetes/ssl:
[root@ k8s-masters01 ~]#cd /k8s/kubernetes/ssl
2. Create the CA certificate.
Signing config template:
[root@ k8s-masters01 ssl]#vim ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
CA certificate signing request:
[root@ k8s-masters01 ssl]#vim ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Guangdon",
"ST": "Shantou",
"O": "k8s",
"OU": "System"
}
]
}
-------------------------------------------------------------------------------------------------------
3. Generate the CA certificate:
[root@ k8s-masters01 ssl]#cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
4. Create the apiserver certificate request:
[root@ k8s-masters01 ssl]#vim server-csr.json
{
"CN": "kubernetes",
"hosts": [
"10.0.0.1",
"127.0.0.1",
"192.168.191.131",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Guangdon",
"ST": "Shantou",
"O": "k8s",
"OU": "System"
}
]
}
-------------------------------------------------------------------------------------------------------
5. Generate the apiserver certificate:
[root@ k8s-masters01 ssl]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
6. Create the kube-proxy certificate request:
vim kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Guangdon",
"ST": "Shantou",
"O": "k8s",
"OU": "System"
}
]
}
-------------------------------------------------------------------------------------------------------
7. Generate the kube-proxy certificate:
[root@ k8s-masters01 ssl]#cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
8. Download the kubernetes-server-linux-amd64.tar.gz binary package (the download links are listed in the Kubernetes v1.13 CHANGELOG on GitHub):
[root@ k8s-masters01 ~]# wget https://dl.k8s.io/v1.13.0/kubernetes-server-linux-amd64.tar.gz
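The archive then has to be unpacked and the master binaries copied into /k8s/kubernetes/bin (this step is implied but not shown in the original text; the paths assume the tarball was downloaded to /root):
[root@ k8s-masters01 ~]#tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@ k8s-masters01 ~]#cp kubernetes/server/bin/{kube-apiserver,kube-scheduler,kube-controller-manager,kubectl} /k8s/kubernetes/bin/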
9. Generate a random token string:
[root@ k8s-masters01 ~]#head -c 16 /dev/urandom | od -An -t x | tr -d ' '
86bad53a3628b2b371fb7ee4b86ed1c4
10. Create the TLS bootstrapping token file:
[root@ k8s-masters01 ~]#vim /k8s/kubernetes/cfg/token.csv
86bad53a3628b2b371fb7ee4b86ed1c4,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
------------------------------------------------------------------------------------------------------
token.csv fields:
| 86bad53a3628b2b371fb7ee4b86ed1c4 | the random string generated above |
| kubelet-bootstrap | user name used to access the cluster |
| 10001 | user ID (uid) |
| system:kubelet-bootstrap | cluster user group |
11. Create the apiserver configuration file:
[root@ k8s-masters01 ~]#vim /k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.191.131:2379,https://192.168.191.130:2379,https://192.168.191.129:2379 \
--bind-address=192.168.191.131 \
--secure-port=6443 \
--advertise-address=192.168.191.131 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
-------------------------------------------------------------------------------------------------------
Parameter descriptions:
| --logtostderr | enable logging to stderr |
| --v | log verbosity level |
| --etcd-servers | etcd cluster endpoints |
| --bind-address | listen address |
| --secure-port | https secure port |
| --advertise-address | address advertised to the cluster |
| --allow-privileged | allow privileged containers |
| --service-cluster-ip-range | virtual IP range for Services |
| --enable-admission-plugins | admission control plugins |
| --authorization-mode | authorization mode; enables RBAC and Node authorization |
| --enable-bootstrap-token-auth | enable TLS bootstrapping |
| --token-auth-file | token file |
| --service-node-port-range | NodePort range allocated to Services |
12. Create the kube-apiserver systemd unit:
[root@ k8s-masters01 ~]#vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
-------------------------------------------------------------------------------------------------------
13. Reload the unit files:
[root@ k8s-masters01 ~]#systemctl daemon-reload
14. Start the kube-apiserver service and enable it at boot:
[root@ k8s-masters01 ~]#systemctl start kube-apiserver
[root@ k8s-masters01 ~]#systemctl enable kube-apiserver
15. Check that the apiserver is running:
[root@ k8s-masters01 ~]#ps -ef | grep kube-apiserver
16. Deploy kube-scheduler.
Create the kube-scheduler configuration file:
[root@ k8s-masters01 ~]#vim /k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
-------------------------------------------------------------------------------------------------------
Note: --leader-elect=true enables leader election for clustered operation; the instance elected as leader does the work while the other instances stay blocked on standby.
17. Create the kube-scheduler systemd unit:
[root@ k8s-masters01 ~]#vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
-------------------------------------------------------------------------------------------------------
18. Reload the unit files:
[root@ k8s-masters01 ~]#systemctl daemon-reload
19. Start the kube-scheduler service and enable it at boot:
[root@ k8s-masters01 ~]#systemctl enable kube-scheduler
[root@ k8s-masters01 ~]#systemctl start kube-scheduler
Check that kube-scheduler is running:
[root@ k8s-masters01 ~]#ps -ef | grep kube-scheduler
20. Create the kube-controller-manager configuration file:
[root@ k8s-masters01 ~]#vim /k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
21. Create the kube-controller-manager systemd unit:
vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
-------------------------------------------------------------------------------------------------------
22. Reload the unit files:
[root@ k8s-masters01 ~]#systemctl daemon-reload
23. Start the kube-controller-manager service and enable it at boot:
[root@ k8s-masters01 ~]#systemctl enable kube-controller-manager
[root@ k8s-masters01 ~]#systemctl start kube-controller-manager
24. Check that kube-controller-manager is running:
[root@ k8s-masters01 ~]#ps -ef | grep kube-controller-manager
25. Add the executable path /k8s/kubernetes/bin to the PATH variable:
[root@ k8s-masters01 ~]#vim /etc/profile
PATH=/k8s/kubernetes/bin:$PATH:$HOME/bin
[root@ k8s-masters01 ~]#source /etc/profile
26. Check the component status from the master:
[root@ k8s-masters01 ~]# kubectl get cs,nodes
NAME STATUS MESSAGE ERROR
componentstatus/controller-manager Healthy ok
componentstatus/scheduler Healthy ok
componentstatus/etcd-2 Healthy {"health":"true"}
componentstatus/etcd-1 Healthy {"health":"true"}
componentstatus/etcd-0 Healthy {"health":"true"}
IX. Deploy the node components
1. On the master node, change into the kubernetes directory extracted from the kubernetes-server-linux-amd64.tar.gz package downloaded earlier:
[root@ k8s-masters01 ~]# cd /root/kubernetes/server/bin
2. Copy kubelet and kube-proxy to /k8s/kubernetes/bin:
[root@ k8s-masters01 bin]# cp kubelet kube-proxy /k8s/kubernetes/bin
3. Copy the two binaries to the other node machines:
[root@ k8s-masters01 bin]#scp kubelet kube-proxy 192.168.191.130:/k8s/kubernetes/bin
[root@ k8s-masters01 bin]#scp kubelet kube-proxy 192.168.191.129:/k8s/kubernetes/bin
4. On the master node, create the kubelet bootstrapping kubeconfig.
Set the BOOTSTRAP_TOKEN shell variable:
[root@ k8s-masters01 ~]#BOOTSTRAP_TOKEN=86bad53a3628b2b371fb7ee4b86ed1c4   # the random string generated earlier
Set the KUBE_APISERVER shell variable:
[root@ k8s-masters01 ~]#KUBE_APISERVER="https://192.168.191.131:6443"
5. Change into /k8s/kubernetes/ssl:
[root@ k8s-masters01 bin]#cd /k8s/kubernetes/ssl
6. Create the kubelet bootstrap kubeconfig file.
1. Set the cluster parameters:
[root@ k8s-masters01ssl]#kubectl config set-cluster kubernetes --certificate-authority=./ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=bootstrap.kubeconfig
2. Set the client credentials:
[root@ k8s-masters01ssl]#kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=bootstrap.kubeconfig
3. Set the context parameters:
[root@ k8s-masters01ssl]#kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig
4. Switch to the default context:
[root@ k8s-masters01ssl]#kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
5. Create the kube-proxy kubeconfig file:
[root@ k8s-masters01ssl]#kubectl config set-cluster kubernetes --certificate-authority=./ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kube-proxy.kubeconfig
[root@ k8s-masters01ssl]#kubectl config set-credentials kube-proxy --client-certificate=./kube-proxy.pem --client-key=./kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig
[root@ k8s-masters01ssl]#kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig
[root@ k8s-masters01ssl]#kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
6. Copy the generated bootstrap.kubeconfig and kube-proxy.kubeconfig files to /k8s/kubernetes/cfg on this node and to /k8s/kubernetes/cfg on the other nodes:
[root@ k8s-masters01ssl]#cp bootstrap.kubeconfig kube-proxy.kubeconfig /k8s/kubernetes/cfg/
[root@ k8s-masters01ssl]#scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.191.129:/k8s/kubernetes/cfg/
[root@ k8s-masters01ssl]#scp bootstrap.kubeconfig kube-proxy.kubeconfig 192.168.191.130:/k8s/kubernetes/cfg/
7. On every node, create the following files.
1. Create the kubelet config file:
[root@ k8s-node01 ~]#vim /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.191.129   # set to the node's own IP
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
--------------------------------------------------------------------------------------
# change address: to the IP of the node being configured
2. Create the kubelet options file:
[root@ k8s-node01 ~]#vim /k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.191.129 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
-------------------------------------------------------------------------------------------------------
Note: change --hostname-override=192.168.191.129 to the IP of the node being configured.
3. Create the kubelet systemd unit:
[root@ k8s-node01 ~]#vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
-------------------------------------------------------------------------------------------------------
4. Bind the kubelet-bootstrap user to the system cluster role; run this on the master:
[root@ k8s-master01 ~]#kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
5. Start the kubelet service on node01 and node02.
6. Reload the unit files:
[root@ k8s-node01 ~]#systemctl daemon-reload
7. Start the kubelet service and enable it at boot:
[root@ k8s-node01 ~]#systemctl enable kubelet
[root@ k8s-node01 ~]#systemctl start kubelet
15. On the master node, list the certificate signing requests (CSRs):
[root@master01 ~]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs 39m kubelet-bootstrap Pending
node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s 5m5s kubelet-bootstrap Pending
16. On the master node, manually approve the CSR requests:
[root@master01 ~]# kubectl certificate approve node-csr-An1VRgJ7FEMMF_uyy6iPjyF5ahuLx6tJMbk2SMthwLs
[root@master01 ~]# kubectl certificate approve node-csr-dWPIyP_vD1w5gBS4iTZ6V5SJwbrdMx05YyybmbW3U5s
17. On the master node, check the current cluster state:
[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.191.129 Ready node 4h54m v1.13.0
192.168.191.130 Ready node 5h9m v1.13.0
192.168.191.131 Ready master 5h9m v1.13.0
18. On node01 and node02, create the kube-proxy configuration file:
[root@node01 ~]#vim /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.191.131 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
-------------------------------------------------------------------------------------------------------
Note: set --hostname-override to the node's own IP.
19. On node01 and node02, create the kube-proxy systemd unit:
[root@node01 ~]#vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
-------------------------------------------------------------------------------------------------------
20. Reload the unit files:
[root@node01 ~]#systemctl daemon-reload
21. Start the kube-proxy service and enable it at boot:
[root@node01 ~]#systemctl start kube-proxy
[root@node01 ~]#systemctl enable kube-proxy
22. On the master node, label the cluster members as master or node:
[root@master01 ~]# kubectl label node 192.168.191.131 node-role.kubernetes.io/master='master'
[root@master01 ~]# kubectl label node 192.168.191.130 node-role.kubernetes.io/node='node'
[root@master01 ~]# kubectl label node 192.168.191.129 node-role.kubernetes.io/node='node'
23. On the master node, check the cluster status:
[root@master01 ~]# kubectl get node,cs
NAME STATUS ROLES AGE VERSION
node/192.168.191.129 Ready node 5h10m v1.13.0
node/192.168.191.130 Ready node 5h25m v1.13.0
node/192.168.191.131 Ready master 5h25m v1.13.0
NAME STATUS MESSAGE ERROR
componentstatus/scheduler Healthy ok
componentstatus/etcd-2 Healthy {"health":"true"}
componentstatus/etcd-0 Healthy {"health":"true"}
componentstatus/etcd-1 Healthy {"health":"true"}
componentstatus/controller-manager Healthy ok
At this point, the Kubernetes cluster built from binaries is complete!
X. Create a private Docker registry on the LAN
A private LAN registry is used to store locally built Docker images.
Set it up on a server that already has docker-ce installed, e.g. 192.168.191.132.
- Pull the registry image:
[root@yyde ~]# docker pull registry
- Run the local registry:
[root@yyde ~]#mkdir -p /data/docker-imager
[root@yyde ~]# docker run -d -p 5000:5000 --restart=always --name registry -v /data/docker-imager/:/var/lib/registry registry:latest
Parameter descriptions:
| -d | run the container as a background daemon |
| -p | port mapping in the form port1:port2, where port1 is the port the host listens on and port2 is the port the program inside the container listens on |
| --restart=always | restart the container together with the Docker service, keeping the registry data |
| --name | container name |
| -v | mount the host directory /data/docker-imager/ into the container at /var/lib/registry |
- Check that the container started.
Run docker ps -a to verify that the registry container is running:
[root@yyde ~]#docker ps -a
- Configure Docker on node01 and node02 so they can reach the private registry at 192.168.191.132:
Edit /etc/docker/daemon.json on node01 and node02 and add "insecure-registries": ["192.168.191.132:5000"], for example on node01 (see the sketch below):
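A sketch of what the resulting file could look like, assuming the registry mirror from section IV is kept:
[root@node01 ~]#vim /etc/docker/daemon.json
{
"registry-mirrors": ["http://hub-mirror.c.163.com"],
"insecure-registries": ["192.168.191.132:5000"]
}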
- Restart Docker:
[root@node01 ~]#systemctl restart docker
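To use the registry, a locally built image can then be tagged with the registry address and pushed; the image name myapp:v1 below is only a placeholder for illustration:
[root@node01 ~]#docker tag myapp:v1 192.168.191.132:5000/myapp:v1
[root@node01 ~]#docker push 192.168.191.132:5000/myapp:v1
[root@node02 ~]#docker pull 192.168.191.132:5000/myapp:v1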