Kubernetes 1.13.4 + etcd 3.3.12 + flanneld 0.11 Cluster Deployment
1. Download Links
Client Binaries
https://dl.k8s.io/v1.13.4/kubernetes-client-linux-amd64.tar.gz
Server Binaries
https://dl.k8s.io/v1.13.4/kubernetes-server-linux-amd64.tar.gz
Node Binaries
https://dl.k8s.io/v1.13.4/kubernetes-node-linux-amd64.tar.gz
etcd
https://github.com/etcd-io/etcd/releases/download/v3.3.12/etcd-v3.3.12-linux-amd64.tar.gz
flannel
https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
2. Role Assignment
ES01  192.168.156.33  k8s-master  etcd, kube-apiserver, kube-controller-manager, kube-scheduler, flanneld
ES02  192.168.156.34  k8s-node    etcd, kubelet, docker, kube-proxy, flanneld
ES03  192.168.156.35  k8s-node    etcd, kubelet, docker, kube-proxy, flanneld
ES04  192.168.156.36  k8s-node    etcd, kubelet, docker, kube-proxy, flanneld
ES05  192.168.156.37  k8s-node    etcd, kubelet, docker, kube-proxy, flanneld
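The steps below refer to the machines by hostname as well as IP. If the hostnames are not already resolvable, a minimal /etc/hosts sketch (an assumption about the environment, not part of the original setup) can be appended on every node:
# /etc/hosts -- hypothetical helper, adjust to your environment
192.168.156.33 ES01
192.168.156.34 ES02
192.168.156.35 ES03
192.168.156.36 ES04
192.168.156.37 ES05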
3. Master Deployment
3.1 Download the packages
wget https://dl.k8s.io/v1.13.4/kubernetes-server-linux-amd64.tar.gz
wget https://dl.k8s.io/v1.13.4/kubernetes-client-linux-amd64.tar.gz
wget https://github.com/etcd-io/etcd/releases/download/v3.3.12/etcd-v3.3.12-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
3.2 Install cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
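An optional quick check that the tools are installed and on the PATH:
cfssl version
which cfssljson cfssl-certinfo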
3.3 Create the etcd certificates
mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p
cd /k8s/etcd/ssl/
1) etcd CA config
vi ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"etcd": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
2) etcd CA certificate
vi ca-csr.json
{
"CN": "etcd CA",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
3) etcd server certificate
vi server-csr.json
{
"CN": "etcd",
"hosts": [
"192.168.156.33",
"192.168.156.34",
"192.168.156.35",
"192.168.156.36",
"192.168.156.37"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing"
}
]
}
4) Generate the etcd CA certificate and private key (initialize the CA)
[root@ES01 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/03/01 15:22:34 [INFO] generating a new CA key and certificate from CSR
2019/03/01 15:22:34 [INFO] generate received request
2019/03/01 15:22:34 [INFO] received CSR
2019/03/01 15:22:34 [INFO] generating key: rsa-2048
2019/03/01 15:22:34 [INFO] encoded CSR
2019/03/01 15:22:34 [INFO] signed certificate with serial number 296374334905376850047719544568708057151560029443
[root@ES01 ssl]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server-csr.json
Generate the server certificate:
[root@ES01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
2019/03/01 15:24:21 [INFO] generate received request
2019/03/01 15:24:21 [INFO] received CSR
2019/03/01 15:24:21 [INFO] generating key: rsa-2048
2019/03/01 15:24:21 [INFO] encoded CSR
2019/03/01 15:24:21 [INFO] signed certificate with serial number 562877552757774590619270518994125302238569919195
2019/03/01 15:24:21 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@ES01 ssl]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server.csr server-csr.json server-key.pem server.pem
3.4 Install etcd
1) Unpack
tar -xvf etcd-v3.3.12-linux-amd64.tar.gz
cd etcd-v3.3.12-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
Copy the binaries to each of the other nodes:
[root@ES01 bin]# scp /k8s/etcd/bin/etcd etcdctl root@192.168.156.34:/k8s/etcd/bin/
root@192.168.156.34's password:
etcd 100% 18MB 18.4MB/s 00:00
etcdctl 100% 15MB 15.1MB/s 00:00
[root@ES01 bin]# scp /k8s/etcd/bin/etcd etcdctl root@192.168.156.35:/k8s/etcd/bin/
root@192.168.156.35's password:
etcd 100% 18MB 18.4MB/s 00:00
etcdctl 100% 15MB 15.1MB/s 00:00
[root@ES01 bin]# scp /k8s/etcd/bin/etcd etcdctl root@192.168.156.36:/k8s/etcd/bin/
root@192.168.156.36's password:
etcd 100% 18MB 18.4MB/s 00:00
etcdctl 100% 15MB 15.1MB/s 00:00
[root@ES01 bin]# scp /k8s/etcd/bin/etcd etcdctl root@192.168.156.37:/k8s/etcd/bin/
root@192.168.156.37's password:
etcd 100% 18MB 18.4MB/s 00:00
etcdctl 100% 15MB 15.1MB/s 00:01
[root@ES01 bin]#
2) etcd main configuration file (it must be adjusted on each node)
vi /k8s/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="ES01"
ETCD_DATA_DIR="/data1/etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.156.33:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.156.33:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.156.33:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.156.33:2379"
ETCD_INITIAL_CLUSTER="ES01=https://192.168.156.33:2380,ES02=https://192.168.156.34:2380,ES03=https://192.168.156.35:2380,ES04=https://192.168.156.36:2380,ES05=https://192.168.156.37:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
#[Security]
ETCD_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
3) etcd systemd unit file
mkdir /data1/etcd
vi /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
WorkingDirectory=/data1/etcd/
EnvironmentFile=-/k8s/etcd/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /k8s/etcd/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\" --cert-file=\"${ETCD_CERT_FILE}\" --key-file=\"${ETCD_KEY_FILE}\" --trusted-ca-file=\"${ETCD_TRUSTED_CA_FILE}\" --client-cert-auth=\"${ETCD_CLIENT_CERT_AUTH}\" --peer-cert-file=\"${ETCD_PEER_CERT_FILE}\" --peer-key-file=\"${ETCD_PEER_KEY_FILE}\" --peer-trusted-ca-file=\"${ETCD_PEER_TRUSTED_CA_FILE}\" --peer-client-cert-auth=\"${ETCD_PEER_CLIENT_CERT_AUTH}\""
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
On ES01, copy cfssl, cfssljson and cfssl-certinfo to the remaining nodes (ES02, ES03, ES04, ES05):
[root@ES01 data1]# scp /usr/local/bin/cfssl root@192.168.156.34:/usr/local/bin/cfssl
root@192.168.156.34's password:
cfssl 100% 10MB 9.9MB/s 00:01
[root@ES01 data1]# scp /usr/local/bin/cfssl root@192.168.156.35:/usr/local/bin/cfssl
root@192.168.156.35's password:
cfssl 100% 10MB 9.9MB/s 00:01
[root@ES01 data1]# scp /usr/local/bin/cfssl root@192.168.156.36:/usr/local/bin/cfssl
root@192.168.156.36's password:
cfssl 100% 10MB 9.9MB/s 00:00
[root@ES01 data1]# scp /usr/local/bin/cfssl root@192.168.156.37:/usr/local/bin/cfssl
root@192.168.156.37's password:
cfssl 100% 10MB 9.9MB/s 00:00
[root@ES01 data1]# scp /usr/local/bin/cfssljson root@192.168.156.34:/usr/local/bin/cfssljson
root@192.168.156.34's password:
cfssljson 100% 2224KB 2.2MB/s 00:01
[root@ES01 data1]# scp /usr/local/bin/cfssljson root@192.168.156.35:/usr/local/bin/cfssljson
root@192.168.156.35's password:
cfssljson 100% 2224KB 2.2MB/s 00:00
[root@ES01 data1]# scp /usr/local/bin/cfssljson root@192.168.156.36:/usr/local/bin/cfssljson
root@192.168.156.36's password:
cfssljson 100% 2224KB 2.2MB/s 00:00
[root@ES01 data1]# scp /usr/local/bin/cfssljson root@192.168.156.37:/usr/local/bin/cfssljson
root@192.168.156.37's password:
cfssljson 100% 2224KB 2.2MB/s 00:00
[root@ES01 data1]# scp /usr/bin/cfssl-certinfo root@192.168.156.34:/usr/bin/cfssl-certinfo
root@192.168.156.34's password:
cfssl-certinfo 100% 6441KB 6.3MB/s 00:00
[root@ES01 data1]# scp /usr/bin/cfssl-certinfo root@192.168.156.35:/usr/bin/cfssl-certinfo
root@192.168.156.35's password:
cfssl-certinfo 100% 6441KB 6.3MB/s 00:00
[root@ES01 data1]# scp /usr/bin/cfssl-certinfo root@192.168.156.36:/usr/bin/cfssl-certinfo
root@192.168.156.36's password:
cfssl-certinfo 100% 6441KB 6.3MB/s 00:00
[root@ES01 data1]# scp /usr/bin/cfssl-certinfo root@192.168.156.37:/usr/bin/cfssl-certinfo
root@192.168.156.37's password:
cfssl-certinfo 100% 6441KB 6.3MB/s 00:00
Create the directories on each of the other nodes:
mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p
mkdir /data1/etcd
Copy the files from ES01 to each of the other nodes:
[root@ES01 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json server-key.pem server.pem root@192.168.156.34:/k8s/etcd/ssl/
root@192.168.156.34's password:
ca.csr 100% 956 0.9KB/s 00:00
ca.pem 100% 1265 1.2KB/s 00:00
ca-key.pem 100% 1679 1.6KB/s 00:00
ca-config.json 100% 288 0.3KB/s 00:00
[root@ES01 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json server-key.pem server.pem root@192.168.156.35:/k8s/etcd/ssl/
root@192.168.156.35's password:
ca.csr 100% 956 0.9KB/s 00:00
ca.pem 100% 1265 1.2KB/s 00:00
ca-key.pem 100% 1679 1.6KB/s 00:00
ca-config.json 100% 288 0.3KB/s 00:00
[root@ES01 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json server-key.pem server.pem root@192.168.156.36:/k8s/etcd/ssl/
root@192.168.156.36's password:
ca.csr 100% 956 0.9KB/s 00:00
ca.pem 100% 1265 1.2KB/s 00:00
ca-key.pem 100% 1679 1.6KB/s 00:00
ca-config.json 100% 288 0.3KB/s 00:00
[root@ES01 ssl]# scp ca.csr ca.pem ca-key.pem ca-config.json server-key.pem server.pem root@192.168.156.37:/k8s/etcd/ssl/
root@192.168.156.37's password:
ca.csr 100% 956 0.9KB/s 00:00
ca.pem 100% 1265 1.2KB/s 00:00
ca-key.pem 100% 1679 1.6KB/s 00:00
ca-config.json 100% 288 0.3KB/s 00:00
[root@ES01 ssl]# scp /k8s/etcd/cfg/etcd.conf root@192.168.156.34:/k8s/etcd/cfg/
root@192.168.156.34's password:
etcd.conf 100% 916 0.9KB/s 00:00
[root@ES01 ssl]# scp /k8s/etcd/cfg/etcd.conf root@192.168.156.35:/k8s/etcd/cfg/
root@192.168.156.35's password:
Permission denied, please try again.
root@192.168.156.35's password:
etcd.conf 100% 916 0.9KB/s 00:00
[root@ES01 ssl]# scp /k8s/etcd/cfg/etcd.conf root@192.168.156.36:/k8s/etcd/cfg/
root@192.168.156.36's password:
etcd.conf 100% 916 0.9KB/s 00:00
[root@ES01 ssl]# scp /k8s/etcd/cfg/etcd.conf root@192.168.156.37:/k8s/etcd/cfg/
root@192.168.156.37's password:
etcd.conf 100% 916 0.9KB/s 00:00
[root@ES01 ssl]# scp /usr/lib/systemd/system/etcd.service root@192.168.156.34:/usr/lib/systemd/system/etcd.service root@192.168.156.34's password:
etcd.service 100% 1118 1.1KB/s 00:00
[root@ES01 ssl]# scp /usr/lib/systemd/system/etcd.service root@192.168.156.35:/usr/lib/systemd/system/etcd.service
root@192.168.156.35's password:
etcd.service 100% 1118 1.1KB/s 00:00
[root@ES01 ssl]# scp /usr/lib/systemd/system/etcd.service root@192.168.156.36:/usr/lib/systemd/system/etcd.service
root@192.168.156.36's password:
etcd.service 100% 1118 1.1KB/s 00:00
[root@ES01 ssl]# scp /usr/lib/systemd/system/etcd.service root@192.168.156.37:/usr/lib/systemd/system/etcd.service
root@192.168.156.37's password:
etcd.service 100% 1118 1.1KB/s 00:00
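4) Start etcd. Using the unit file distributed above, start the service on every node (ES01 to ES05); start the members close together so they can find each other and form the cluster:
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd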
5) Check the service (ES01)
[root@ES01 ~]# /k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.156.33:2379,https://192.168.156.34:2379,https://192.168.156.35:2379,https://192.168.156.36:2379,https://192.168.156.37:2379" cluster-health
member 103495b6ad6fd8b8 is healthy: got healthy result from https://192.168.156.36:2379
member 2c74680105404ab4 is healthy: got healthy result from https://192.168.156.34:2379
member 2dbbdd1ab267d414 is healthy: got healthy result from https://192.168.156.33:2379
member 34eba487e326f75a is healthy: got healthy result from https://192.168.156.37:2379
member bc98969172ef100d is healthy: got healthy result from https://192.168.156.35:2379
cluster is healthy
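As a further check, the member list can be queried with the same certificate flags (a hedged example, run from any node):
/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.156.33:2379" member list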
3.5 Generate the Kubernetes certificates and private keys (on ES01)
1) Create the Kubernetes CA certificate (ES01)
cd /k8s/kubernetes/ssl
vi ca-config.json
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"expiry": "87600h",
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
]
}
}
}
}
vi ca-csr.json
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
[root@ES01 ssl]# pwd
/k8s/kubernetes/ssl
[root@ES01 ssl]# ls
ca-config.json ca-csr.json
[root@ES01 ssl]#
[root@ES01 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2019/03/01 19:08:14 [INFO] generating a new CA key and certificate from CSR
2019/03/01 19:08:14 [INFO] generate received request
2019/03/01 19:08:14 [INFO] received CSR
2019/03/01 19:08:14 [INFO] generating key: rsa-2048
2019/03/01 19:08:14 [INFO] encoded CSR
2019/03/01 19:08:14 [INFO] signed certificate with serial number 227787774750671777501572207560646025870892241745
[root@ES01 ssl]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
2) Create the apiserver certificate (ES01)
vi server-csr.json
{
"CN": "kubernetes",
"hosts": [
"10.254.0.1",
"127.0.0.1",
"192.168.156.33",
"192.168.156.34",
"192.168.156.35",
"192.168.156.36",
"192.168.156.37",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
[root@ES01 ssl]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server-csr.json
[root@ES01 ssl]# pwd
/k8s/kubernetes/ssl
[root@ES01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2019/03/01 19:45:13 [INFO] generate received request
2019/03/01 19:45:13 [INFO] received CSR
2019/03/01 19:45:13 [INFO] generating key: rsa-2048
2019/03/01 19:45:13 [INFO] encoded CSR
2019/03/01 19:45:13 [INFO] signed certificate with serial number 474771533881387955306997105980830061567326778994
2019/03/01 19:45:13 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@ES01 ssl]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem server.csr server-csr.json server-key.pem server.pem
[root@ES01 ssl]#
3) Create the kube-proxy certificate
vi kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "Beijing",
"ST": "Beijing",
"O": "k8s",
"OU": "System"
}
]
}
[root@ES01 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2019/03/01 19:48:44 [INFO] generate received request
2019/03/01 19:48:44 [INFO] received CSR
2019/03/01 19:48:44 [INFO] generating key: rsa-2048
2019/03/01 19:48:44 [INFO] encoded CSR
2019/03/01 19:48:44 [INFO] signed certificate with serial number 539421754845438380725733134716070932061817066414
2019/03/01 19:48:44 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@ES01 ssl]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem kube-proxy.csr kube-proxy-csr.json kube-proxy-key.pem kube-proxy.pem server.csr server-csr.json server-key.pem server.pem
3.6 Deploy the Kubernetes server components (ES01)
The Kubernetes master node runs the following components: kube-apiserver, kube-scheduler and kube-controller-manager. kube-scheduler and kube-controller-manager can run in cluster mode: leader election picks one working process while the others block, which is what a three-master high-availability setup relies on.
1) Unpack the files
[root@ES01 kubernetes]# tar -zxvf kubernetes-server-linux-amd64.tar.gz
kubernetes/
kubernetes/addons/
kubernetes/LICENSES
kubernetes/server/
kubernetes/server/bin/
kubernetes/server/bin/kube-controller-manager
kubernetes/server/bin/cloud-controller-manager.tar
kubernetes/server/bin/kube-controller-manager.tar
kubernetes/server/bin/kube-proxy
kubernetes/server/bin/cloud-controller-manager
kubernetes/server/bin/kube-apiserver.tar
kubernetes/server/bin/kube-scheduler
kubernetes/server/bin/kubelet
kubernetes/server/bin/kube-scheduler.tar
kubernetes/server/bin/kubeadm
kubernetes/server/bin/kube-apiserver.docker_tag
kubernetes/server/bin/apiextensions-apiserver
kubernetes/server/bin/kube-controller-manager.docker_tag
kubernetes/server/bin/kube-proxy.docker_tag
kubernetes/server/bin/cloud-controller-manager.docker_tag
kubernetes/server/bin/kube-scheduler.docker_tag
kubernetes/server/bin/kube-proxy.tar
kubernetes/server/bin/kubectl
kubernetes/server/bin/hyperkube
kubernetes/server/bin/mounter
kubernetes/server/bin/kube-apiserver
kubernetes/kubernetes-src.tar.gz
[root@ES01 kubernetes]# cd kubernetes/server/bin/
[root@ES01 bin]# cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/
2) Deploy the kube-apiserver component. Create the TLS bootstrapping token:
[root@ES01 bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
042f1f91d608647f90071b57139be0a1
[root@ES01 bin]# vi /k8s/kubernetes/cfg/token.csv
042f1f91d608647f90071b57139be0a1,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Create the apiserver configuration file:
vim /k8s/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.156.33:2379,https://192.168.156.34:2379,https://192.168.156.35:2379,https://192.168.156.36:2379,https://192.168.156.37:2379 \
--bind-address=192.168.156.33 \
--secure-port=6443 \
--advertise-address=192.168.156.33 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
Create the apiserver systemd unit file:
vim /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start the service:
[root@ES01 bin]# systemctl daemon-reload
[root@ES01 bin]# systemctl enable kube-apiserver
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@ES01 bin]# systemctl start kube-apiserver
[root@ES01 bin]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-03-01 20:06:05 CST; 6s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 26471 (kube-apiserver)
CGroup: /system.slice/kube-apiserver.service
26471 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.156.33:2379,https://192.168.156.34:2379,https://192.168.156.35:2379,https://192.168.156.36:2379,https://192.168.156.37:2379...
Mar 01 20:06:09 ES01 kube-apiserver[26471]: [restful] 2019/03/01 20:06:09 log.go:33: [restful/swagger] listing is available at https://192.168.156.33:6443/swaggerapi
Mar 01 20:06:09 ES01 kube-apiserver[26471]: [restful] 2019/03/01 20:06:09 log.go:33: [restful/swagger] https://192.168.156.33:6443/swaggerui/ is mapped to folder /swagger-ui/
Mar 01 20:06:09 ES01 kube-apiserver[26471]: I0301 20:06:09.221569 26471 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,N...AdmissionWebhook.
Mar 01 20:06:09 ES01 kube-apiserver[26471]: I0301 20:06:09.221593 26471 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,Persisten...ok,ResourceQuota.
Mar 01 20:06:09 ES01 kube-apiserver[26471]: I0301 20:06:09.232571 26471 compact.go:54] compactor already exists for endpoints [https://192.168.156.33:2379 https://192.168.156.34:2379 https://192.168.156.35:2379 htt....168.156.37:2379]
Mar 01 20:06:09 ES01 kube-apiserver[26471]: I0301 20:06:09.232966 26471 store.go:1414] Monitoring apiservices.apiregistration.k8s.io count at <storage-prefix>//apiregistration.k8s.io/apiservices
Mar 01 20:06:09 ES01 kube-apiserver[26471]: I0301 20:06:09.233030 26471 reflector.go:169] Listing and watching apiregistration.APIService from storage/cacher.go:/apiregistration.k8s.io/apiservices
Mar 01 20:06:09 ES01 kube-apiserver[26471]: I0301 20:06:09.241985 26471 compact.go:54] compactor already exists for endpoints [https://192.168.156.33:2379 https://192.168.156.34:2379 https://192.168.156.35:2379 htt....168.156.37:2379]
Mar 01 20:06:09 ES01 kube-apiserver[26471]: I0301 20:06:09.242139 26471 store.go:1414] Monitoring apiservices.apiregistration.k8s.io count at <storage-prefix>//apiregistration.k8s.io/apiservices
Mar 01 20:06:09 ES01 kube-apiserver[26471]: I0301 20:06:09.242211 26471 reflector.go:169] Listing and watching apiregistration.APIService from storage/cacher.go:/apiregistration.k8s.io/apiservices
Hint: Some lines were ellipsized, use -l to show in full.
[root@ES01 bin]# ps -ef |grep kube-apiserver
root 26471 1 39 20:06 ? 00:00:09 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.156.33:2379,https://192.168.156.34:2379,https://192.168.156.35:2379,https://192.168.156.36:2379,https://192.168.156.37:2379 --bind-address=192.168.156.33 --secure-port=6443 --advertise-address=192.168.156.33 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pem
root 26496 26229 0 20:06 pts/1 00:00:00 grep --color=auto kube-apiserver
[root@ES01 bin]# netstat -tulpn |grep kube-apiserve
tcp 0 0 192.168.156.33:6443 0.0.0.0:* LISTEN 26471/kube-apiserve
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 26471/kube-apiserve
3) Deploy the kube-scheduler component. Create the kube-scheduler configuration file:
vim /k8s/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
Parameter notes: --address: accept http /metrics requests on 127.0.0.1:10251 (kube-scheduler does not yet support serving https); --kubeconfig: path of the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver; --leader-elect=true: cluster mode with leader election enabled, where the elected leader does the work and the other instances block.
Create the kube-scheduler systemd unit file (ES01):
vim /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start the service:
[root@ES01 bin]# systemctl daemon-reload
[root@ES01 bin]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@ES01 bin]# systemctl start kube-scheduler.service
[root@ES01 bin]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-03-01 20:19:27 CST; 8s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 26579 (kube-scheduler)
CGroup: /system.slice/kube-scheduler.service
└─26579 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
Mar 01 20:19:29 ES01 kube-scheduler[26579]: I0301 20:19:29.638620 26579 shared_informer.go:123] caches populated
Mar 01 20:19:29 ES01 kube-scheduler[26579]: I0301 20:19:29.738762 26579 shared_informer.go:123] caches populated
Mar 01 20:19:29 ES01 kube-scheduler[26579]: I0301 20:19:29.838896 26579 shared_informer.go:123] caches populated
Mar 01 20:19:29 ES01 kube-scheduler[26579]: I0301 20:19:29.939028 26579 shared_informer.go:123] caches populated
Mar 01 20:19:29 ES01 kube-scheduler[26579]: I0301 20:19:29.939060 26579 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
Mar 01 20:19:30 ES01 kube-scheduler[26579]: I0301 20:19:30.039177 26579 shared_informer.go:123] caches populated
Mar 01 20:19:30 ES01 kube-scheduler[26579]: I0301 20:19:30.039199 26579 controller_utils.go:1034] Caches are synced for scheduler controller
Mar 01 20:19:30 ES01 kube-scheduler[26579]: I0301 20:19:30.039231 26579 leaderelection.go:205] attempting to acquire leader lease kube-system/kube-scheduler...
Mar 01 20:19:30 ES01 kube-scheduler[26579]: I0301 20:19:30.045260 26579 leaderelection.go:214] successfully acquired lease kube-system/kube-scheduler
Mar 01 20:19:30 ES01 kube-scheduler[26579]: I0301 20:19:30.145475 26579 shared_informer.go:123] caches populated
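Following the --address note above, a quick liveness probe against the scheduler's insecure port (a sketch; assumes the default port 10251, which applies with the options used here) should return ok:
curl http://127.0.0.1:10251/healthz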
4) Deploy the kube-controller-manager component. Create the kube-controller-manager configuration file:
vim /k8s/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
Create the kube-controller-manager systemd unit file:
vim /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start the service:
[root@ES01 ~]# cd /k8s/kubernetes/bin/
[root@ES01 bin~]# systemctl daemon-reload
[root@ES01 bin~]# systemctl enable kube-controller-manager
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@ES01 bin~]# systemctl start kube-controller-manager
[root@ES01 bin~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2019-03-01 20:29:50 CST; 13s ago
Docs: https://github.com/kubernetes/kubernetes
Main PID: 26673 (kube-controller)
CGroup: /system.slice/kube-controller-manager.service
└─26673 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.254.0.0/16 --cluster-name=kubernetes --cluster-sign...
Mar 01 20:29:53 ES01 kube-controller-manager[26673]: I0301 20:29:53.330870 26673 resource_quota_controller.go:427] syncing resource quota controller with updated resources from discovery: map[/v1, Resource=limitrang..., Resource=poddi
Mar 01 20:29:53 ES01 kube-controller-manager[26673]: I0301 20:29:53.330993 26673 resource_quota_monitor.go:180] QuotaMonitor unable to use a shared informer for resource "extensions/v1beta1, Resource=networkpolicies...=networkpolicies
Mar 01 20:29:53 ES01 kube-controller-manager[26673]: I0301 20:29:53.331021 26673 resource_quota_monitor.go:243] quota synced monitors; added 0, kept 29, removed 0
Mar 01 20:29:53 ES01 kube-controller-manager[26673]: E0301 20:29:53.331033 26673 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=...networkpolicies"
Mar 01 20:29:53 ES01 kube-controller-manager[26673]: I0301 20:29:53.405908 26673 shared_informer.go:123] caches populated
Mar 01 20:29:53 ES01 kube-controller-manager[26673]: I0301 20:29:53.405929 26673 controller_utils.go:1034] Caches are synced for garbage collector controller
Mar 01 20:29:53 ES01 kube-controller-manager[26673]: I0301 20:29:53.405936 26673 garbagecollector.go:245] synced garbage collector
Mar 01 20:30:01 ES01 kube-controller-manager[26673]: I0301 20:30:01.844490 26673 cronjob_controller.go:111] Found 0 jobs
Mar 01 20:30:01 ES01 kube-controller-manager[26673]: I0301 20:30:01.845987 26673 cronjob_controller.go:119] Found 0 cronjobs
Mar 01 20:30:01 ES01 kube-controller-manager[26673]: I0301 20:30:01.845997 26673 cronjob_controller.go:122] Found 0 groups
Hint: Some lines were ellipsized, use -l to show in full.
3.7 Verify the kube-apiserver service
Set the environment variable (ES01, ES02, ES03, ES04, ES05):
vim /etc/profile
PATH=/k8s/kubernetes/bin:$PATH
source /etc/profile
Check the master service status:
[root@ES01 bin]# source /etc/profile
[root@ES01 bin]# kubectl get cs,nodes
NAME STATUS MESSAGE ERROR
componentstatus/controller-manager Healthy ok
componentstatus/scheduler Healthy ok
componentstatus/etcd-0 Healthy {"health":"true"}
componentstatus/etcd-3 Healthy {"health":"true"}
componentstatus/etcd-4 Healthy {"health":"true"}
componentstatus/etcd-1 Healthy {"health":"true"}
componentstatus/etcd-2 Healthy {"health":"true"}
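Optionally, kubectl cluster-info prints a one-line summary of the apiserver endpoint:
kubectl cluster-info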
4. Node Deployment (ES02, ES03, ES04, ES05)
The Kubernetes worker nodes run the following components: docker, kubelet, kube-proxy, flannel.
4.1 Install Docker
[root@ES02 ~]# cd /etc/yum.repos.d/
[root@ES02 yum.repos.d]# wget \
https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
--2019-03-01 20:39:14-- https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Resolving mirrors.aliyun.com (mirrors.aliyun.com)... 124.238.245.105, 124.238.245.99, 36.104.137.251, ...
Connecting to mirrors.aliyun.com (mirrors.aliyun.com)|124.238.245.105|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2640 (2.6K) [application/octet-stream]
100%[==================================================================================================================================================================================================>] 2,640 --.-K/s in 0s
2019-03-01 20:39:14 (56.5 MB/s) - 'docker-ce.repo' saved [2640/2640]
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker
4.2 Deploy the kubelet component
kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run and logs. On startup, kubelet automatically registers node information with kube-apiserver, and its built-in cAdvisor collects and monitors the node's resource usage. For security, only the https port is opened; requests are authenticated and authorized, and unauthorized access (for example from apiserver or heapster) is rejected.
1) Install the binaries (on ES02, ES03, ES04, ES05)
wget https://dl.k8s.io/v1.13.4/kubernetes-node-linux-amd64.tar.gz
tar zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin/
cp kube-proxy kubelet kubectl /k8s/kubernetes/bin/
2) Copy the relevant certificates to the node machines (run on ES01):
[root@ES01 ssl]# scp *.pem 192.168.156.34:/k8s/kubernetes/ssl/
root@192.168.156.34's password:
ca-key.pem 100% 1675 1.6KB/s 00:00
ca.pem 100% 1359 1.3KB/s 00:00
kube-proxy-key.pem 100% 1679 1.6KB/s 00:00
kube-proxy.pem 100% 1403 1.4KB/s 00:00
server-key.pem 100% 1675 1.6KB/s 00:00
server.pem 100% 1643 1.6KB/s 00:00
[root@ES01 ssl]# scp *.pem 192.168.156.35:/k8s/kubernetes/ssl/
root@192.168.156.35's password:
ca-key.pem 100% 1675 1.6KB/s 00:00
ca.pem 100% 1359 1.3KB/s 00:00
kube-proxy-key.pem 100% 1679 1.6KB/s 00:00
kube-proxy.pem 100% 1403 1.4KB/s 00:00
server-key.pem 100% 1675 1.6KB/s 00:00
server.pem 100% 1643 1.6KB/s 00:00
[root@ES01 ssl]# scp *.pem 192.168.156.36:/k8s/kubernetes/ssl/
root@192.168.156.36's password:
ca-key.pem 100% 1675 1.6KB/s 00:00
ca.pem 100% 1359 1.3KB/s 00:00
kube-proxy-key.pem 100% 1679 1.6KB/s 00:00
kube-proxy.pem 100% 1403 1.4KB/s 00:00
server-key.pem 100% 1675 1.6KB/s 00:00
server.pem 100% 1643 1.6KB/s 00:00
[root@ES01 ssl]# scp *.pem 192.168.156.37:/k8s/kubernetes/ssl/
root@192.168.156.37's password:
ca-key.pem 100% 1675 1.6KB/s 00:00
ca.pem 100% 1359 1.3KB/s 00:00
kube-proxy-key.pem 100% 1679 1.6KB/s 00:00
kube-proxy.pem 100% 1403 1.4KB/s 00:00
server-key.pem 100% 1675 1.6KB/s 00:00
server.pem 100% 1643 1.6KB/s 00:00
3) Create the kubelet bootstrap kubeconfig files with a script (run the script on one of the nodes, then copy the generated bootstrap.kubeconfig and kube-proxy.kubeconfig to each of the other nodes).
vim /k8s/kubernetes/cfg/environment.sh
#!/bin/bash
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=042f1f91d608647f90071b57139be0a1
KUBE_APISERVER="https://192.168.156.33:6443"
# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=/k8s/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig file
kubectl config set-cluster kubernetes \
--certificate-authority=/k8s/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem \
--client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Run the script:
[root@ES02 cfg]# pwd
/k8s/kubernetes/cfg
[root@ES02 cfg]# sh environment.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@ES02 cfg]# ls
bootstrap.kubeconfig environment.sh kube-proxy.kubeconfig
[root@ES02 cfg]# scp bootstrap.kubeconfig root@192.168.156.35:/k8s/kubernetes/cfg/
root@192.168.156.35's password:
bootstrap.kubeconfig 100% 2168 2.1KB/s 00:00
[root@ES02 cfg]# scp bootstrap.kubeconfig root@192.168.156.36:/k8s/kubernetes/cfg/
root@192.168.156.36's password:
bootstrap.kubeconfig 100% 2168 2.1KB/s 00:00
[root@ES02 cfg]# scp bootstrap.kubeconfig root@192.168.156.37:/k8s/kubernetes/cfg/
root@192.168.156.37's password:
bootstrap.kubeconfig 100% 2168 2.1KB/s 00:00
[root@ES02 cfg]# scp kube-proxy.kubeconfig root@192.168.156.35:/k8s/kubernetes/cfg/
root@192.168.156.35's password:
kube-proxy.kubeconfig 100% 6274 6.1KB/s 00:00
[root@ES02 cfg]# scp kube-proxy.kubeconfig root@192.168.156.36:/k8s/kubernetes/cfg/
root@192.168.156.36's password:
kube-proxy.kubeconfig 100% 6274 6.1KB/s 00:00
[root@ES02 cfg]# scp kube-proxy.kubeconfig root@192.168.156.37:/k8s/kubernetes/cfg/
root@192.168.156.37's password:
kube-proxy.kubeconfig 100% 6274 6.1KB/s 00:00
4) Create the kubelet parameter configuration file (needed on every node except the master; adjust the address on each node)
vim /k8s/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.156.34    # each node uses its own IP
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
anonymous:
enabled: true
5) Create the kubelet options file (use each node's own IP for --hostname-override)
vim /k8s/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.156.34 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
6) Create the kubelet systemd unit file
vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
7) Bind the kubelet-bootstrap user to the system cluster role (only needs to be done on the master)
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
Note that this connects to localhost:8080 by default, so it can be run on the master (ES01):
[root@ES01 ssl]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
8) Start the service: systemctl daemon-reload; systemctl enable kubelet; systemctl start kubelet (ES02, ES03, ES04, ES05)
[root@ES02 cfg]# systemctl daemon-reload
[root@ES02 cfg]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@ES02 cfg]# systemctl start kubelet
[root@ES02 cfg]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-03-02 08:43:09 CST; 27s ago
Main PID: 5153 (kubelet)
Memory: 19.6M
CGroup: /system.slice/kubelet.service
└─5153 /k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.156.34 --kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstra...
Mar 02 08:43:09 ES02 kubelet[5153]: I0302 08:43:09.660783 5153 feature_gate.go:206] feature gates: &{map[]}
Mar 02 08:43:10 ES02 kubelet[5153]: I0302 08:43:10.236183 5153 server.go:825] Using self-signed cert (/k8s/kubernetes/ssl/kubelet.crt, /k8s/kubernetes/ssl/kubelet.key)
Mar 02 08:43:10 ES02 kubelet[5153]: I0302 08:43:10.257042 5153 mount_linux.go:180] Detected OS with systemd
Mar 02 08:43:10 ES02 kubelet[5153]: I0302 08:43:10.257091 5153 server.go:407] Version: v1.13.4
Mar 02 08:43:10 ES02 kubelet[5153]: I0302 08:43:10.257135 5153 feature_gate.go:206] feature gates: &{map[]}
Mar 02 08:43:10 ES02 kubelet[5153]: I0302 08:43:10.257184 5153 feature_gate.go:206] feature gates: &{map[]}
Mar 02 08:43:10 ES02 kubelet[5153]: I0302 08:43:10.257271 5153 plugins.go:103] No cloud provider specified.
Mar 02 08:43:10 ES02 kubelet[5153]: I0302 08:43:10.257282 5153 server.go:523] No cloud provider specified: "" from the config file: ""
Mar 02 08:43:10 ES02 kubelet[5153]: I0302 08:43:10.257308 5153 bootstrap.go:65] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
Mar 02 08:43:10 ES02 kubelet[5153]: I0302 08:43:10.258817 5153 bootstrap.go:96] No valid private key and/or certificate found, reusing existing private key or creating a new one
9) The master accepts the kubelet CSR requests. CSRs can be approved manually or automatically; the automatic way is recommended because, since v1.8, the certificates generated after a CSR is approved can be rotated automatically. Below is the manual approval procedure (an automatic-approval sketch follows it). List the CSRs (on ES01):
[root@ES01 ssl]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-5ZPVS1KILeu4rAciA9dkNWj45xB-YKfuttXlk3JETAs 65s kubelet-bootstrap Pending
node-csr-FAGM2YSxDJFWZvH1Q6TLDV1a83hq6gbkzynST-Uj0LI 61s kubelet-bootstrap Pending
node-csr-WybuNF2V1tk8HPbZWTptVNml6_TGL2NQoSZmrj-WjFY 63s kubelet-bootstrap Pending
node-csr-wcay-zaG8zEqkujTfrFfU2oi4WAZeUjP1EP4H68PWWY 68s kubelet-bootstrap Pending
Approve the nodes (on the master, ES01):
[root@ES01 ssl]# kubectl certificate approve node-csr-5ZPVS1KILeu4rAciA9dkNWj45xB-YKfuttXlk3JETAs
certificatesigningrequest.certificates.k8s.io/node-csr-5ZPVS1KILeu4rAciA9dkNWj45xB-YKfuttXlk3JETAs approved
[root@ES01 ssl]# kubectl certificate approve node-csr-FAGM2YSxDJFWZvH1Q6TLDV1a83hq6gbkzynST-Uj0LI
certificatesigningrequest.certificates.k8s.io/node-csr-FAGM2YSxDJFWZvH1Q6TLDV1a83hq6gbkzynST-Uj0LI approved
[root@ES01 ssl]# kubectl certificate approve node-csr-WybuNF2V1tk8HPbZWTptVNml6_TGL2NQoSZmrj-WjFY
certificatesigningrequest.certificates.k8s.io/node-csr-WybuNF2V1tk8HPbZWTptVNml6_TGL2NQoSZmrj-WjFY approved
[root@ES01 ssl]# kubectl certificate approve node-csr-wcay-zaG8zEqkujTfrFfU2oi4WAZeUjP1EP4H68PWWY
certificatesigningrequest.certificates.k8s.io/node-csr-wcay-zaG8zEqkujTfrFfU2oi4WAZeUjP1EP4H68PWWY approved
Check the CSRs again:
[root@ES01 ssl]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-5ZPVS1KILeu4rAciA9dkNWj45xB-YKfuttXlk3JETAs 9m52s kubelet-bootstrap Approved,Issued
node-csr-FAGM2YSxDJFWZvH1Q6TLDV1a83hq6gbkzynST-Uj0LI 9m48s kubelet-bootstrap Approved,Issued
node-csr-WybuNF2V1tk8HPbZWTptVNml6_TGL2NQoSZmrj-WjFY 9m50s kubelet-bootstrap Approved,Issued
node-csr-wcay-zaG8zEqkujTfrFfU2oi4WAZeUjP1EP4H68PWWY 9m55s kubelet-bootstrap Approved,Issued
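For the automatic approach mentioned above, a common pattern (a sketch, not part of the original walkthrough; the cluster roles below ship with Kubernetes 1.13) is to grant the CSR-approval permissions via RBAC so the controller-manager approves them on its own:
# auto-approve CSRs created with the bootstrap token user
kubectl create clusterrolebinding auto-approve-csrs-for-bootstrap --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient --user=kubelet-bootstrap
# auto-approve certificate renewal CSRs from registered nodes
kubectl create clusterrolebinding auto-approve-renewals-for-nodes --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes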
4.3 Deploy the kube-proxy component
kube-proxy runs on all node machines. It watches the apiserver for changes to Services and Endpoints and creates routing rules (iptables) to load-balance service traffic.
1) Create the kube-proxy configuration file (ES02, ES03, ES04, ES05; adjust the IP address on each node)
vim /k8s/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.156.34 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
2) Create the kube-proxy systemd unit file
vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
3) Start the service: systemctl daemon-reload; systemctl enable kube-proxy; systemctl start kube-proxy (non-master nodes: ES02, ES03, ES04, ES05)
[root@ES02 cfg]# systemctl daemon-reload
[root@ES02 cfg]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@ES02 cfg]# systemctl start kube-proxy
[root@ES02 cfg]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-03-02 09:02:59 CST; 34s ago
Main PID: 5416 (kube-proxy)
Memory: 11.1M
CGroup: /system.slice/kube-proxy.service
└─5416 /k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=192.168.156.34 --cluster-cidr=10.254.0.0/16 --kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig
Mar 02 09:03:29 ES02 kube-proxy[5416]: I0302 09:03:29.655404 5416 proxier.go:664] Syncing iptables rules
Mar 02 09:03:29 ES02 kube-proxy[5416]: I0302 09:03:29.665651 5416 iptables.go:327] running iptables-save [-t filter]
Mar 02 09:03:29 ES02 kube-proxy[5416]: I0302 09:03:29.666639 5416 iptables.go:327] running iptables-save [-t nat]
Mar 02 09:03:29 ES02 kube-proxy[5416]: I0302 09:03:29.667968 5416 iptables.go:391] running iptables-restore [--noflush --counters]
Mar 02 09:03:29 ES02 kube-proxy[5416]: I0302 09:03:29.670089 5416 proxier.go:641] syncProxyRules took 14.70572ms
Mar 02 09:03:29 ES02 kube-proxy[5416]: I0302 09:03:29.670117 5416 bounded_frequency_runner.go:221] sync-runner: ran, next possible in 0s, periodic in 30s
Mar 02 09:03:31 ES02 kube-proxy[5416]: I0302 09:03:31.068147 5416 config.go:141] Calling handler.OnEndpointsUpdate
Mar 02 09:03:31 ES02 kube-proxy[5416]: I0302 09:03:31.068175 5416 config.go:141] Calling handler.OnEndpointsUpdate
Mar 02 09:03:33 ES02 kube-proxy[5416]: I0302 09:03:33.074082 5416 config.go:141] Calling handler.OnEndpointsUpdate
Mar 02 09:03:33 ES02 kube-proxy[5416]: I0302 09:03:33.074123 5416 config.go:141] Calling handler.OnEndpointsUpdate
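Since kube-proxy programs iptables rules for Services (as described at the start of 4.3), the chains it creates can be inspected directly on a node; a hedged example:
iptables-save | grep KUBE-SERVICES | head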
4) Check the cluster status
[root@ES01 ssl]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.156.34 Ready <none> 5m16s v1.13.4
192.168.156.35 Ready <none> 6m15s v1.13.4
192.168.156.36 Ready <none> 5m35s v1.13.4
192.168.156.37 Ready <none> 5m55s v1.13.4
Note: if kubelet or kube-proxy is misconfigured along the way (for example a wrong listen IP or hostname leading to "node not found"), delete the kubelet-client certificates, restart the kubelet service, and approve the new CSR again.
[root@ES03 ssl]# ls
ca-key.pem ca.pem kubelet-client-2019-03-02-08-51-13.pem kubelet-client-current.pem kubelet.crt kubelet.key kube-proxy-key.pem kube-proxy.pem server-key.pem server.pem
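A sketch of that recovery sequence (file names as listed above; the node's new CSR then has to be approved on the master again):
# on the affected node
systemctl stop kubelet
rm -f /k8s/kubernetes/ssl/kubelet-client-*
systemctl start kubelet
# on the master (ES01)
kubectl get csr
kubectl certificate approve <csr-name>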
5. Flanneld Network Deployment
By default there is no flanneld network, so Pods on different nodes cannot communicate with each other; only Pods on the same node can. To keep the deployment steps clear, flanneld is installed last. The flannel service must start before docker. When the flannel service starts, it mainly does the following: fetches the network configuration from etcd; allocates a subnet and registers it in etcd; records the subnet information in /run/flannel/subnet.env.
5.1 Register the network segment in etcd (run once on any one node)
[root@ES01 ~]# /k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.156.33:2379,https://192.168.156.34:2379,https://192.168.156.35:2379,https://192.168.156.36:2379,https://192.168.156.37:2379" set /k8s/network/config '{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}
flanneld (including the v0.11.0 used here) does not support etcd v3, so the configuration key and subnet data are written with the etcd v2 API. The Pod network ${CLUSTER_CIDR} written here must be a /16 range and must match the --cluster-cidr value of kube-controller-manager.
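The key written above can be read back the same way as a quick sanity check:
/k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="https://192.168.156.33:2379" get /k8s/network/config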
5.2 Install flannel
1) Unpack and install (then copy the files to each node)
[root@ES01 kubernetes]# tar -xvf flannel-v0.11.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
[root@ES01 kubernetes]# mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/
[root@ES01 kubernetes]# ls
etcd-v3.3.12-linux-amd64 flannel-v0.11.0-linux-amd64.tar.gz kubernetes-client-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
etcd-v3.3.12-linux-amd64.tar.gz kubernetes kubernetes-node-linux-amd64.tar.gz README.md
[root@ES01 kubernetes]# cd /k8s/kubernetes/bin/
[root@ES01 bin]# scp flanneld mk-docker-opts.sh root@192.168.156.34:/k8s/kubernetes/bin/
root@192.168.156.34's password:
flanneld 100% 34MB 33.6MB/s 00:01
mk-docker-opts.sh 100% 2139 2.1KB/s 00:00
[root@ES01 bin]# scp flanneld mk-docker-opts.sh root@192.168.156.35:/k8s/kubernetes/bin/
root@192.168.156.35's password:
flanneld 100% 34MB 33.6MB/s 00:00
mk-docker-opts.sh 100% 2139 2.1KB/s 00:00
[root@ES01 bin]# scp flanneld mk-docker-opts.sh root@192.168.156.36:/k8s/kubernetes/bin/
root@192.168.156.36's password:
flanneld 100% 34MB 33.6MB/s 00:00
mk-docker-opts.sh 100% 2139 2.1KB/s 00:00
[root@ES01 bin]# scp flanneld mk-docker-opts.sh root@192.168.156.37:/k8s/kubernetes/bin/
root@192.168.156.37's password:
flanneld 100% 34MB 33.6MB/s 00:00
mk-docker-opts.sh 100% 2139 2.1KB/s 00:00
2) Configure flanneld (required on every node)
vim /k8s/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.156.33:2379,https://192.168.156.34:2379,https://192.168.156.35:2379,https://192.168.156.36:2379,https://192.168.156.37:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem -etcd-prefix=/k8s/network"
Create the flanneld systemd unit file (required on every node):
vim /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
Note: the mk-docker-opts.sh script writes the Pod subnet assigned to flanneld into an environment file (with the -d flag used above, /run/flannel/subnet.env); when docker starts later it uses the environment variables in that file to configure the docker0 bridge. flanneld communicates with other nodes over the interface that carries the system default route; on nodes with multiple network interfaces (e.g. internal and public), the -iface parameter can be used to pick the interface, as sketched below. flanneld needs root privileges to run.
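If a node has more than one interface, -iface is simply appended to FLANNEL_OPTIONS; a sketch (the interface name eth0 is an assumption, substitute the node's actual internal NIC):
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.156.33:2379,https://192.168.156.34:2379,https://192.168.156.35:2379,https://192.168.156.36:2379,https://192.168.156.37:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem -etcd-prefix=/k8s/network -iface=eth0"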
3) Configure Docker to start on the assigned subnet. Just set EnvironmentFile=/run/flannel/subnet.env and ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS (ES02, ES03, ES04, ES05):
vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
4) Start the services. Note: stop docker and the associated kubelet before starting flannel, so that flannel can take over the docker0 bridge (ES02, ES03, ES04, ES05; on ES01 only the flannel service needs to be started).
[root@ES02 ~]# systemctl daemon-reload
[root@ES02 ~]# systemctl stop docker
[root@ES02 ~]# systemctl start flanneld
[root@ES02 ~]# systemctl status flanneld
● flanneld.service - Flanneld overlay address etcd agent
Loaded: loaded (/usr/lib/systemd/system/flanneld.service; disabled; vendor preset: disabled)
Active: active (running) since Sat 2019-03-02 12:05:02 CST; 45s ago
Process: 13147 ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env (code=exited, status=0/SUCCESS)
Main PID: 13118 (flanneld)
Memory: 11.7M
CGroup: /system.slice/flanneld.service
└─13118 /k8s/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.156.33:2379,https://192.168.156.34:2379,https://192.168.156.35:2379,https://192.168.156.36:2379,https://192.168.156.37...
Mar 02 12:05:02 ES02 flanneld[13118]: I0302 12:05:02.022861 13118 iptables.go:155] Adding iptables rule: -s 10.254.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
Mar 02 12:05:02 ES02 flanneld[13118]: I0302 12:05:02.024855 13118 iptables.go:155] Adding iptables rule: ! -s 10.254.0.0/16 -d 10.254.67.0/24 -j RETURN
Mar 02 12:05:02 ES02 systemd[1]: Started Flanneld overlay address etcd agent.
Mar 02 12:05:02 ES02 flanneld[13118]: I0302 12:05:02.025976 13118 main.go:429] Waiting for 22h59m55.336590207s to renew lease
Mar 02 12:05:02 ES02 flanneld[13118]: I0302 12:05:02.026959 13118 iptables.go:155] Adding iptables rule: ! -s 10.254.0.0/16 -d 10.254.0.0/16 -j MASQUERADE
Mar 02 12:05:03 ES02 flanneld[13118]: I0302 12:05:03.017620 13118 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
Mar 02 12:05:03 ES02 flanneld[13118]: I0302 12:05:03.017643 13118 iptables.go:167] Deleting iptables rule: -s 10.254.0.0/16 -j ACCEPT
Mar 02 12:05:03 ES02 flanneld[13118]: I0302 12:05:03.018628 13118 iptables.go:167] Deleting iptables rule: -d 10.254.0.0/16 -j ACCEPT
Mar 02 12:05:03 ES02 flanneld[13118]: I0302 12:05:03.019442 13118 iptables.go:155] Adding iptables rule: -s 10.254.0.0/16 -j ACCEPT
Mar 02 12:05:03 ES02 flanneld[13118]: I0302 12:05:03.021465 13118 iptables.go:155] Adding iptables rule: -d 10.254.0.0/16 -j ACCEPT
[root@ES02 ~]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@ES02 ~]# systemctl start docker
[root@ES02 ~]# systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-03-02 12:28:10 CST; 6s ago
Docs: https://docs.docker.com
Main PID: 15670 (dockerd)
Memory: 33.1M
CGroup: /system.slice/docker.service
└─15670 /usr/bin/dockerd --bip=10.254.67.1/24 --ip-masq=false --mtu=1450
Mar 02 12:28:09 ES02 dockerd[15670]: time="2019-03-02T12:28:09.891177341+08:00" le...rpc
Mar 02 12:28:09 ES02 dockerd[15670]: time="2019-03-02T12:28:09.892894677+08:00" le...y2"
Mar 02 12:28:09 ES02 dockerd[15670]: time="2019-03-02T12:28:09.899306426+08:00" le...ds"
Mar 02 12:28:09 ES02 dockerd[15670]: time="2019-03-02T12:28:09.899914531+08:00" le...t."
Mar 02 12:28:09 ES02 dockerd[15670]: time="2019-03-02T12:28:09.993559255+08:00" le...e."
Mar 02 12:28:10 ES02 dockerd[15670]: time="2019-03-02T12:28:10.012209162+08:00" le...ay2
Mar 02 12:28:10 ES02 dockerd[15670]: time="2019-03-02T12:28:10.012437945+08:00" le...9.2
Mar 02 12:28:10 ES02 dockerd[15670]: time="2019-03-02T12:28:10.012497637+08:00" le...on"
Mar 02 12:28:10 ES02 dockerd[15670]: time="2019-03-02T12:28:10.018194143+08:00" le...ck"
Mar 02 12:28:10 ES02 systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.
[root@ES02 ~]# systemctl restart kubelet
[root@ES02 ~]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-03-02 12:28:30 CST; 8s ago
Main PID: 15828 (kubelet)
Memory: 32.2M
CGroup: /system.slice/kubelet.service
└─15828 /k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-ov...
Mar 02 12:28:30 ES02 kubelet[15828]: I0302 12:28:30.930796 15828 kubelet_node_st....34
Mar 02 12:28:30 ES02 kubelet[15828]: I0302 12:28:30.930797 15828 server.go:459] Eve...
Mar 02 12:28:30 ES02 kubelet[15828]: I0302 12:28:30.930824 15828 server.go:459] ...ure
Mar 02 12:28:30 ES02 kubelet[15828]: I0302 12:28:30.930838 15828 server.go:459] ...PID
Mar 02 12:28:30 ES02 kubelet[15828]: I0302 12:28:30.930869 15828 server.go:459] ...ady
Mar 02 12:28:32 ES02 kubelet[15828]: I0302 12:28:32.820242 15828 kubelet.go:1995...ng)
Mar 02 12:28:34 ES02 kubelet[15828]: I0302 12:28:34.820241 15828 kubelet.go:1995...ng)
Mar 02 12:28:35 ES02 kubelet[15828]: I0302 12:28:35.891754 15828 kubelet.go:2189...ge:
Mar 02 12:28:36 ES02 kubelet[15828]: I0302 12:28:36.820233 15828 kubelet.go:1995...ng)
Mar 02 12:28:38 ES02 kubelet[15828]: I0302 12:28:38.820269 15828 kubelet.go:1995...ng)
Hint: Some lines were ellipsized, use -l to show in full.
[root@ES02 ~]# systemctl restart kube-proxy
[root@ES02 ~]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2019-03-02 12:28:50 CST; 11s ago
Main PID: 15956 (kube-proxy)
Memory: 10.7M
CGroup: /system.slice/kube-proxy.service
└─15956 /k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname...
Mar 02 12:28:53 ES02 kube-proxy[15956]: I0302 12:28:53.414392 15956 config.go:141...te
Mar 02 12:28:53 ES02 kube-proxy[15956]: I0302 12:28:53.414740 15956 config.go:141...te
Mar 02 12:28:55 ES02 kube-proxy[15956]: I0302 12:28:55.419828 15956 config.go:141...te
Mar 02 12:28:55 ES02 kube-proxy[15956]: I0302 12:28:55.419856 15956 config.go:141...te
Mar 02 12:28:57 ES02 kube-proxy[15956]: I0302 12:28:57.424877 15956 config.go:141...te
Mar 02 12:28:57 ES02 kube-proxy[15956]: I0302 12:28:57.424910 15956 config.go:141...te
Mar 02 12:28:59 ES02 kube-proxy[15956]: I0302 12:28:59.430710 15956 config.go:141...te
Mar 02 12:28:59 ES02 kube-proxy[15956]: I0302 12:28:59.430740 15956 config.go:141...te
Mar 02 12:29:01 ES02 kube-proxy[15956]: I0302 12:29:01.435578 15956 config.go:141...te
Mar 02 12:29:01 ES02 kube-proxy[15956]: I0302 12:29:01.435771 15956 config.go:141...te
Hint: Some lines were ellipsized, use -l to show in full.
5) Verify the service (ES02, ES03, ES04, ES05)
[root@ES02 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=10.254.67.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.254.67.1/24 --ip-masq=false --mtu=1450"
[root@ES02 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
[root@ES01 bin]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.156.34 Ready <none> 3h54m v1.13.4
192.168.156.35 Ready <none> 3h55m v1.13.4
192.168.156.36 Ready <none> 3h55m v1.13.4
192.168.156.37 Ready <none> 3h55m v1.13.4
6. Create the First K8S Application
1) Create a test deployment (run on the master, ES01)
[root@ES01 bin]# kubectl run net-test --image=alpine --replicas=2 sleep 360000
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/net-test created
2) Check the assigned Pod IPs
[root@ES01 bin]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
net-test-cd766cb69-dm7lt 1/1 Running 0 2m8s 10.254.6.2 192.168.156.35 <none> <none>
net-test-cd766cb69-spbth 1/1 Running 0 2m8s 10.254.91.2 192.168.156.36 <none> <none>
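To confirm cross-node Pod connectivity over flannel, ping one Pod from the other (pod names and IPs are taken from the output above and will differ in your environment):
kubectl exec net-test-cd766cb69-dm7lt -- ping -c 3 10.254.91.2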
Reposted from: https://blog.51cto.com/capfzgs/2358685