Kubernetes v1.13 binary deployment
Environment

Versions
- k8s-server v1.13
- k8s-node v1.13
- flannel v0.10.0

Network plugin
- flannel

Docker version
- 18.09.x

Server addresses
- 172.29.122.34 cmcc1234
- 172.29.122.35 cmcc1234
- 172.29.122.36 cmcc1234

Roles
- k8s-master1  172.29.122.34  k8s-master  etcd, kube-apiserver, kube-controller-manager, kube-scheduler
- k8s-node1    172.29.122.35  k8s-node    etcd, kubelet, docker, kube-proxy, flanneld
- k8s-node2    172.29.122.36  k8s-node    etcd, kubelet, docker, kube-proxy, flanneld
Download the required packages (reaching these hosts may require a proxy from mainland China)
Server Binaries
wget https://dl.k8s.io/v1.13.1/kubernetes-server-linux-amd64.tar.gz
Node Binaries
wget https://dl.k8s.io/v1.13.1/kubernetes-node-linux-amd64.tar.gz
ETCD
wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
FLANNEL
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
Passwordless SSH login
- Run on each of the three servers:
ssh-keygen
- Copy each server's public key into the other servers' authorized_keys files:
cat ~/.ssh/id_rsa.pub
- Verify passwordless login:
ssh 172.29.122.34    # if no password is prompted, the configuration succeeded
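A convenient way to distribute the keys is ssh-copy-id. The loop below is a minimal sketch, assuming root SSH access to the three hosts listed above; run it on each server after ssh-keygen:
```
for host in 172.29.122.34 172.29.122.35 172.29.122.36; do
  # appends the local ~/.ssh/id_rsa.pub to the remote authorized_keys
  ssh-copy-id root@$host
done
```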
Install cfssl
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
Deploy the etcd cluster
Create the etcd certificates
mkdir /k8s/etcd/{bin,cfg,ssl} -p
mkdir /k8s/kubernetes/{bin,cfg,ssl} -p
cd /k8s/etcd/ssl/
- etcd CA config
cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
- etcd CA certificate signing request
cat << EOF | tee ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
- etcd server certificate signing request
cat << EOF | tee server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "172.29.122.34",
    "172.29.122.35",
    "172.29.122.36"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
- Generate the etcd CA certificate and private key (initialize the CA)
[root@cn7180 ssl]# ls
ca-config.json  ca-csr.json  server-csr.json
[root@cn7180 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/10/25 16:15:06 [INFO] generating a new CA key and certificate from CSR
2019/10/25 16:15:06 [INFO] generate received request
2019/10/25 16:15:06 [INFO] received CSR
2019/10/25 16:15:06 [INFO] generating key: rsa-2048
2019/10/25 16:15:07 [INFO] encoded CSR
2019/10/25 16:15:07 [INFO] signed certificate with serial number 157979575841044755978110016246328152013058007946
[root@cn7180 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server-csr.json
- Generate the server certificate
[root@cn7180 ssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server-csr.json
[root@cn7180 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
2019/10/25 16:16:32 [INFO] generate received request
2019/10/25 16:16:32 [INFO] received CSR
2019/10/25 16:16:32 [INFO] generating key: rsa-2048
2019/10/25 16:16:33 [INFO] encoded CSR
2019/10/25 16:16:33 [INFO] signed certificate with serial number 391469749828472463047336755332099034604186483334
2019/10/25 16:16:33 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
Install etcd
- Unpack
tar -xvf etcd-v3.3.10-linux-amd64.tar.gz
cd etcd-v3.3.10-linux-amd64/
cp etcd etcdctl /k8s/etcd/bin/
- Configure the main etcd file (the example below is for etcd01; on etcd02/etcd03 change ETCD_NAME and the listen/advertise IPs accordingly)
vim /k8s/etcd/cfg/etcd.conf

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/data1/etcd"
ETCD_LISTEN_PEER_URLS="https://172.29.122.34:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.29.122.34:2379,http://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.29.122.34:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.29.122.34:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.29.122.34:2380,etcd02=https://172.29.122.35:2380,etcd03=https://172.29.122.36:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

#[Security]
ETCD_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_PEER_CERT_FILE="/k8s/etcd/ssl/server.pem"
ETCD_PEER_KEY_FILE="/k8s/etcd/ssl/server-key.pem"
ETCD_PEER_TRUSTED_CA_FILE="/k8s/etcd/ssl/ca.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
- Create the etcd systemd unit
mkdir /data1/etcd
vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
WorkingDirectory=/data1/etcd/
EnvironmentFile=-/k8s/etcd/cfg/etcd.conf
# set GOMAXPROCS to number of processors
ExecStart=/bin/bash -c "GOMAXPROCS=$(nproc) /k8s/etcd/bin/etcd --name=\"${ETCD_NAME}\" --data-dir=\"${ETCD_DATA_DIR}\" --listen-client-urls=\"${ETCD_LISTEN_CLIENT_URLS}\" --listen-peer-urls=\"${ETCD_LISTEN_PEER_URLS}\" --advertise-client-urls=\"${ETCD_ADVERTISE_CLIENT_URLS}\" --initial-cluster-token=\"${ETCD_INITIAL_CLUSTER_TOKEN}\" --initial-cluster=\"${ETCD_INITIAL_CLUSTER}\" --initial-cluster-state=\"${ETCD_INITIAL_CLUSTER_STATE}\" --cert-file=\"${ETCD_CERT_FILE}\" --key-file=\"${ETCD_KEY_FILE}\" --trusted-ca-file=\"${ETCD_TRUSTED_CA_FILE}\" --client-cert-auth=\"${ETCD_CLIENT_CERT_AUTH}\" --peer-cert-file=\"${ETCD_PEER_CERT_FILE}\" --peer-key-file=\"${ETCD_PEER_KEY_FILE}\" --peer-trusted-ca-file=\"${ETCD_PEER_TRUSTED_CA_FILE}\" --peer-client-cert-auth=\"${ETCD_PEER_CLIENT_CERT_AUTH}\""
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
- Start (repeat on all three nodes; the first member will not report healthy until a second member joins)
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
- Check status and logs
systemctl status etcd
journalctl -xe -u etcd
- Verify
[root@cn7181 etcd]# /k8s/etcd/bin/etcdctl member list
19eb556bcae57e88: name=etcd02 peerURLs=http://172.29.122.35:2380 clientURLs=http://172.29.122.35:2379 isLeader=false
61274b6157ca2b8f: name=etcd03 peerURLs=http://172.29.122.36:2380 clientURLs=http://172.29.122.36:2379 isLeader=false
efc8a2a178d1b1a9: name=etcd01 peerURLs=http://172.29.122.34:2380 clientURLs=http://172.29.122.34:2379 isLeader=true
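Beyond listing the members, cluster-health gives a quick read on quorum. The command below is a sketch that reuses the certificate paths created above:
```
/k8s/etcd/bin/etcdctl \
  --ca-file=/k8s/etcd/ssl/ca.pem \
  --cert-file=/k8s/etcd/ssl/server.pem \
  --key-file=/k8s/etcd/ssl/server-key.pem \
  --endpoints="https://172.29.122.34:2379,https://172.29.122.35:2379,https://172.29.122.36:2379" \
  cluster-health
```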
Kubernetes deployment
Generate the Kubernetes certificates and private keys
- Create the Kubernetes CA certificate
cd /k8s/kubernetes/ssl

cat << EOF | tee ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat << EOF | tee ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

[root@cn7179 ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
2019/10/28 10:01:10 [INFO] generating a new CA key and certificate from CSR
2019/10/28 10:01:10 [INFO] generate received request
2019/10/28 10:01:10 [INFO] received CSR
2019/10/28 10:01:10 [INFO] generating key: rsa-2048
2019/10/28 10:01:10 [INFO] encoded CSR
2019/10/28 10:01:10 [INFO] signed certificate with serial number 414907570217782876015915960823085720796825546365
- Create the apiserver certificate (the hosts list must contain the first service-cluster IP, 127.0.0.1, and the IPs of the cluster nodes)
cat << EOF | tee server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.254.0.1",
    "127.0.0.1",
    "172.29.122.34",
    "172.29.122.35",
    "172.29.122.36",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

[root@cn7179 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2019/10/28 10:33:56 [INFO] generate received request
2019/10/28 10:33:56 [INFO] received CSR
2019/10/28 10:33:56 [INFO] generating key: rsa-2048
2019/10/28 10:33:57 [INFO] encoded CSR
2019/10/28 10:33:57 [INFO] signed certificate with serial number 522898096535483187759166310549952944192114896955
2019/10/28 10:33:57 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
- Create the kube-proxy certificate
cat << EOF | tee kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

[root@cn7179 ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2019/10/28 10:35:36 [INFO] generate received request
2019/10/28 10:35:36 [INFO] received CSR
2019/10/28 10:35:36 [INFO] generating key: rsa-2048
2019/10/28 10:35:36 [INFO] encoded CSR
2019/10/28 10:35:36 [INFO] signed certificate with serial number 520966390166817242647900668072955606804992293034
2019/10/28 10:35:36 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
Deploy the Kubernetes master components
The master runs the following components: kube-apiserver, kube-scheduler and kube-controller-manager.
kube-scheduler and kube-controller-manager can run in clustered mode: leader election picks one active process while the other instances block, which is what a three-master HA setup relies on.
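On a running cluster, the current lease holder can be read from the leader-election annotation on the corresponding endpoints object. A minimal sketch (run on the master once kubectl is set up as described later):
```
# Show which instance currently holds the scheduler / controller-manager lease.
kubectl -n kube-system get endpoints kube-scheduler -o yaml | grep holderIdentity
kubectl -n kube-system get endpoints kube-controller-manager -o yaml | grep holderIdentity
```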
- Unpack the binaries
tar -zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-scheduler kube-apiserver kube-controller-manager kubectl /k8s/kubernetes/bin/
Deploy kube-apiserver: create the TLS bootstrapping token
```
[root@cn7179 bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
c6122ea2d49766cab3db3d1d1c114b91
[root@cn7179 bin]# vim /k8s/kubernetes/cfg/token.csv
c6122ea2d49766cab3db3d1d1c114b91,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
```
- Create the kube-apiserver configuration file
vim /k8s/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://172.29.122.34:2379,https://172.29.122.35:2379,https://172.29.122.36:2379 \
--bind-address=172.29.122.34 \
--secure-port=6443 \
--advertise-address=172.29.122.34 \
--allow-privileged=true \
--service-cluster-ip-range=10.254.0.0/16 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/k8s/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/k8s/kubernetes/ssl/server.pem \
--tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem \
--client-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/k8s/etcd/ssl/ca.pem \
--etcd-certfile=/k8s/etcd/ssl/server.pem \
--etcd-keyfile=/k8s/etcd/ssl/server-key.pem"
- Create the kube-apiserver systemd unit
vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-apiserver
ExecStart=/k8s/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
- Start the service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver

[root@cn7179 system]# systemctl status kube-apiserver
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-10-28 11:30:07 CST; 46s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 26956 (kube-apiserver)
    Tasks: 71
   Memory: 250.5M
   CGroup: /system.slice/kube-apiserver.service
           └─26956 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=http://172.29.122.34:2379,http://172.29.122.35:2379,htt...

Oct 28 11:30:47 cn7179.localdomain kube-apiserver[26956]: I1028 11:30:47.682871 26956 wrap.go:47] GET /apis/rbac.authorization.k8s.io/v1beta…34:56442]
Oct 28 11:30:47 cn7179.localdomain kube-apiserver[26956]: I1028 11:30:47.683500 26956 wrap.go:47] GET /apis/storage.k8s.io/v1?timeout=32s: (…34:56442]
Oct 28 11:30:47 cn7179.localdomain kube-apiserver[26956]: I1028 11:30:47.684099 26956 wrap.go:47] GET /apis/storage.k8s.io/v1beta1?timeout=3…34:56442]
Oct 28 11:30:47 cn7179.localdomain kube-apiserver[26956]: I1028 11:30:47.684671 26956 wrap.go:47] GET /apis/admissionregistration.k8s.io/v1b…34:56442]
Oct 28 11:30:47 cn7179.localdomain kube-apiserver[26956]: I1028 11:30:47.685225 26956 wrap.go:47] GET /apis/apiextensions.k8s.io/v1beta1?tim…34:56442]
Oct 28 11:30:47 cn7179.localdomain kube-apiserver[26956]: I1028 11:30:47.685778 26956 wrap.go:47] GET /apis/scheduling.k8s.io/v1beta1?timeou…34:56442]
Oct 28 11:30:47 cn7179.localdomain kube-apiserver[26956]: I1028 11:30:47.686412 26956 wrap.go:47] GET /apis/coordination.k8s.io/v1beta1?time…34:56442]
Oct 28 11:30:47 cn7179.localdomain kube-apiserver[26956]: I1028 11:30:47.734064 26956 wrap.go:47] GET /api/v1/namespaces/default: (1.554722m...:56442]
Oct 28 11:30:47 cn7179.localdomain kube-apiserver[26956]: I1028 11:30:47.735907 26956 wrap.go:47] GET /api/v1/namespaces/default/services/ku...:56442]
Oct 28 11:30:47 cn7179.localdomain kube-apiserver[26956]: I1028 11:30:47.742576 26956 wrap.go:47] GET /api/v1/namespaces/default/endpoints/k...:56442]
Hint: Some lines were ellipsized, use -l to show in full.

[root@cn7179 system]# ps -ef | grep kube-apiserver
root 26956 1 15 11:30 ? 00:00:18 /k8s/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=http://172.29.122.34:2379,http://172.29.122.35:2379,http://172.29.122.36:2379 --bind-address=172.29.122.34 --secure-port=6443 --advertise-address=172.29.122.34 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/k8s/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/k8s/kubernetes/ssl/server.pem --tls-private-key-file=/k8s/kubernetes/ssl/server-key.pem --client-ca-file=/k8s/kubernetes/ssl/ca.pem --service-account-key-file=/k8s/kubernetes/ssl/ca-key.pem --etcd-cafile=/k8s/etcd/ssl/ca.pem --etcd-certfile=/k8s/etcd/ssl/server.pem --etcd-keyfile=/k8s/etcd/ssl/server-key.pem
root 27796 26076 0 11:32 pts/2 00:00:00 grep --color=auto kube-apiserver

[root@cn7179 system]# netstat -tulpn | grep kube-apiserve
tcp 0 0 172.29.122.34:6443 0.0.0.0:* LISTEN 26956/kube-apiserve
tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 26956/kube-apiserve
Deploy kube-scheduler
- Create the kube-scheduler configuration file
Parameter notes: --address: accept http /metrics requests on 127.0.0.1:10251 (kube-scheduler does not yet support serving https); --kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver; --leader-elect=true: cluster mode with leader election enabled; the elected leader does the work while the other instances block.
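Because the scheduler serves plain http on the local address only, a quick liveness check from the master looks like this (sketch, using the default 10251 port mentioned above):
```
curl -s http://127.0.0.1:10251/healthz
curl -s http://127.0.0.1:10251/metrics | head -n 5
```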
- Configure the startup options
vim /k8s/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
- Create the kube-scheduler systemd unit
vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-scheduler
ExecStart=/k8s/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
- Start the service
systemctl daemon-reload
systemctl enable kube-scheduler.service
systemctl start kube-scheduler.service

[root@cn7179 system]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-10-28 11:44:43 CST; 7s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 524 (kube-scheduler)
    Tasks: 41
   Memory: 18.4M
   CGroup: /system.slice/kube-scheduler.service
           └─524 /k8s/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

Oct 28 11:44:46 cn7179.localdomain kube-scheduler[524]: I1028 11:44:46.041342 524 shared_informer.go:123] caches populated
Oct 28 11:44:46 cn7179.localdomain kube-scheduler[524]: I1028 11:44:46.141516 524 shared_informer.go:123] caches populated
Oct 28 11:44:46 cn7179.localdomain kube-scheduler[524]: I1028 11:44:46.241686 524 shared_informer.go:123] caches populated
Oct 28 11:44:46 cn7179.localdomain kube-scheduler[524]: I1028 11:44:46.341865 524 shared_informer.go:123] caches populated
Oct 28 11:44:46 cn7179.localdomain kube-scheduler[524]: I1028 11:44:46.341920 524 controller_utils.go:1027] Waiting for caches to sync for...troller
Oct 28 11:44:46 cn7179.localdomain kube-scheduler[524]: I1028 11:44:46.442127 524 shared_informer.go:123] caches populated
Oct 28 11:44:46 cn7179.localdomain kube-scheduler[524]: I1028 11:44:46.442159 524 controller_utils.go:1034] Caches are synced for schedule...troller
Oct 28 11:44:46 cn7179.localdomain kube-scheduler[524]: I1028 11:44:46.442239 524 leaderelection.go:205] attempting to acquire leader leas...uler...
Oct 28 11:44:46 cn7179.localdomain kube-scheduler[524]: I1028 11:44:46.452280 524 leaderelection.go:214] successfully acquired lease kube-...heduler
Oct 28 11:44:46 cn7179.localdomain kube-scheduler[524]: I1028 11:44:46.552528 524 shared_informer.go:123] caches populated
Hint: Some lines were ellipsized, use -l to show in full.
Deploy kube-controller-manager
- Create the kube-controller-manager configuration file
vim /k8s/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.254.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/k8s/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/k8s/kubernetes/ssl/ca-key.pem \
--root-ca-file=/k8s/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/k8s/kubernetes/ssl/ca-key.pem"
- Create the kube-controller-manager systemd unit
vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-controller-manager
ExecStart=/k8s/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
- Start the service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

[root@cn7179 system]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-10-28 11:48:23 CST; 10s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 2126 (kube-controller)
    Tasks: 68
   Memory: 51.9M
   CGroup: /system.slice/kube-controller-manager.service
           └─2126 /k8s/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0....

Oct 28 11:48:26 cn7179.localdomain kube-controller-manager[2126]: I1028 11:48:26.828755 2126 resource_quota_controller.go:427] syncing resou...tensio
Oct 28 11:48:26 cn7179.localdomain kube-controller-manager[2126]: I1028 11:48:26.828921 2126 resource_quota_monitor.go:180] QuotaMonitor una...licies
Oct 28 11:48:26 cn7179.localdomain kube-controller-manager[2126]: I1028 11:48:26.829005 2126 resource_quota_monitor.go:243] quota synced mon...oved 0
Oct 28 11:48:26 cn7179.localdomain kube-controller-manager[2126]: E1028 11:48:26.829030 2126 resource_quota_controller.go:437] failed to syn...icies"
Oct 28 11:48:26 cn7179.localdomain kube-controller-manager[2126]: I1028 11:48:26.849514 2126 shared_informer.go:123] caches populated
Oct 28 11:48:26 cn7179.localdomain kube-controller-manager[2126]: I1028 11:48:26.849535 2126 controller_utils.go:1034] Caches are synced for...roller
Oct 28 11:48:26 cn7179.localdomain kube-controller-manager[2126]: I1028 11:48:26.849544 2126 garbagecollector.go:142] Garbage collector: all...arbage
Oct 28 11:48:26 cn7179.localdomain kube-controller-manager[2126]: I1028 11:48:26.919340 2126 shared_informer.go:123] caches populated
Oct 28 11:48:26 cn7179.localdomain kube-controller-manager[2126]: I1028 11:48:26.919373 2126 controller_utils.go:1034] Caches are synced for...roller
Oct 28 11:48:26 cn7179.localdomain kube-controller-manager[2126]: I1028 11:48:26.919397 2126 garbagecollector.go:245] synced garbage collector
Hint: Some lines were ellipsized, use -l to show in full.
Verify the master services
- Set the environment variables
vim /etc/profile
PATH=/k8s/kubernetes/bin:$PATH

source /etc/profile
- Check the master component status
[root@cn7179 system]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok
componentstatus/controller-manager   Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
Deploy the Kubernetes nodes
Deploy kubelet
kubelet runs on every worker node. It receives requests from kube-apiserver to manage Pod containers and executes interactive commands such as exec, run and logs. At startup kubelet registers the node with kube-apiserver, and its built-in cAdvisor collects and reports the node's resource usage.
For security, only the secure https port should be opened; requests are authenticated and authorized, and unauthorized access (for example from apiserver or heapster without credentials) is rejected.
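Once the kubelet service further below is running, two quick checks are possible (sketch; the 10255 read-only port assumes the readOnlyPort setting kept in kubelet.config below):
```
# local health endpoint (default healthz port 10248)
curl -s http://127.0.0.1:10248/healthz
# read-only API, lists the pods this kubelet knows about
curl -s http://172.29.122.35:10255/pods | head -c 200
```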
- Install the binaries
tar zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin/
cp kube-proxy kubelet kubectl /k8s/kubernetes/bin/
- Copy the certificates to the node
[root@cn7179 ssl]# scp *.pem 172.29.122.35:$PWD
ca-key.pem                100% 1679   1.9MB/s   00:00
ca.pem                    100% 1359   1.7MB/s   00:00
kube-proxy-key.pem        100% 1679   2.8MB/s   00:00
kube-proxy.pem            100% 1403   1.6MB/s   00:00
server-key.pem            100% 1679   3.2MB/s   00:00
server.pem                100% 1627   3.4MB/s   00:00
- Create the kubelet bootstrap kubeconfig files with a script
vim /k8s/kubernetes/cfg/environment.sh

#!/bin/bash
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=c6122ea2d49766cab3db3d1d1c114b91
KUBE_APISERVER="https://172.29.122.34:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set the context
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Use the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#----------------------
# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
  --certificate-authority=/k8s/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=/k8s/kubernetes/ssl/kube-proxy.pem \
  --client-key=/k8s/kubernetes/ssl/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
- Run the script
[root@cn7180 cfg]# sh environment.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
- Create the kubelet parameter configuration template
vim /k8s/kubernetes/cfg/kubelet.config

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 172.29.122.35
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.254.0.10"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
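The cgroupDriver value above has to match the driver Docker is actually using, otherwise the kubelet will fail to start. A quick check (sketch):
```
# If this prints "systemd", either switch Docker to cgroupfs or change cgroupDriver above to systemd.
docker info 2>/dev/null | grep -i "cgroup driver"
```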
- Create the kubelet configuration file (on node2 change --hostname-override and the address fields to 172.29.122.36)
vim /k8s/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=172.29.122.35 \
--kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/k8s/kubernetes/cfg/bootstrap.kubeconfig \
--config=/k8s/kubernetes/cfg/kubelet.config \
--cert-dir=/k8s/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
- Create the kubelet systemd unit
vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/k8s/kubernetes/cfg/kubelet
ExecStart=/k8s/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
- Bind the kubelet-bootstrap user to the system cluster role
kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
Note: kubectl talks to localhost:8080 by default, so run this on the master.
[root@cn7179 ~]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
- Start the service (on the node)
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet

[root@cn7180 systemd]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-29 10:59:03 CST; 1min 34s ago
 Main PID: 303 (kubelet)
    Tasks: 39
   Memory: 27.7M
   CGroup: /system.slice/kubelet.service
           └─303 /k8s/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=172.29.122.35 --kubeconfig=/k8s/kubernetes/cfg/kubelet.kubeconfig ...

Oct 29 10:59:03 cn7180.localdomain kubelet[303]: I1029 10:59:03.433675 303 feature_gate.go:206] feature gates: &{map[]}
Oct 29 10:59:04 cn7180.localdomain kubelet[303]: I1029 10:59:04.833588 303 server.go:825] Using self-signed cert (/k8s/kubernetes/ssl/kubelet.cr...let.key)
Oct 29 10:59:04 cn7180.localdomain kubelet[303]: I1029 10:59:04.840153 303 mount_linux.go:179] Detected OS with systemd
Oct 29 10:59:04 cn7180.localdomain kubelet[303]: I1029 10:59:04.840257 303 server.go:407] Version: v1.13.1
Oct 29 10:59:04 cn7180.localdomain kubelet[303]: I1029 10:59:04.840359 303 feature_gate.go:206] feature gates: &{map[]}
Oct 29 10:59:04 cn7180.localdomain kubelet[303]: I1029 10:59:04.840466 303 feature_gate.go:206] feature gates: &{map[]}
Oct 29 10:59:04 cn7180.localdomain kubelet[303]: I1029 10:59:04.840626 303 plugins.go:103] No cloud provider specified.
Oct 29 10:59:04 cn7180.localdomain kubelet[303]: I1029 10:59:04.840655 303 server.go:523] No cloud provider specified: "" from the config file: ""
Oct 29 10:59:04 cn7180.localdomain kubelet[303]: I1029 10:59:04.840723 303 bootstrap.go:65] Using bootstrap kubeconfig to generate TLS client ce...fig file
Oct 29 10:59:04 cn7180.localdomain kubelet[303]: I1029 10:59:04.844234 303 bootstrap.go:96] No valid private key and/or certificate found, reusi... new one
Hint: Some lines were ellipsized, use -l to show in full.
- Approve the kubelet CSR on the master. Requests can be approved manually or automatically; automatic approval is recommended because, starting with v1.8, the certificates generated from approved CSRs can be rotated automatically. The steps below show the manual approach. List the pending CSRs:
[root@cn7179 ~]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-qsuUJ5Ug54Rnhnay7nDOMBzsUtHYnzBUhhTykD3PzMw   9m10s   kubelet-bootstrap   Pending
- Approve the node
[root@cn7179 ~]# kubectl certificate approve node-csr-qsuUJ5Ug54Rnhnay7nDOMBzsUtHYnzBUhhTykD3PzMw
certificatesigningrequest.certificates.k8s.io/node-csr-qsuUJ5Ug54Rnhnay7nDOMBzsUtHYnzBUhhTykD3PzMw approved
- Check the CSR again
[root@cn7179 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-qsuUJ5Ug54Rnhnay7nDOMBzsUtHYnzBUhhTykD3PzMw   11m   kubelet-bootstrap   Approved,Issued
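If you prefer the automatic approval mentioned above, one common approach is to let the bootstrap user approve its own node-client CSRs through a built-in cluster role. A sketch (run once on the master; manual approval is then unnecessary for new nodes):
```
kubectl create clusterrolebinding node-client-auto-approve-csr \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --user=kubelet-bootstrap
```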
Deploy kube-proxy
kube-proxy runs on every node. It watches the apiserver for changes to Services and Endpoints and creates forwarding rules to load-balance traffic to the services.
- Create the kube-proxy configuration file
vim /k8s/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=172.29.122.35 \
--cluster-cidr=10.254.0.0/16 \
--kubeconfig=/k8s/kubernetes/cfg/kube-proxy.kubeconfig"
- Create the kube-proxy systemd unit
vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/k8s/kubernetes/cfg/kube-proxy
ExecStart=/k8s/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
- Start the service
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
- Verify
[root@cn7180 system]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-29 11:16:24 CST; 11s ago
 Main PID: 11744 (kube-proxy)
    Tasks: 0
   Memory: 14.7M
   CGroup: /system.slice/kube-proxy.service
           ‣ 11744 /k8s/kubernetes/bin/kube-proxy --logtostderr=true --v=4 --hostname-override=172.29.122.35 --cluster-cidr=10.254.0.0/16 --kubeconfig=/k8s/...

Oct 29 11:16:26 cn7180.localdomain kube-proxy[11744]: I1029 11:16:26.934696 11744 config.go:141] Calling handler.OnEndpointsUpdate
Oct 29 11:16:26 cn7180.localdomain kube-proxy[11744]: I1029 11:16:26.934757 11744 config.go:141] Calling handler.OnEndpointsUpdate
Oct 29 11:16:28 cn7180.localdomain kube-proxy[11744]: I1029 11:16:28.940522 11744 config.go:141] Calling handler.OnEndpointsUpdate
Oct 29 11:16:28 cn7180.localdomain kube-proxy[11744]: I1029 11:16:28.940582 11744 config.go:141] Calling handler.OnEndpointsUpdate
Oct 29 11:16:30 cn7180.localdomain kube-proxy[11744]: I1029 11:16:30.947674 11744 config.go:141] Calling handler.OnEndpointsUpdate
Oct 29 11:16:30 cn7180.localdomain kube-proxy[11744]: I1029 11:16:30.947763 11744 config.go:141] Calling handler.OnEndpointsUpdate
Oct 29 11:16:32 cn7180.localdomain kube-proxy[11744]: I1029 11:16:32.953965 11744 config.go:141] Calling handler.OnEndpointsUpdate
Oct 29 11:16:32 cn7180.localdomain kube-proxy[11744]: I1029 11:16:32.954049 11744 config.go:141] Calling handler.OnEndpointsUpdate
Oct 29 11:16:34 cn7180.localdomain kube-proxy[11744]: I1029 11:16:34.959989 11744 config.go:141] Calling handler.OnEndpointsUpdate
Oct 29 11:16:34 cn7180.localdomain kube-proxy[11744]: I1029 11:16:34.960058 11744 config.go:141] Calling handler.OnEndpointsUpdate
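This kube-proxy runs in the default iptables mode, so once Services exist its chains should be visible in the nat table. A minimal check (sketch):
```
# KUBE-SERVICES is maintained by kube-proxy in iptables mode
iptables -t nat -L KUBE-SERVICES -n | head
```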
Deploy the flannel network
Without flannel, Pods on different nodes cannot reach each other; only Pods on the same node can communicate. To keep the walkthrough simple flannel is installed last, but note that the flanneld service must be started before Docker.
When flanneld starts it does the following:
- reads the network configuration from etcd
- allocates a subnet for the node and registers it in etcd
- writes the subnet information to /run/flannel/subnet.env
Register the Pod network in etcd
flanneld v0.10.0 does not support the etcd v3 API, so the configuration key and network data are written with the etcd v2 API.
The Pod network ${CLUSTER_CIDR} written here must be a /16 and must match the --cluster-cidr value passed to kube-controller-manager.
```
[root@cn7180 bin]# ETCDCTL_API=2 /k8s/etcd/bin/etcdctl --ca-file=/k8s/etcd/ssl/ca.pem --cert-file=/k8s/etcd/ssl/server.pem --key-file=/k8s/etcd/ssl/server-key.pem --endpoints="http://172.29.122.34:2379,http://172.29.122.35:2379,http://172.29.122.36:2379" set /k8s/network/config '{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "10.254.0.0/16", "Backend": {"Type": "vxlan"}}
```
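Once the flanneld instances on the nodes have started (next section), the per-node subnets they registered can be listed from the same prefix; a sketch, again using the v2 API:
```
ETCDCTL_API=2 /k8s/etcd/bin/etcdctl \
  --endpoints="http://172.29.122.34:2379,http://172.29.122.35:2379,http://172.29.122.36:2379" \
  ls /k8s/network/subnets
```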
Install flannel
- Unpack and install
tar -xvf flannel-v0.10.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /k8s/kubernetes/bin/
- Configure flanneld
vim /k8s/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=http://172.29.122.34:2379,http://172.29.122.35:2379,http://172.29.122.36:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem -etcd-prefix=/k8s/network"
- Create the flanneld systemd unit
vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/k8s/kubernetes/cfg/flanneld
ExecStart=/k8s/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
mk-docker-opts.sh writes the Pod subnet assigned to flanneld into /run/flannel/subnet.env; when Docker starts afterwards it uses the environment variables in that file to configure the docker0 bridge.
flanneld talks to the other nodes over the interface holding the system default route. On hosts with multiple interfaces (for example an internal and a public one), use the -iface flag to pick the communication interface. flanneld must run as root.
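For reference, the internal NIC on these hosts is eno1 (see the ip a output further below), so pinning the interface would look like the sketch below; it is only needed on multi-homed nodes:
```
# /k8s/kubernetes/cfg/flanneld — same options as above plus an explicit interface
FLANNEL_OPTIONS="--etcd-endpoints=http://172.29.122.34:2379,http://172.29.122.35:2379,http://172.29.122.36:2379 -etcd-cafile=/k8s/etcd/ssl/ca.pem -etcd-certfile=/k8s/etcd/ssl/server.pem -etcd-keyfile=/k8s/etcd/ssl/server-key.pem -etcd-prefix=/k8s/network -iface=eno1"
```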
- Configure Docker to use the flannel subnet: set EnvironmentFile=/run/flannel/subnet.env and ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
- Start
systemctl daemon-reload
systemctl enable flanneld
systemctl stop docker
systemctl start flanneld
systemctl start docker    # start Docker again so it picks up /run/flannel/subnet.env
- Verify the services
[root@cn7181 k8s]# systemctl status flanneld.service
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-29 11:38:49 CST; 24s ago
  Process: 48141 ExecStartPost=/k8s/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env (code=exited, status=0/SUCCESS)
 Main PID: 48062 (flanneld)
    Tasks: 28
   Memory: 15.3M
   CGroup: /system.slice/flanneld.service
           └─48062 /k8s/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=http://172.29.122.34:2379,http://172.29.122.35:2379,http://172.29.122.36:237...

Oct 29 11:38:49 cn7181.localdomain flanneld[48062]: I1029 11:38:49.936742 48062 iptables.go:137] Deleting iptables rule: -d 10.254.0.0/16 -j ACCEPT
Oct 29 11:38:49 cn7181.localdomain flanneld[48062]: I1029 11:38:49.937795 48062 iptables.go:137] Deleting iptables rule: ! -s 10.254.0.0/16 -...j RETURN
Oct 29 11:38:49 cn7181.localdomain flanneld[48062]: I1029 11:38:49.938305 48062 iptables.go:125] Adding iptables rule: -s 10.254.0.0/16 -j ACCEPT
Oct 29 11:38:49 cn7181.localdomain flanneld[48062]: I1029 11:38:49.939160 48062 iptables.go:137] Deleting iptables rule: ! -s 10.254.0.0/16 -...SQUERADE
Oct 29 11:38:49 cn7181.localdomain flanneld[48062]: I1029 11:38:49.941014 48062 iptables.go:125] Adding iptables rule: -s 10.254.0.0/16 -d 10...j RETURN
Oct 29 11:38:49 cn7181.localdomain flanneld[48062]: I1029 11:38:49.941483 48062 iptables.go:125] Adding iptables rule: -d 10.254.0.0/16 -j ACCEPT
Oct 29 11:38:49 cn7181.localdomain flanneld[48062]: I1029 11:38:49.944109 48062 iptables.go:125] Adding iptables rule: -s 10.254.0.0/16 ! -d ...SQUERADE
Oct 29 11:38:49 cn7181.localdomain flanneld[48062]: I1029 11:38:49.947077 48062 iptables.go:125] Adding iptables rule: ! -s 10.254.0.0/16 -d ...j RETURN
Oct 29 11:38:49 cn7181.localdomain flanneld[48062]: I1029 11:38:49.950074 48062 iptables.go:125] Adding iptables rule: ! -s 10.254.0.0/16 -d ...SQUERADE
Oct 29 11:38:49 cn7181.localdomain systemd[1]: Started Flanneld overlay address etcd agent.
Hint: Some lines were ellipsized, use -l to show in full.
[root@cn7180 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=10.254.56.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=10.254.56.1/24 --ip-masq=false --mtu=1450"

[root@cn7180 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 40:f2:e9:76:2e:38 brd ff:ff:ff:ff:ff:ff
    inet 172.29.122.35/24 brd 172.29.122.255 scope global noprefixroute eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::42f2:e9ff:fe76:2e38/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: enp15s0f0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0a:f7:49:0d:e0 brd ff:ff:ff:ff:ff:ff
4: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 40:f2:e9:76:2e:39 brd ff:ff:ff:ff:ff:ff
5: enp15s0f1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0a:f7:49:0d:e2 brd ff:ff:ff:ff:ff:ff
6: eno3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 40:f2:e9:76:2e:3a brd ff:ff:ff:ff:ff:ff
8: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:1d:b2:dd brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
9: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:1d:b2:dd brd ff:ff:ff:ff:ff:ff
17: enp0s29u1u1u5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 42:f2:e9:76:2e:3f brd ff:ff:ff:ff:ff:ff
18: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether 26:fd:fc:20:f3:a1 brd ff:ff:ff:ff:ff:ff
    inet 10.254.56.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::24fd:fcff:fe20:f3a1/64 scope link
       valid_lft forever preferred_lft forever
19: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:de:30:c6:19 brd ff:ff:ff:ff:ff:ff
    inet 10.1.61.1/24 brd 10.1.61.255 scope global docker0
       valid_lft forever preferred_lft forever
Verification
```
[root@cn7179 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE    VERSION
172.29.122.35   Ready    <none>   118m   v1.13.1
172.29.122.36   Ready    <none>   23s    v1.13.1
```
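As a final sanity check of the flannel overlay, each node writes its assigned subnet to /run/flannel/subnet.env, and the bridge address of one node should be reachable from the other across the vxlan interface. A minimal sketch (10.254.56.1 is the example subnet shown above; substitute the value from your own subnet.env):
```
# on node A: show the flannel-assigned bridge address
grep DOCKER_OPT_BIP /run/flannel/subnet.env
# on node B: ping node A's bridge address over the overlay
ping -c 3 10.254.56.1
```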