k8s 1.17.3 binary installation
k8s and etcd package downloads
https://github.com/kubernetes/kubernetes/releases
https://github.com/etcd-io/etcd/releases
flannel documentation and download
Documentation
https://github.com/coreos/flannel
Download
https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
System version
[root@k8s flannel]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
Component versions
[root@k8s mysql]# etcdctl --version
etcdctl version: 3.2.24
API version: 2
[root@k8s mysql]# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:14:22Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s flannel]# /data/flannel/flanneld -version
v0.11.0
[root@k8s flannel]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
coredns/coredns 1.6.2 bf261d157914 8 months ago 44.1MB
busybox 1.28 8c811b4aec35 23 months ago 1.15MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 2 years ago 742kB
Cluster hosts
k8s : 192.168.73.100
k8s-node1 : 192.168.73.101
k8s-node2 : 192.168.73.102
1. Disable firewalld and selinux (all hosts)
vi /etc/selinux/config   # set SELINUX=disabled
systemctl stop firewalld
systemctl disable firewalld
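A non-interactive equivalent of the selinux edit (a sketch, assuming the stock /etc/selinux/config layout):
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0    # apply immediately; the config change takes permanent effect after a reboot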
2. Configure name resolution in /etc/hosts (all hosts)
vim /etc/hosts
192.168.73.100 k8s
192.168.73.101 k8s-node1
192.168.73.102 k8s-node2
3. Add the kernel parameter file /etc/sysctl.d/k8s.conf (all hosts)
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
4. Run the following commands (all hosts)
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
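Optional check that the module and parameters took effect:
lsmod | grep br_netfilter                   # module loaded
sysctl net.bridge.bridge-nf-call-iptables   # should print 1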
5. Install docker (all hosts)
yum install -y yum-utils device-mapper-persistent-data lvm2
wget -O /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
List the available versions
yum list docker-ce.x86_64 --showduplicates |sort -r
Install the specified docker version (all hosts)
yum makecache fast
yum install docker-ce-18.06.1.ce-3.el7 -y
Check the docker version
docker -v
Configure a registry mirror (all hosts)
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://45203393.m.daocloud.io
systemctl daemon-reload
systemctl restart docker
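The daocloud script writes the mirror into the docker daemon configuration; a manual equivalent (a sketch, assuming no pre-existing /etc/docker/daemon.json) is:
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["http://45203393.m.daocloud.io"]
}
EOF
systemctl daemon-reload
systemctl restart docker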
6. Disable swap (all hosts)
swapoff -a
sysctl -p /etc/sysctl.d/k8s.conf
Comment out the swap entry in /etc/fstab
mount -a
echo "KUBELET_EXTRA_ARGS=--fail-swap-on=false" > /etc/sysconfig/kubelet
7. Installing the k8s services
etcd, kube-apiserver, kube-controller-manager and kube-scheduler run on the master; kubelet and kube-proxy can be added there as well.
kubelet and kube-proxy run on the nodes.
To debug a service that fails to start, run its binary in the foreground with the arguments from its configuration file (binary name + config-file contents). Example:
kube-scheduler --kubeconfig=/etc/kubernetes/kubeconfig --logtostderr=false --log-dir=/var/log/kubernetes --v=0
Create the systemd unit and configuration files
etcd service
[root@kubernetes etcd]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Service
After=network.target
[Service]
Type=simple
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd
[Install]
WantedBy=multi-user.target
[root@kubernetes etcd]# mkdir /var/lib/etcd/ /etc/etcd/
[root@kubernetes etcd]#touch /etc/etcd/etcd.conf
kube-apiserver service
[root@kubernetes etcd]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=http://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
Wants=etcd.service
[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@kubernetes etcd]# mkdir /etc/kubernetes/
[root@kubernetes etcd]# cat /etc/kubernetes/apiserver
KUBE_API_ARGS=" \
--etcd-servers=http://192.168.73.100:2379,http://127.0.0.1:2379 \
--insecure-bind-address=0.0.0.0 \
--insecure-port=8080 \
--service-cluster-ip-range=169.169.0.0/16 \
--service-node-port-range=1-65535 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,DefaultStorageClass,DefaultTolerationSeconds,ValidatingAdmissionWebhook,ResourceQuota \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0"
kube-controller-manager service
[root@kubernetes etcd]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@kubernetes etcd]# cat /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0"
[root@kubernetes ~]# cat /etc/kubernetes/kubeconfig
apiVersion: v1
kind: Config
users:
- name: client
  user:
clusters:
- name: default
  cluster:
    server: http://192.168.73.100:8080
contexts:
- context:
    cluster: default
    user: client
  name: default
current-context: default
kube-scheduler service
[root@kubernetes etcd]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@kubernetes etcd]#
[root@kubernetes etcd]# cat /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0"
kubelet service (--hostname-override: set this parameter to the local host's IP)
[root@kubernetes etcd]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target
[root@kubernetes etcd]#
[root@kubernetes etcd]# cat /etc/kubernetes/kubelet
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--hostname-override=192.168.73.100 \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0"
kube-proxy service
[root@kubernetes etcd]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-proxy Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.service
[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@kubernetes etcd]#
[root@kubernetes etcd]# cat /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
Installing the binaries
Master node
[root@k8s ~]# mkdir /data
[root@kubernetes data]# cd /data/
[root@kubernetes data]# ls
etcd-v3.2.24-linux-amd64.tar.gz go1.13.4.linux-amd64.tar.gz
flannel-v0.11.0-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
[root@kubernetes log]# mkdir /var/lib/etcd/
[root@kubernetes log]# mkdir /var/log/kubernetes
[root@kubernetes bin]# mkdir /var/lib/kubelet
[root@kubernetes data]# tar -xf etcd-v3.2.24-linux-amd64.tar.gz
[root@kubernetes data]# ls
etcd-v3.2.24-linux-amd64 go1.13.4.linux-amd64.tar.gz
etcd-v3.2.24-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
flannel-v0.11.0-linux-amd64.tar.gz
[root@kubernetes data]#
[root@kubernetes data]# mv etcd-v3.2.24-linux-amd64 etcd
[root@kubernetes etcd]# cd /data/
[root@kubernetes data]# ls
etcd go1.13.4.linux-amd64.tar.gz
etcd-v3.2.24-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
flannel-v0.11.0-linux-amd64.tar.gz
[root@kubernetes data]#
[root@kubernetes data]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@kubernetes data]# ls
etcd go1.13.4.linux-amd64.tar.gz
etcd-v3.2.24-linux-amd64.tar.gz kubernetes
flannel-v0.11.0-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
[root@kubernetes log]# cd /data/etcd
[root@kubernetes etcd]# ls
Documentation etcd etcdctl README-etcdctl.md README.md READMEv2-etcdctl.md
[root@kubernetes etcd]# cp etcd /usr/bin/etcd
[root@kubernetes etcd]# cp etcdctl /usr/bin/etcdctl
[root@kubernetes etcd]#
[root@kubernetes etcd]#
[root@kubernetes etcd]# cd /data/kubernetes/server/bin/
[root@kubernetes bin]# ls
apiextensions-apiserver kube-apiserver.docker_tag kube-controller-manager.docker_tag kubelet kube-proxy.tar kube-scheduler.tar
kubeadm kube-apiserver.tar kube-controller-manager.tar kube-proxy kube-scheduler mounter
kube-apiserver kube-controller-manager kubectl kube-proxy.docker_tag kube-scheduler.docker_tag
[root@kubernetes bin]#
[root@kubernetes bin]#
[root@kubernetes bin]# cp kube-apiserver /usr/bin/kube-apiserver
[root@kubernetes bin]# cp kube-controller-manager /usr/bin/kube-controller-manager
[root@kubernetes bin]# cp kubectl /usr/bin/kubectl
[root@kubernetes bin]# cp kubelet /usr/bin/kubelet
[root@kubernetes bin]# cp kube-proxy /usr/bin/kube-proxy
[root@kubernetes bin]# cp kube-scheduler /usr/bin/kube-scheduler
[root@k8s-node2 ~]# mkdir /data
[root@k8s-node1 ~]# mkdir /data
[root@k8s data]# scp flannel-v0.11.0-linux-amd64.tar.gz 192.168.73.102:/data
root@192.168.73.102's password:
flannel-v0.11.0-linux-amd64.tar.gz 100% 9342KB 71.8MB/s 00:00
[root@k8s data]# scp flannel-v0.11.0-linux-amd64.tar.gz 192.168.73.101:/data
root@192.168.73.101's password:
flannel-v0.11.0-linux-amd64.tar.gz 100% 9342KB 60.2MB/s 00:00
[root@k8s data]# scp kubernetes-server-linux-amd64.tar.gz 192.168.73.101:/data
root@192.168.73.101's password:
Permission denied, please try again.
root@192.168.73.101's password:
kubernetes-server-linux-amd64.tar.gz 100% 343MB 62.5MB/s 00:05
[root@k8s data]# scp kubernetes-server-linux-amd64.tar.gz 192.168.73.102:/data
root@192.168.73.102's password:
kubernetes-server-linux-amd64.tar.gz 100% 343MB 65.2MB/s 00:05
[root@k8s data]#
Node hosts
[root@kubernetes-node1 data]# cd /data/
[root@kubernetes-node1 data]# ls
flannel-v0.11.0-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
go1.13.4.linux-amd64.tar.gz
[root@kubernetes-node1 data]#
[root@kubernetes-node1 data]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@kubernetes-node1 bin]# cd /data/kubernetes/server/bin/
[root@kubernetes-node1 bin]# ls
apiextensions-apiserver kube-apiserver.docker_tag kube-controller-manager.docker_tag kubelet kube-proxy.tar kube-scheduler.tar
kubeadm kube-apiserver.tar kube-controller-manager.tar kube-proxy kube-scheduler mounter
kube-apiserver kube-controller-manager kubectl kube-proxy.docker_tag kube-scheduler.docker_tag
[root@kubernetes-node1 bin]#
[root@kubernetes-node1 bin]# cp kubelet /usr/bin/kubelet
[root@kubernetes-node1 bin]# cp kubectl /usr/bin/kubectl
[root@kubernetes-node1 bin]# cp kube-proxy /usr/bin/kube-proxy
[root@kubernetes-node1 bin]# mkdir /var/log/kubernetes
[root@kubernetes-node1 bin]# mkdir /var/lib/kubelet
[root@kubernetes-node2 data]# cd /data/
[root@kubernetes-node2 data]# ls
flannel-v0.11.0-linux-amd64.tar.gz kubernetes-server-linux-amd64.tar.gz
go1.13.4.linux-amd64.tar.gz
[root@kubernetes-node2 data]#
[root@kubernetes-node2 data]#
[root@kubernetes-node2 data]# tar -xf kubernetes-server-linux-amd64.tar.gz
[root@kubernetes-node2 ~]# cd /data/kubernetes/server/bin/
[root@kubernetes-node2 bin]# ls
apiextensions-apiserver kube-apiserver.docker_tag kube-controller-manager.docker_tag kubelet kube-proxy.tar kube-scheduler.tar
kubeadm kube-apiserver.tar kube-controller-manager.tar kube-proxy kube-scheduler mounter
kube-apiserver kube-controller-manager kubectl kube-proxy.docker_tag kube-scheduler.docker_tag
[root@kubernetes-node2 bin]#
[root@kubernetes-node2 bin]# cp kubectl /usr/bin/kubectl
[root@kubernetes-node2 bin]# cp kubelet /usr/bin/kubelet
[root@kubernetes-node2 bin]# cp kube-proxy /usr/bin/kube-proxy
[root@kubernetes-node2 bin]# mkdir /var/log/kubernetes
[root@kubernetes-node2 log]# mkdir /var/lib/kubelet
Start the services (in the order listed; after starting each service, check its state with: systemctl status <service-name>)
Note: kubelet and kube-proxy on the nodes need time to register with the master and may fail to start at first; they come up normally after a short while.
[root@kubernetes etcd]# systemctl daemon-reload
[root@kubernetes etcd]# systemctl enable etcd
[root@kubernetes etcd]# systemctl enable kube-apiserver
[root@kubernetes etcd]# systemctl enable kube-controller-manager
[root@kubernetes etcd]# systemctl enable kube-scheduler
[root@kubernetes etcd]# systemctl enable kubelet
[root@kubernetes etcd]# systemctl enable kube-proxy
[root@kubernetes etcd]# systemctl start etcd
[root@kubernetes etcd]# systemctl start kube-apiserver
[root@kubernetes etcd]# systemctl start kube-controller-manager
[root@kubernetes etcd]# systemctl start kube-scheduler
[root@kubernetes etcd]# systemctl start kubelet
[root@kubernetes etcd]# systemctl start kube-proxy
[root@kubernetes etcd]# systemctl status etcd
[root@kubernetes etcd]# systemctl status kube-apiserver
[root@kubernetes etcd]# systemctl status kube-controller-manager
[root@kubernetes etcd]# systemctl status kube-scheduler
[root@kubernetes etcd]# systemctl status kubelet
[root@kubernetes etcd]# systemctl status kube-proxy
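The same enable/start/status sequence, expressed as a loop (identical effect):
for s in etcd kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
  systemctl enable $s && systemctl start $s
  systemctl status $s --no-pager
done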
For the node hosts, use the kubelet and kube-proxy unit files, service configuration files and kubeconfig file shown above.
In each node's kubelet configuration file, change --hostname-override to that host's IP.
Confirm on the master that the nodes have registered:
[root@kubernetes etcd]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.73.100 Ready <none> 82m v1.17.3
192.168.73.101 Ready <none> 81m v1.17.3
192.168.73.102 Ready <none> 51m v1.17.3
[root@kubernetes etcd]# kubectl get ns
NAME STATUS AGE
default Active 108m
kube-node-lease Active 108m
kube-public Active 108m
kube-system Active 108m
8. Authentication
1. Mutual certificate authentication based on CA-signed certificates
1.1. Create the CA certificate files and set the kube-apiserver startup parameters
[root@kubernetes ~]# mkdir -p /var/lib/kubernetes
[root@kubernetes ~]# cd /var/lib/kubernetes/
[root@kubernetes kubernetes]# openssl genrsa -out ca.key 2048
Generating RSA private key, 2048 bit long modulus
.............................+++
........+++
e is 65537 (0x10001)
[root@kubernetes kubernetes]# openssl req -x509 -new -nodes -key ca.key -subj "/CN=kubernetes" -days 5000 -out ca.crt
[root@kubernetes kubernetes]# openssl genrsa -out server.key 2048
Generating RSA private key, 2048 bit long modulus
....................................+++
.................+++
e is 65537 (0x10001)
[root@kubernetes kubernetes]# ls
ca.crt ca.key server.key
[root@kubernetes kubernetes]# vi master_ssl.cnf
[root@kubernetes kubernetes]# cat master_ssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s-master
DNS.6 = k8s
IP.1 = 169.169.0.1
IP.2 = 192.168.73.100
[root@kubernetes kubernetes]# openssl req -new -key server.key -subj "/CN=kubernetes" -config master_ssl.cnf -out server.csr
[root@kubernetes kubernetes]# openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile master_ssl.cnf -out server.crt
Signature ok
subject=/CN=kubernetes
Getting CA Private Key
[root@kubernetes kubernetes]#
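Optionally verify the generated certificates before wiring them into kube-apiserver (standard openssl checks):
openssl x509 -in server.crt -noout -subject -dates   # confirm subject and validity period
openssl verify -CAfile ca.crt server.crt             # should print: server.crt: OK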
Set the following three kube-apiserver startup parameters
--client-ca-file=/var/lib/kubernetes/ca.crt
--tls-private-key-file=/var/lib/kubernetes/server.key
--tls-cert-file=/var/lib/kubernetes/server.crt
Disable the insecure port and enable the secure port 6443
--insecure-port=0
--secure-port=6443
[root@kubernetes kubernetes]# cp /etc/kubernetes/apiserver /etc/kubernetes/apiserver.bak
[root@kubernetes kubernetes]#
[root@kubernetes kubernetes]# vi /etc/kubernetes/apiserver
[root@kubernetes kubernetes]#
[root@kubernetes kubernetes]# cat /etc/kubernetes/apiserver
KUBE_API_ARGS=" \
--etcd-servers=http://127.0.0.1:2379 \
--insecure-bind-address=0.0.0.0 \
--insecure-port=0 \
--service-cluster-ip-range=169.169.0.0/16 \
--service-node-port-range=1-65535 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,DefaultStorageClass,DefaultTolerationSeconds,ValidatingAdmissionWebhook,ResourceQuota \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0 \
--client-ca-file=/var/lib/kubernetes/ca.crt \
--tls-private-key-file=/var/lib/kubernetes/server.key \
--tls-cert-file=/var/lib/kubernetes/server.crt \
--secure-port=6443"
[root@kubernetes kubernetes]# systemctl restart kube-apiserver
[root@kubernetes kubernetes]# systemctl status kube-apiserver
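A quick sanity check that the secure port is serving with the new certificate (the client certificates are only created in section 1.2, so this just verifies the TLS handshake; authorization is still at its default here):
curl --cacert /var/lib/kubernetes/ca.crt https://192.168.73.100:6443/version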
1.2. Create the kube-controller-manager client certificate and key, and set its startup parameters
[root@kubernetes kubernetes]# openssl genrsa -out cs_client.key 2048
Generating RSA private key, 2048 bit long modulus
.................................................................+++
....................+++
e is 65537 (0x10001)
[root@kubernetes kubernetes]# openssl req -new -key cs_client.key -subj "/CN=kubernetes" -out cs_client.csr
[root@kubernetes kubernetes]# openssl x509 -req -in cs_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out cs_client.crt -days 5000
Signature ok
subject=/CN=kubernetes
Getting CA Private Key
[root@kubernetes kubernetes]# ls
ca.crt ca.srl cs_client.csr master_ssl.cnf server.csr
ca.key cs_client.crt cs_client.key server.crt server.key
[root@kubernetes kubernetes]#
[root@kubernetes kubernetes]# cp /etc/kubernetes/kubeconfig /etc/kubernetes/kubeconfig.bak
[root@kubernetes kubernetes]# vim /etc/kubernetes/kubeconfig
[root@kubernetes kubernetes]#
[root@kubernetes kubernetes]# cat /etc/kubernetes/kubeconfig
apiVersion: v1
kind: Config
users:
- name: controllermanager
  user:
    client-certificate: /var/lib/kubernetes/cs_client.crt
    client-key: /var/lib/kubernetes/cs_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /var/lib/kubernetes/ca.crt
    server: https://192.168.73.100:6443
contexts:
- context:
    cluster: local
    user: controllermanager
  name: my-context
current-context: my-context
Set the kube-controller-manager startup parameters.
Adding --service-account-key-file here caused errors, so --service-account-private-key-file is used instead for now:
--service-account-key-file=/var/lib/kubernetes/server.key
--root-ca-file=/var/lib/kubernetes/ca.crt
--kubeconfig=/etc/kubernetes/kubeconfig
[root@kubernetes kubernetes]# cp /etc/kubernetes/controller-manager /etc/kubernetes/controller-manager.bak
[root@kubernetes kubernetes]# vim /etc/kubernetes/controller-manager
[root@kubernetes kubernetes]# cat /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS="--kubeconfig=/etc/kubernetes/kubeconfig \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0 \
--service-account-private-key-file=/var/lib/kubernetes/server.key \
--root-ca-file=/var/lib/kubernetes/ca.crt"
[root@kubernetes kubernetes]# systemctl restart kube-controller-manager
[root@kubernetes kubernetes]# systemctl status kube-controller-manager
1.3. Set the kube-scheduler startup parameter
--kubeconfig=/etc/kubernetes/kubeconfig
The existing configuration file already contains this parameter, so just restart the kube-scheduler service
[root@kubernetes kubernetes]# systemctl restart kube-scheduler
[root@kubernetes kubernetes]# systemctl status kube-scheduler
1.4. Create each node's kubelet client certificate and key, and set the startup parameters
Create a directory on each node to hold the certificates
1.4.1. master
Create /var/lib/kubernetes on node1 and node2, then copy the CA files into that directory on each node
[root@kubernetes-node1 ~]# mkdir /var/lib/kubernetes
[root@kubernetes-node2 ~]# mkdir /var/lib/kubernetes
[root@kubernetes kubernetes]# scp ca.crt ca.key 192.168.73.101:/var/lib/kubernetes/
root@192.168.73.101's password:
ca.crt 100% 1099 873.2KB/s 00:00
ca.key 100% 1679 1.4MB/s 00:00
[root@kubernetes kubernetes]# scp ca.crt ca.key 192.168.73.102:/var/lib/kubernetes/
root@192.168.73.102's password:
ca.crt 100% 1099 415.6KB/s 00:00
ca.key 100% 1679 1.2MB/s 00:00
[root@kubernetes kubernetes]#
Create the certificates on the master
[root@kubernetes kubernetes]# openssl genrsa -out kubelet_client.key 2048
Generating RSA private key, 2048 bit long modulus
.....................................+++
..................................................................................................................................................................+++
e is 65537 (0x10001)
[root@kubernetes kubernetes]# openssl req -new -key kubelet_client.key -subj "/CN=192.168.73.100" -out kubelet_client.csr
[root@kubernetes kubernetes]# openssl x509 -req -in kubelet_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000
Signature ok
subject=/CN=192.168.73.100
Getting CA Private Key
[root@kubernetes kubernetes]# vim /etc/kubernetes/kubeconfigbak
[root@kubernetes kubernetes]# cat /etc/kubernetes/kubeconfigbak
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /var/lib/kubernetes/kubelet_client.crt
    client-key: /var/lib/kubernetes/kubelet_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /var/lib/kubernetes/ca.crt
    server: https://192.168.73.100:6443
contexts:
- context:
    cluster: local
    user: kubelet
  name: my-context
current-context: my-context
Set the kubelet startup parameter
--kubeconfig=/etc/kubernetes/kubeconfigbak
[root@kubernetes kubernetes]# vi /etc/kubernetes/kubelet
[root@kubernetes kubernetes]#
[root@kubernetes kubernetes]#
[root@kubernetes kubernetes]# cat /etc/kubernetes/kubelet
KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubeconfigbak \
--hostname-override=192.168.73.100 \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=0"
[root@kubernetes kubernetes]# systemctl restart kubelet
[root@kubernetes kubernetes]# systemctl status kubelet
Set the kube-proxy startup parameter
--kubeconfig=/etc/kubernetes/kubeconfigbak
[root@kubernetes kubernetes]# vi /etc/kubernetes/proxy
[root@kubernetes kubernetes]# cat /etc/kubernetes/proxy
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kubeconfigbak \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
[root@kubernetes kubernetes]# systemctl restart kube-proxy
[root@kubernetes kubernetes]# systemctl status kube-proxy
1.4.2. node1
[root@kubernetes-node1 ~]# cd /var/lib/kubernetes/
[root@kubernetes-node1 kubernetes]# ls
ca.crt ca.key
[root@kubernetes-node1 kubernetes]# openssl genrsa -out kubelet_client.key 2048
Generating RSA private key, 2048 bit long modulus
................................+++
..................+++
e is 65537 (0x10001)
[root@kubernetes-node1 kubernetes]# openssl req -new -key kubelet_client.key -subj "/CN=192.168.73.101" -out kubelet_client.csr
[root@kubernetes-node1 kubernetes]# openssl x509 -req -in kubelet_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000
Signature ok
subject=/CN=192.168.73.101
Getting CA Private Key
[root@kubernetes-node1 kubernetes]# cp /etc/kubernetes/kubeconfig /etc/kubernetes/kubeconfig.bak
[root@kubernetes-node1 kubernetes]# vi /etc/kubernetes/kubeconfig
[root@kubernetes-node1 kubernetes]# cat /etc/kubernetes/kubeconfig
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /var/lib/kubernetes/kubelet_client.crt
    client-key: /var/lib/kubernetes/kubelet_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /var/lib/kubernetes/ca.crt
    server: https://192.168.73.100:6443
contexts:
- context:
    cluster: local
    user: kubelet
  name: my-context
current-context: my-context
Set the kubelet startup parameter
--kubeconfig=/etc/kubernetes/kubeconfig
The existing kubelet configuration file already contains this parameter, so just restart the kubelet service
[root@kubernetes-node1 kubernetes]# systemctl restart kubelet
[root@kubernetes-node1 kubernetes]# systemctl status kubelet
Set the kube-proxy startup parameter
--kubeconfig=/etc/kubernetes/kubeconfig
The existing kube-proxy configuration file already contains this parameter, so just restart the kube-proxy service
[root@kubernetes-node1 kubernetes]# systemctl restart kube-proxy
[root@kubernetes-node1 kubernetes]# systemctl status kube-proxy
1.4.3. node2
[root@kubernetes-node2 ~]# cd /var/lib/kubernetes/
[root@kubernetes-node2 kubernetes]# openssl genrsa -out kubelet_client.key 2048
Generating RSA private key, 2048 bit long modulus
...+++
...................................+++
e is 65537 (0x10001)
[root@kubernetes-node2 kubernetes]# openssl req -new -key kubelet_client.key -subj "/CN=192.168.73.102" -out kubelet_client.csr
[root@kubernetes-node2 kubernetes]# openssl x509 -req -in kubelet_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000
Signature ok
subject=/CN=192.168.73.102
Getting CA Private Key
[root@kubernetes-node2 kubernetes]# cp /etc/kubernetes/kubeconfig /etc/kubernetes/kubeconfig.bak
[root@kubernetes-node2 kubernetes]# vi /etc/kubernetes/kubeconfig
[root@kubernetes-node2 kubernetes]#
[root@kubernetes-node2 kubernetes]#
[root@kubernetes-node2 kubernetes]# cat /etc/kubernetes/kubeconfig
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /var/lib/kubernetes/kubelet_client.crt
    client-key: /var/lib/kubernetes/kubelet_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /var/lib/kubernetes/ca.crt
    server: https://192.168.73.100:6443
contexts:
- context:
    cluster: local
    user: kubelet
  name: my-context
current-context: my-context
[root@kubernetes-node2 kubernetes]#
[root@kubernetes-node2 kubernetes]# systemctl restart kubelet
[root@kubernetes-node2 kubernetes]# systemctl status kubelet
[root@kubernetes-node2 kubernetes]# systemctl restart kube-proxy
[root@kubernetes-node2 kubernetes]# systemctl status kube-proxy
[root@k8s kubernetes]# kubectl --server=https://192.168.73.100:6443 --certificate-authority=/var/lib/kubernetes/ca.crt --client-certificate=/var/lib/kubernetes/cs_client.crt --client-key=/var/lib/kubernetes/cs_client.key get nodes
NAME STATUS ROLES AGE VERSION
192.168.73.100 Ready <none> 11m v1.17.3
192.168.73.101 Ready <none> 8m57s v1.17.3
192.168.73.102 Ready <none> 7m32s v1.17.3
Deploying flannel
Set the network key in etcd on the master, and update the etcd configuration file
[root@k8s ~]# vi /etc/etcd/etcd.conf
[root@k8s ~]# cat /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_CLIENT_URLS="http://192.168.73.100:2379,http://127.0.0.1:2379"
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.73.100:2379"
[root@k8s ~]# systemctl restart etcd
[root@k8s ~]# systemctl status etcd
etcdctl \
--endpoints="http://192.168.73.100:2379,http://127.0.0.1:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
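Confirm the key was written (etcdctl v2 API, matching the set command above):
etcdctl --endpoints="http://192.168.73.100:2379" get /coreos.com/network/config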
Extract the archive
[root@k8s data]# mkdir /data/flannel
[root@k8s data]# cd /data/
[root@k8s data]# tar zxf flannel-v0.11.0-linux-amd64.tar.gz -C ./flannel
[root@k8s data]# cd flannel
[root@k8s flannel]# ls
flanneld mk-docker-opts.sh README.md
Configure flannel
[root@k8s flannel]# vi /etc/kubernetes/flanneld
[root@k8s flannel]# cat /etc/kubernetes/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=http://192.168.73.100:2379,http://127.0.0.1:2379 \
-etcd-prefix=/coreos.com/network"
Create the flanneld systemd unit file
[root@k8s flannel]# vi /usr/lib/systemd/system/flanneld.service
[root@k8s flannel]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/kubernetes/flanneld
ExecStart=/data/flannel/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/data/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start the service
[root@k8s flannel]# systemctl daemon-reload
[root@k8s flannel]# systemctl start flanneld
[root@k8s flannel]# systemctl enable flanneld
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
[root@k8s flannel]# systemctl status flanneld
[root@k8s flannel]# ps -ef |grep flanneld
root 4807 1 0 17:38 ? 00:00:00 /data/flannel/flanneld --ip-masq --etcd-endpoints=http://192.168.73.100:2379,http://127.0.0.1:2379 -etcd-prefix=/coreos.com/network
root 5010 1714 0 17:38 pts/0 00:00:00 grep --color=auto flanneld
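Once flanneld is running, the VXLAN device and the subnet lease can be inspected (optional check):
ip -d addr show flannel.1     # VXLAN interface holding this node's lease address
cat /run/flannel/subnet.env   # docker options generated by mk-docker-opts.sh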
Configure docker to start on the flannel-assigned subnet: in the docker unit file, add EnvironmentFile=/run/flannel/subnet.env and change ExecStart to /usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
[root@k8s flannel]# vi /usr/lib/systemd/system/docker.service
[root@k8s ~]# cat /usr/lib/systemd/system/docker.service|grep -v '#'|grep -v '^$'
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
[root@k8s flannel]# systemctl daemon-reload
[root@k8s flannel]# systemctl restart docker
[root@k8s flannel]#
[root@k8s flannel]# systemctl status docker
Copy the flannel-related files to all nodes
[root@k8s flannel]# scp /usr/lib/systemd/system/flanneld.service 192.168.73.101:/usr/lib/systemd/system/flanneld.service
root@192.168.73.101's password:
flanneld.service 100% 401 388.4KB/s 00:00
[root@k8s flannel]# scp /usr/lib/systemd/system/flanneld.service 192.168.73.102:/usr/lib/systemd/system/flanneld.service
root@192.168.73.102's password:
flanneld.service 100% 401 205.1KB/s 00:00
[root@k8s flannel]# scp /etc/kubernetes/flanneld 192.168.73.101:/etc/kubernetes/
root@192.168.73.101's password:
flanneld 100% 119 98.1KB/s 00:00
[root@k8s flannel]# scp /etc/kubernetes/flanneld 192.168.73.102:/etc/kubernetes/
root@192.168.73.102's password:
flanneld 100% 119 101.5KB/s 00:00
[root@k8s flannel]#
[root@k8s-node1 ~]# mkdir /data/flannel
[root@k8s-node1 ~]# tar -xf /data/flannel-v0.11.0-linux-amd64.tar.gz -C /data/flannel
[root@k8s-node1 ~]# ls /data/flannel
flanneld mk-docker-opts.sh README.md
[root@k8s-node2 ~]# mkdir /data/flannel
[root@k8s-node2 ~]# tar -xf /data/flannel-v0.11.0-linux-amd64.tar.gz -C /data/flannel
[root@k8s-node2 ~]# ls /data/flannel
flanneld mk-docker-opts.sh README.md
[root@k8s-node1 ~]# vi /etc/kubernetes/flanneld
[root@k8s-node1 ~]# cat /etc/kubernetes/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=http://192.168.73.100:2379 \
-etcd-prefix=/coreos.com/network"
(In the final working setup, the etcd options on the nodes were moved directly into ExecStart and FLANNEL_OPTIONS was left empty, as shown below.)
[root@k8s-node2 flannel]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/kubernetes/flanneld
ExecStart=/data/flannel/flanneld --ip-masq --etcd-endpoints=http://192.168.73.100:2379 -etcd-prefix=/coreos.com/network
ExecStartPost=/data/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
[root@k8s-node2 flannel]#
[root@k8s-node2 flannel]# cat /etc/kubernetes/flanneld
FLANNEL_OPTIONS=""
[root@k8s-node1 flannel]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/etc/kubernetes/flanneld
ExecStart=/data/flannel/flanneld --ip-masq --etcd-endpoints=http://192.168.73.100:2379 -etcd-prefix=/coreos.com/network
ExecStartPost=/data/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
[root@k8s-node1 flannel]# cat /etc/kubernetes/flanneld
FLANNEL_OPTIONS=""
Run on every node:
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
systemctl status flanneld
systemctl restart docker
systemctl status docker
Verify the services
Check /run/flannel/subnet.env: flannel assigned docker the subnet --bip=172.17.80.1/24
[root@k8s flannel]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.80.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.80.1/24 --ip-masq=false --mtu=1450"
[root@k8s flannel]#
ifconfig shows docker0 at 172.17.80.1 and flannel.1 at 172.17.80.0: packets sent from other nodes to containers on this node are captured by flannel.1, decapsulated, and handed to docker0, which then talks to the local containers.
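The same check with the ip command (expected values on this master, per subnet.env above):
ip addr show docker0      # expect 172.17.80.1/24
ip addr show flannel.1    # expect 172.17.80.0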
Test cross-node container communication; containers on different nodes can reach each other normally.
On the master:
[root@k8s k8s]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
coredns/coredns 1.6.2 bf261d157914 8 months ago 44.1MB
busybox 1.28 8c811b4aec35 23 months ago 1.15MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 2 years ago 742kB
[root@k8s k8s]#
[root@k8s k8s]# docker run -it busybox:1.28
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:ac:11:50:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.80.2/24 brd 172.17.80.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ping 172.17.29.2
PING 172.17.29.2 (172.17.29.2): 56 data bytes
64 bytes from 172.17.29.2: seq=0 ttl=62 time=1.117 ms
64 bytes from 172.17.29.2: seq=1 ttl=62 time=1.195 ms
^C
--- 172.17.29.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.117/1.156/1.195 ms
On node1:
[root@k8s-node1 data]# docker run -it busybox:1.28
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
link/ether 02:42:ac:11:1d:02 brd ff:ff:ff:ff:ff:ff
inet 172.17.29.2/24 brd 172.17.29.255 scope global eth0
valid_lft forever preferred_lft forever
/ # ping 172.17.80.2
PING 172.17.80.2 (172.17.80.2): 56 data bytes
64 bytes from 172.17.80.2: seq=0 ttl=62 time=2.524 ms
64 bytes from 172.17.80.2: seq=1 ttl=62 time=1.423 ms
^C
--- 172.17.80.2 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.423/1.973/2.524 ms
Check the subnets registered in etcd
[root@k8s k8s]# etcdctl --endpoints="http://192.168.73.100:2379,https://127.0.0.1:2379" ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.17.14.0-24
/coreos.com/network/subnets/172.17.29.0-24
/coreos.com/network/subnets/172.17.80.0-24
[root@k8s k8s]# etcdctl --endpoints="http://192.168.73.100:2379,https://127.0.0.1:2379" get /coreos.com/network/subnets/172.17.14.0-24
{"PublicIP":"192.168.73.133","BackendType":"vxlan","BackendData":{"VtepMAC":"ba:39:21:76:73:b1"}}
•PublicIP: the node's IP address
•BackendType: the backend type
•VtepMAC: the virtual MAC address
Check the routing tables
[root@k8s k8s]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 ens38
172.17.14.0 172.17.14.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.29.0 172.17.29.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.80.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
192.168.73.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
192.168.73.0 0.0.0.0 255.255.255.0 U 101 0 0 ens38
[root@k8s k8s]#
[root@k8s-node1 data]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 ens38
172.17.14.0 172.17.14.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.29.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
172.17.80.0 172.17.80.0 255.255.255.0 UG 0 0 0 flannel.1
192.168.73.0 0.0.0.0 255.255.255.0 U 100 0 0 ens38
192.168.73.0 0.0.0.0 255.255.255.0 U 101 0 0 ens33
[root@k8s-node1 data]#
[root@k8s-node2 data]# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default gateway 0.0.0.0 UG 100 0 0 ens38
172.17.14.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
172.17.29.0 172.17.29.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.80.0 172.17.80.0 255.255.255.0 UG 0 0 0 flannel.1
192.168.73.0 0.0.0.0 255.255.255.0 U 100 0 0 ens33
192.168.73.0 0.0.0.0 255.255.255.0 U 101 0 0 ens38
[root@k8s-node2 data]#
[root@k8s kubernetes]# kubectl --server=https://192.168.73.100:6443 --certificate-authority=/var/lib/kubernetes/ca.crt --client-certificate=/var/lib/kubernetes/cs_client.crt --client-key=/var/lib/kubernetes/cs_client.key get nodes
NAME STATUS ROLES AGE VERSION
192.168.73.100 Ready <none> 11m v1.17.3
192.168.73.101 Ready <none> 8m57s v1.17.3
192.168.73.102 Ready <none> 7m32s v1.17.3
Set an alias for convenient operation
[root@k8s kubernetes]# vi /root/.bashrc
[root@k8s kubernetes]# tail -1 /root/.bashrc
alias kubectl='kubectl --server=https://192.168.73.100:6443 --certificate-authority=/var/lib/kubernetes/ca.crt --client-certificate=/var/lib/kubernetes/cs_client.crt --client-key=/var/lib/kubernetes/cs_client.key'
[root@k8s kubernetes]# source /root/.bashrc
[root@k8s kubernetes]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
192.168.73.100 Ready <none> 17m v1.17.3
192.168.73.101 Ready <none> 15m v1.17.3
192.168.73.102 Ready <none> 13m v1.17.3
For coredns deployment, refer to the blog post 《k8s集群dns(coredns)搭建》, the section that creates coredns directly.
Add the following kubelet startup parameters:
--cluster_dns=169.169.0.10 \   # the coredns DNS service address
--cluster_domain=cluster.local"
This specifies the DNS address directly for in-cluster name resolution.
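Once coredns is up, resolution can be verified from a pod (a sketch, reusing the busybox:1.28 image pulled earlier; the pod name dns-test is arbitrary):
kubectl run dns-test --image=busybox:1.28 --restart=Never -- sleep 3600
kubectl exec dns-test -- nslookup kubernetes.default
kubectl delete pod dns-test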