K8s + IPVS + CoreDNS (CentOS 7.4)
K8s video course: https://item.taobao.com/item.htm?id=59186429591
Table of Contents
Cluster nodes
Network layout
1.1 Upgrade the kernel
1.2 Install Docker on all nodes
1.3 Install ipvsadm on all nodes
4.1 Create directories to organize the service files
4.2 Create the files required by the nodes
(1) Create the kubelet directory on ingest01 and bigdata01
(2) Create the node-service files
4.3 Create the master service files
4.4 Create the etcd service file
5.1 Distribute the node service files
5.2 Distribute the master service files
5.3 Distribute the etcd service file
6.1 Generate the files
6.2 Generate the common certificates and kubeconfig files
6.3 Distribute the certificate files to all nodes
7.1 Start the etcd node services
7.2 Start the master node services
7.3 Start the node services
8.1 Deploy the kube-router component
8.2 Deploy kube-dashboard
8.3 Deploy CoreDNS
8.4 Push images to the private registry
(1) Retag the images
(2) Push the images
9 Log in to the Dashboard
(1) Check the Dashboard port
(2) Access the Dashboard
Cluster nodes:
IP address | Role | Hostname |
192.168.1.18 | master01 | ingest01 |
192.168.1.21 | node01 | bigdata01 |
192.168.1.22 | etcd01 | etcd01 |
Network layout:
Network | CIDR range |
Cluster (pod) network | 172.20.0.0/16 |
Service network | 172.21.0.0/16 |
Physical network | 192.168.1.0/24 |
1. Upgrade the kernel and install Docker 1.13.1 on all nodes
1.1 Upgrade the kernel
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm;
yum --enablerepo=elrepo-kernel install kernel-lt-devel kernel-lt -y
# List the boot entries and their default order
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (4.4.4-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-327.10.1.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-c52097a1078c403da03b8eddeac5080b) 7 (Core)
# GRUB entries are indexed from 0 and the newly installed kernel is inserted at the top of the list (the 4.4.x entry above is at index 0, the old 3.10 kernel at index 1), so set the default to 0.
grub2-set-default 0
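Before rebooting, you can optionally confirm which entry GRUB will boot (a small verification sketch; exact output varies by system):
# Show the saved default entry; after grub2-set-default 0 it should report saved_entry=0
grub2-editenv list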
Configure /etc/hosts entries on all nodes
vim /etc/hosts
192.168.1.18 ingest01
192.168.1.21 bigdata01
192.168.1.22 etcd01
Set the hostname on each node
On 192.168.1.18:
hostnamectl set-hostname ingest01
On 192.168.1.21:
hostnamectl set-hostname bigdata01
On 192.168.1.22:
hostnamectl set-hostname etcd01
# Reboot
reboot
# After the reboot, verify the kernel was upgraded to 4.4
uname -a
Linux ingest01 4.4.128-1.el7.elrepo.x86_64 #1 SMP Sat Apr 14 06:57:19 EDT 2018 x86_64 x86_64 x86_64 GNU/Linux
1.2 Install Docker on all nodes
# Install Docker
yum install docker-common docker-client docker -y
1.3 Install ipvsadm on all nodes
yum install ipvsadm -y
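kube-router's IPVS service proxy (deployed in section 8.1) depends on the ip_vs kernel modules. The original steps do not load them explicitly, so here is a hedged sketch for loading them on every node (assuming the 4.4 kernel installed above provides them as modules):
# Load the IPVS-related modules now
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
modprobe nf_conntrack_ipv4
# And have systemd load them on boot
cat >/etc/modules-load.d/ipvs.conf <<'EOF'
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF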
2. Prepare the k8s node, master, etcd, and flanneld binaries
#### Note: all files are distributed from the master node ingest01; configure SSH trust from ingest01 to every machine (a sketch follows below)
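A minimal sketch of setting up that SSH trust from ingest01 (assuming root password login is still enabled on the other machines; adjust to your environment):
# On ingest01: create a key pair if one does not exist, then copy it to every node
ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
for node in ingest01 bigdata01 etcd01; do
ssh-copy-id root@${node}
done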
[root@ingest01 ~]#
wget https://dl.k8s.io/v1.9.0/kubernetes-server-linux-amd64.tar.gz
wget https://github.com/coreos/etcd/releases/download/v3.2.11/etcd-v3.2.11-linux-amd64.tar.gz
wget https://github.com/coreos/flannel/releases/download/v0.9.0/flannel-v0.9.0-linux-amd64.tar.gz
3. Distribute all binaries
3.1 Extract the archives
tar xvf kubernetes-server-linux-amd64.tar.gz &&
tar xvf etcd-v3.2.11-linux-amd64.tar.gz &&
tar xvf flannel-v0.9.0-linux-amd64.tar.gz
3.2 Create directories for the node, master, and etcd binaries and sort them
mkdir -p /root/kubernetes/server/bin/{node,master,etcd}
\cp -f /root/kubernetes/server/bin/kubelet /root/kubernetes/server/bin/node/
\cp -f /root/mk-docker-opts.sh /root/kubernetes/server/bin/node/
\cp -f /root/flanneld /root/kubernetes/server/bin/node/
\cp -f /root/kubernetes/server/bin/kube-* /root/kubernetes/server/bin/master/
\cp -f /root/kubernetes/server/bin/kubelet /root/kubernetes/server/bin/master/
\cp -f /root/kubernetes/server/bin/kubectl /root/kubernetes/server/bin/master/
\cp -f /root/etcd-v3.2.11-linux-amd64/etcd* /root/kubernetes/server/bin/etcd/
3.3 Distribute the node and flanneld binaries to ingest01 and bigdata01
for node in ingest01 bigdata01;do
scp /root/kubernetes/server/bin/node/* ${node}:/usr/local/bin/
done
3.4 Distribute the master binaries
\cp /root/kubernetes/server/bin/master/* /usr/local/bin/
3.5 Distribute the etcd binaries
scp /root/kubernetes/server/bin/etcd/* etcd01:/usr/local/bin/
4. Create the systemd service files for the cluster
4.1 Create directories to organize the service files
mkdir -p /root/kubernetes/server/bin/{node-service,master-service,etcd-service,docker-service,ssl}
4.2 Create the files required by the nodes
(1) Create the kubelet directory on ingest01 and bigdata01
for node in ingest01 bigdata01;do
ssh root@$node "mkdir -p /var/lib/kubelet"
done
(2) Create the node-service files
Generate docker.service, kubelet.service, and flanneld.service under /root/kubernetes/server/bin/node-service/.
[Note]
1. In kubelet.service, set --address and --hostname-override to the IP address and hostname of the server the unit runs on;
in flanneld.service, set -etcd-endpoints to the etcd server address and -iface to the local network interface name (e.g. eth0 or ens33).
2. On ingest01 and bigdata01, create the directory: mkdir -p /var/lib/kubelet
3. A local Harbor registry must be set up and the required images pushed to it (see section 8.4).
Pull the pause image: docker pull registry.access.redhat.com/rhel7/pod-infrastructure:latest
Run the following:
#docker.service
cat >/root/kubernetes/server/bin/node-service/docker.service <<'HERE'
[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target
Wants=docker-storage-setup.service
Requires=docker-cleanup.timer
[Service]
Type=notify
NotifyAccess=all
KillMode=process
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
EnvironmentFile=/run/flannel/docker
Environment=GOTRACEBACK=crash
Environment=DOCKER_HTTP_HOST_COMPAT=1
Environment=PATH=/usr/libexec/docker:/usr/bin:/usr/sbin
ExecStart=/usr/bin/dockerd-current $DOCKER_NETWORK_OPTIONS \
--add-runtime docker-runc=/usr/libexec/docker/docker-runc-current \
--default-runtime=docker-runc \
--exec-opt native.cgroupdriver=systemd \
--userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
$OPTIONS \
$DOCKER_STORAGE_OPTIONS \
$DOCKER_NETWORK_OPTIONS \
$ADD_REGISTRY \
$BLOCK_REGISTRY \
$INSECURE_REGISTRY
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal
MountFlags=slave
[Install]
WantedBy=multi-user.target
HERE
#----------
#kubelet.service
cat >/root/kubernetes/server/bin/node-service/kubelet.service <<'HERE'
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
--address=192.168.1.18 \
--hostname-override=ingest01 \
--pod-infra-container-image=192.168.1.18/public/pod-infrastructure \
--experimental-bootstrap-kubeconfig=/etc/kubernetes/ssl/bootstrap.kubeconfig \
--kubeconfig=/etc/kubernetes/ssl/kubelet.kubeconfig \
--cert-dir=/etc/kubernetes/ssl \
--hairpin-mode promiscuous-bridge \
--allow-privileged=true \
--serialize-image-pulls=false \
--logtostderr=true \
--cgroup-driver=systemd \
--cluster_dns=172.21.0.2 \
--cluster_domain=cluster.local \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
HERE
#----------
#flanneld.service
cat >/root/kubernetes/server/bin/node-service/flanneld.service <<'HERE'
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service
[Service]
Type=notify
ExecStart=/usr/local/bin/flanneld \
-etcd-cafile=/etc/kubernetes/ssl/k8s-root-ca.pem \
-etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
-etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
-etcd-endpoints=https://192.168.1.22:2379 \
-etcd-prefix=/kubernetes/network \
-iface=ens33
ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker
Restart=on-failure
[Install]
WantedBy=multi-user.target
RequiredBy=docker.service
HERE
4.3 Create the master service files
Create kube-apiserver.service, kube-controller-manager.service, and kube-scheduler.service in /root/kubernetes/server/bin/master-service/.
[Note]
In kube-apiserver.service, set --advertise-address and --bind-address to the IP address of the server the unit runs on; set --etcd-servers to the address and port of the etcd server.
Run the following:
#kube-apiserver.service
cat >/root/kubernetes/server/bin/master-service/kube-apiserver.service <<'HERE'
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
ExecStart=/usr/local/bin/kube-apiserver \
--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction \
--advertise-address=192.168.1.18 \
--bind-address=192.168.1.18 \
--insecure-bind-address=0.0.0.0 \
--kubelet-https=true \
--runtime-config=rbac.authorization.k8s.io/v1beta1 \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/etc/kubernetes/ssl/token.csv \
--service-cluster-ip-range=172.21.0.0/16 \
--service-node-port-range=300-9000 \
--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--service-account-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
--etcd-cafile=/etc/kubernetes/ssl/k8s-root-ca.pem \
--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \
--etcd-servers=https://192.168.1.22:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/lib/audit.log \
--event-ttl=1h \
--v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
HERE
#----------
#kube-controller-manager.service
cat >/root/kubernetes/server/bin/master-service/kube-controller-manager.service <<'HERE'
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
--address=127.0.0.1 \
--master=http://127.0.0.1:8080 \
--allocate-node-cidrs=true \
--service-cluster-ip-range=172.21.0.0/16 \
--cluster-cidr=172.20.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/k8s-root-ca-key.pem \
--root-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--leader-elect=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
HERE
#----------
#kube-scheduler.service
cat >/root/kubernetes/server/bin/master-service/kube-scheduler.service <<'HERE'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
ExecStart=/usr/local/bin/kube-scheduler \
--address=127.0.0.1 \
--master=http://127.0.0.1:8080 \
--leader-elect=true \
--v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
HERE
4.4 Create the etcd service file
Create etcd.service in the /root/kubernetes/server/bin/etcd-service/ directory.
[Note]
1. Create the /var/lib/etcd/ directory on the etcd01 server:
mkdir -p /var/lib/etcd/
2. In etcd.service, set --initial-advertise-peer-urls, --listen-peer-urls, --listen-client-urls, --advertise-client-urls, and --initial-cluster to the etcd01 server address.
Run the following:
cat >/root/kubernetes/server/bin/etcd-service/etcd.service <<'HERE'
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
--name=etcd01 \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--initial-advertise-peer-urls=https://192.168.1.22:2380 \
--listen-peer-urls=https://192.168.1.22:2380 \
--listen-client-urls=https://192.168.1.22:2379,http://127.0.0.1:2379 \
--advertise-client-urls=https://192.168.1.22:2379 \
--initial-cluster-token=etcd-cluster-0 \
--initial-cluster=etcd01=https://192.168.1.22:2380 \
--initial-cluster-state=new \
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
HERE
5. Distribute the service files
5.1 Distribute the node service files
# Note: adjust the hostname and IP inside the service files; they differ per node
for node in bigdata01 ingest01;do
scp /root/kubernetes/server/bin/node-service/* ${node}:/lib/systemd/system/
done
5.2 Distribute the master service files
# Note: adjust the hostname and IP inside the service files; they differ per node
\cp /root/kubernetes/server/bin/master-service/* /lib/systemd/system/
5.3 Distribute the etcd service file
# Note: adjust the hostname and IP inside the service file; they differ per node
scp /root/kubernetes/server/bin/etcd-service/* etcd01:/lib/systemd/system/
6. Create and distribute the cluster certificate files
6.1 Generate the files
# Install CFSSL
vim /etc/profile
Add the following line:
export PATH=/usr/local/bin:$PATH
Generate admin-csr.json, k8s-gencert.json, k8s-root-ca-csr.json, kube-proxy-csr.json, and kubernetes-csr.json in /root/kubernetes/server/bin/ssl/.
Run the following:
# Install directly from the prebuilt binaries
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
chmod +x cfssl_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
chmod +x cfssljson_linux-amd64
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl-certinfo_linux-amd64
mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo
#admin-csr.json
cat >/root/kubernetes/server/bin/ssl/admin-csr.json <<'HERE'
{
"CN": "admin",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Shenzhen",
"L": "Shenzhen",
"O": "system:masters",
"OU": "System"
}
]
}
HERE
#----------
#k8s-gencert.json
cat >/root/kubernetes/server/bin/ssl/k8s-gencert.json <<'HERE'
{
"signing": {
"default": {
"expiry": "87600h"
},
"profiles": {
"kubernetes": {
"usages": [
"signing",
"key encipherment",
"server auth",
"client auth"
],
"expiry": "87600h"
}
}
}
}
HERE
#----------
#k8s-root-ca-csr.json
cat >/root/kubernetes/server/bin/ssl/k8s-root-ca-csr.json <<'HERE'
{
"CN": "kubernetes",
"key": {
"algo": "rsa",
"size": 4096
},
"names": [
{
"C": "CN",
"ST": "Shenzhen",
"L": "Shenzhen",
"O": "k8s",
"OU": "System"
}
]
}
HERE
#----------
#kube-proxy-csr.json
cat >/root/kubernetes/server/bin/ssl/kube-proxy-csr.json <<'HERE'
{
"CN": "system:kube-proxy",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Shenzhen",
"L": "Shenzhen",
"O": "k8s",
"OU": "System"
}
]
}
HERE
#----------
#Note: the hosts list must include the first IP of the service network (172.21.0.1), the etcd node IP, and the k8s master node IPs
cat >/root/kubernetes/server/bin/ssl/kubernetes-csr.json <<'HERE'
{
"CN": "kubernetes",
"hosts": [
"127.0.0.1",
"192.168.1.18",
"192.168.1.21",
"192.168.1.22",
"172.21.0.1",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local"
],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "Shenzhen",
"L": "Shenzhen",
"O": "k8s",
"OU": "System"
}
]
}
HERE
6.2 Generate the common certificates and kubeconfig files
#Enter the ssl directory
cd /root/kubernetes/server/bin/ssl/
# Generate the certificates
cfssl gencert --initca=true k8s-root-ca-csr.json | cfssljson --bare k8s-root-ca
for targetName in kubernetes admin kube-proxy; do
cfssl gencert --ca k8s-root-ca.pem --ca-key k8s-root-ca-key.pem --config k8s-gencert.json --profile kubernetes $targetName-csr.json | cfssljson --bare $targetName
done
# Generate the configs
#Note: this defines the API server service address. In an HA setup this would be the VIP; if your master is a single node, simply use that master's IP:
Add the following to /etc/profile:
export KUBE_APISERVER="https://192.168.1.18:6443"
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
After saving, run source /etc/profile and then echo "Token: ${BOOTSTRAP_TOKEN}" to confirm the token was generated.
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
echo "Create kubelet bootstrapping kubeconfig..."
kubectl config set-cluster kubernetes \
--certificate-authority=k8s-root-ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
echo "Create kube-proxy kubeconfig..."
kubectl config set-cluster kubernetes \
--certificate-authority=k8s-root-ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=kube-proxy.pem \
--client-key=kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
# Generate the advanced audit policy
cat >> audit-policy.yaml <<EOF
# Log all requests at the Metadata level.
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF
# Generate the cluster-admin kubeconfig used by kubectl
# admin set-cluster
kubectl config set-cluster kubernetes \
--certificate-authority=k8s-root-ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=./kubeconfig
#----------
# admin set-credentials
kubectl config set-credentials kubernetes-admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=./kubeconfig
# admin set-context
kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=./kubeconfig
# admin set default context
kubectl config use-context kubernetes-admin@kubernetes \
--kubeconfig=./kubeconfig
6.3 Distribute the certificate files to all nodes
#Create the ssl directory
for node in {bigdata01,ingest01,etcd01};do
ssh ${node} "mkdir -p /etc/kubernetes/ssl/ "
done
#Distribute the files
for ssl in {bigdata01,ingest01,etcd01};do
scp /root/kubernetes/server/bin/ssl/* ${ssl}:/etc/kubernetes/ssl/
done
#Create /root/.kube on the master and copy in the admin kubeconfig
mkdir -p /root/.kube ; \cp -f /etc/kubernetes/ssl/kubeconfig /root/.kube/config
7. Start the services on all nodes and verify them
*Note: before starting, confirm the configuration files have been edited correctly and that directories such as /var/lib/etcd and /var/lib/kubelet have all been created.
Disable the firewall and SELinux enforcement on all nodes:
systemctl disable firewalld
systemctl stop firewalld
setenforce 0
7.1 Start the etcd node services
#Start the etcd cluster
ssh etcd01 "systemctl daemon-reload && systemctl start etcd && systemctl enable etcd"
#Check cluster health
etcdctl \
--ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
cluster-health
#Set the cluster (flannel) network range
etcdctl --endpoints=https://192.168.1.22:2379 \
--ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
mkdir /kubernetes/network
etcdctl --endpoints=https://192.168.1.22:2379 \
--ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
mk /kubernetes/network/config '{ "Network": "172.20.0.0/16", "Backend": { "Type": "vxlan", "VNI": 1 }}'
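To confirm flanneld will be able to read the network configuration, the key can be read back with the same etcdctl v2 flags (a verification sketch):
etcdctl --endpoints=https://192.168.1.22:2379 \
--ca-file=/etc/kubernetes/ssl/k8s-root-ca.pem \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
get /kubernetes/network/config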
7.2 Start the master node services
Edit /etc/sysconfig/docker:
change the --selinux-enabled option to --selinux-enabled=false.
Run swapoff -a and make the change persistent across reboots (a sketch follows below).
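One way to keep swap disabled after a reboot is to comment out the swap entry in /etc/fstab (a sketch only; review the file before running this on your systems):
swapoff -a
# Comment out any active swap line so it is not re-mounted on boot
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab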
# Run on the master to grant the kubelet-bootstrap role
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
systemctl daemon-reload && systemctl start flanneld docker kube-apiserver kube-controller-manager kube-scheduler kubelet && systemctl enable flanneld docker kube-apiserver kube-controller-manager kube-scheduler kubelet
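After the master services are up, a quick sanity check of the control plane (commands only; a healthy cluster should report the scheduler, controller-manager, and etcd as Healthy):
kubectl get cs
kubectl get nodes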
7.3 Start the node services
ssh bigdata01 "systemctl daemon-reload && systemctl start flanneld docker kubelet && systemctl enable flanneld docker kubelet "
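Because the kubelets use TLS bootstrapping (--experimental-bootstrap-kubeconfig), their certificate signing requests usually have to be approved on the master before the nodes become Ready. The original text does not show this step, so treat the following as a hedged sketch:
# List pending CSRs submitted by the bootstrapping kubelets
kubectl get csr
# Approve all pending requests (or approve individual names from the list above)
kubectl get csr | grep Pending | awk '{print $1}' | xargs kubectl certificate approve
# The nodes should now register and move to Ready
kubectl get nodes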
8. Deploy kube-router (IPVS) to replace kube-proxy, kube-dashboard, and CoreDNS to replace kube-dns
8.1 Deploy the kube-router component
#Image: docker.io/cloudnativelabs/kube-router:latest
[root@ingest01 ~]#docker pull docker.io/cloudnativelabs/kube-router:latest
[root@ingest01~]#vim kube-router.yaml
Save the following content:
apiVersion: v1
kind: ConfigMap
metadata:
name: kube-router-cfg
namespace: kube-system
labels:
tier: node
k8s-app: kube-router
data:
cni-conf.json: |
{
"name":"kubernetes",
"type":"bridge",
"bridge":"kube-bridge",
"isDefaultGateway":true,
"ipam": {
"type":"host-local"
}
}
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
labels:
k8s-app: kube-router
tier: node
name: kube-router
namespace: kube-system
spec:
template:
metadata:
labels:
k8s-app: kube-router
tier: node
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
serviceAccountName: kube-router
serviceAccount: kube-router
containers:
- name: kube-router
image: k8s-registry.local/public/kube-router:latest
imagePullPolicy: Always
args:
- --run-router=true
- --run-firewall=true
- --run-service-proxy=true
- --kubeconfig=/var/lib/kube-router/kubeconfig
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
resources:
requests:
cpu: 250m
memory: 250Mi
securityContext:
privileged: true
volumeMounts:
- name: lib-modules
mountPath: /lib/modules
readOnly: true
- name: cni-conf-dir
mountPath: /etc/cni/net.d
- name: kubeconfig
mountPath: /var/lib/kube-router/kubeconfig
- name: run
mountPath: /var/run/docker.sock
readOnly: true
initContainers:
- name: install-cni
image: k8s-registry.local/public/busybox:latest
imagePullPolicy: Always
command:
- /bin/sh
- -c
- set -e -x;
if [ ! -f /etc/cni/net.d/10-kuberouter.conf ]; then
TMP=/etc/cni/net.d/.tmp-kuberouter-cfg;
cp /etc/kube-router/cni-conf.json ${TMP};
mv ${TMP} /etc/cni/net.d/10-kuberouter.conf;
fi
volumeMounts:
- name: cni-conf-dir
mountPath: /etc/cni/net.d
- name: kube-router-cfg
mountPath: /etc/kube-router
hostNetwork: true
hostIPC: true
hostPID: true
tolerations:
- key: CriticalAddonsOnly
operator: Exists
- effect: NoSchedule
key: node-role.kubernetes.io/master
operator: Exists
volumes:
- name: lib-modules
hostPath:
path: /lib/modules
- name: cni-conf-dir
hostPath:
path: /etc/cni/net.d
- name: run
hostPath:
path: /var/run/docker.sock
- name: kube-router-cfg
configMap:
name: kube-router-cfg
- name: kubeconfig
hostPath:
path: /etc/kubernetes/ssl/kubeconfig
# configMap:
# name: kube-proxy
# items:
# - key: kubeconfig.conf
# path: kubeconfig
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kube-router
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: kube-router
namespace: kube-system
rules:
- apiGroups:
- ""
resources:
- namespaces
- pods
- services
- nodes
- endpoints
verbs:
- list
- get
- watch
- apiGroups:
- "networking.k8s.io"
resources:
- networkpolicies
verbs:
- list
- get
- watch
- apiGroups:
- extensions
resources:
- networkpolicies
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
name: kube-router
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kube-router
subjects:
- kind: ServiceAccount
name: kube-router
namespace: kube-system
Run the create command:
kubectl create -f kube-router.yaml
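To verify kube-router started and is programming IPVS rules (a verification sketch; pod names will differ in your cluster):
# The DaemonSet should have one kube-router pod per node
kubectl -n kube-system get pods -o wide | grep kube-router
# On any node, cluster service IPs should now show up as IPVS virtual servers
ipvsadm -Ln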
8.2 Deploy kube-dashboard
#Image:
registry.docker-cn.com/kubernetesdashboarddev/kubernetes-dashboard-amd64:head
Run locally:
docker pull registry.docker-cn.com/kubernetesdashboarddev/kubernetes-dashboard-amd64:head
[root@ingest01~]#vim dashboard.yaml
Save the following content:
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: kubernetes-dashboard
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
labels:
k8s-app: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
spec:
serviceAccountName: kubernetes-dashboard
containers:
- name: kubernetes-dashboard
image: k8s-registry.local/public/kubernetes-dashboard-amd64:1.8.0
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
memory: 300Mi
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 9090
livenessProbe:
httpGet:
path: /
port: 9090
initialDelaySeconds: 30
timeoutSeconds: 30
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
----------
After saving, run the create command:
kubectl create -f dashboard.yaml
[root@ingest01~]#vim dashboard-svc.yaml
Save the following content:
----------
apiVersion: v1
kind: Service
metadata:
name: kubernetes-dashboard
namespace: kube-system
labels:
k8s-app: kubernetes-dashboard
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
selector:
k8s-app: kubernetes-dashboard
type: NodePort
ports:
- port: 9090
targetPort: 9090
nodePort: 8601
After saving, run the create command:
kubectl create -f dashboard-svc.yaml
8.3 Deploy CoreDNS
#Image: registry.docker-cn.com/coredns/coredns:0.9.10
Run: docker pull registry.docker-cn.com/coredns/coredns:0.9.10
[root@ingest01 ~]#vim coredns.yaml
Save the following content:
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
labels:
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: Reconcile
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
addonmanager.kubernetes.io/mode: EnsureExists
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
labels:
addonmanager.kubernetes.io/mode: EnsureExists
data:
Corefile: |
.:53 {
errors
log stdout
health
kubernetes cluster.local 172.21.0.0/16
prometheus
proxy . /etc/resolv.conf
cache 30
}
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: coredns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
replicas: 1
selector:
matchLabels:
k8s-app: coredns
template:
metadata:
labels:
k8s-app: coredns
spec:
serviceAccountName: coredns
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
- key: "CriticalAddonsOnly"
operator: "Exists"
containers:
- name: coredns
image: k8s-registry.local/public/coredns:0.9.10
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: coredns
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
kubernetes.io/name: "CoreDNS"
spec:
selector:
k8s-app: coredns
clusterIP: 172.21.0.2
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
protocol: TCP
----------
After saving, run the create command:
kubectl create -f coredns.yaml
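A quick way to confirm CoreDNS is serving cluster DNS is to resolve the kubernetes service from a throwaway pod (a hedged sketch that reuses the busybox image pushed to the private registry in section 8.4):
# The CoreDNS pod should be Running
kubectl -n kube-system get pods -l k8s-app=coredns
# Resolve the kubernetes service through the cluster DNS (172.21.0.2)
kubectl run -it --rm --restart=Never dns-test --image=192.168.1.18/public/busybox -- nslookup kubernetes.default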
8.4 Push images to the private registry
(1) Retag the images
docker tag registry.access.redhat.com/rhel7/pod-infrastructure 192.168.1.18/public/pod-infrastructure
docker tag docker.io/cloudnativelabs/kube-router 192.168.1.18/public/kube-router
docker tag registry.docker-cn.com/kubernetesdashboarddev/kubernetes-dashboard-amd64:head 192.168.1.18/public/kubernetes-dashboard-amd64:1.8.0
docker tag registry.docker-cn.com/coredns/coredns:0.9.10 192.168.1.18/public/coredns:0.9.10
docker tag docker.io/busybox 192.168.1.18/public/busybox
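Pushing to the Harbor registry normally requires logging in first (a sketch; use the Harbor account configured in your environment when prompted):
# Log in to the private registry before pushing
docker login 192.168.1.18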
(2) Push the images
docker push 192.168.1.18/public/pod-infrastructure
docker push 192.168.1.18/public/coredns:0.9.10
docker push 192.168.1.18/public/kube-router
docker push 192.168.1.18/public/kubernetes-dashboard-amd64:1.8.0
docker push 192.168.1.18/public/busybox
9 Log in to the Dashboard
(1) Check the Dashboard port
[root@ingest01 ~]# kubectl -n kube-system get svc kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 172.21.107.167 <none> 9090:8601/TCP 4h
(2) Access the Dashboard
Open http://192.168.1.18:8601/ in a browser.