Lab environment setup:
k8s resource package (installation files)
master: 14.0.0.10
node1: 14.0.0.11 (Docker already installed)
node2: 14.0.0.13 (Docker already installed)
iptables -F
setenforce 0

Procedure:
On the master (14.0.0.10):
mkdir k8s
cd k8s
From the k8s resource package, copy etcd-cert.sh (the certificate-generation script) and etcd.sh (the service-creation script) into the k8s directory.

mkdir etcd-cert    (directory for the etcd certificates)
cd etcd-cert
From the etcd-cert folder of the k8s resource package, copy cfssl, cfssljson and cfssl-certinfo directly into this directory.
mv * /usr/local/bin/
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
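A quick sanity check that the cfssl tools are installed and executable (optional):
cfssl version    >> prints the cfssl version and build revision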
1. Define the CA configuration:
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

2. Create the CA signing request:
cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -    >> generates the ca-key.pem and ca.pem certificates
Note: ls etcd-cert should now show 5 files: 1. ca-config.json, 2. ca.csr (the intermediate signing request), 3. ca-csr.json, 4. ca-key.pem, 5. ca.pem

3. Specify the hosts used for communication and verification among the three etcd nodes:
cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "14.0.0.10",
    "14.0.0.11",
    "14.0.0.13"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

4. Generate the etcd server certificates server-key.pem and server.pem:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
Note: the etcd-cert directory should now contain 9 files.
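Optionally, inspect the new server certificate and confirm that the three node IPs were picked up as subject alternative names (cfssl-certinfo prints the certificate details as JSON):
cfssl-certinfo -cert server.pem    >> the "sans" field should list 14.0.0.10, 14.0.0.11 and 14.0.0.13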

cd /root/k8s
Copy kubernetes-server-linux-amd64.tar.gz and etcd-v3.3.10-linux-amd64.tar.gz into this directory; it should now contain etcd-cert.sh, etcd.sh, the etcd-cert directory, and the two tarballs.
tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
mkdir /opt/etcd/{cfg,bin,ssl} -p    // cfg: configuration files, bin: binaries, ssl: certificates
mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
cp etcd-cert/*.pem /opt/etcd/ssl/
bash etcd.sh etcd01 14.0.0.10 etcd02=https://14.0.0.11:2380,etcd03=https://14.0.0.13:2380    >> declares the cluster members
vim /opt/etcd/cfg/etcd    >> check the generated configuration (a sketch of the expected content follows)
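For reference, the generated /opt/etcd/cfg/etcd on the master should look roughly like this; a sketch assuming etcd.sh fills in the same fields shown for the node configuration further below, using the master's own name and IP:

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://14.0.0.10:2380"
ETCD_LISTEN_CLIENT_URLS="https://14.0.0.10:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://14.0.0.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://14.0.0.10:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://14.0.0.10:2380,etcd02=https://14.0.0.11:2380,etcd03=https://14.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"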
scp -r /opt/etcd/ root@14.0.0.11:/opt/
scp -r /opt/etcd/ root@14.0.0.13:/opt/
scp -r /usr/lib/systemd/system/etcd.service root@14.0.0.11:/usr/lib/systemd/system/
scp -r /usr/lib/systemd/system/etcd.service root@14.0.0.13:/usr/lib/systemd/system/
On the node machines:
ls /opt/etcd    >> should show three directories: bin (etcd, etcdctl), cfg (etcd), ssl (4 .pem files)

On node1 and node2:
vim /opt/etcd/cfg/etcd    >> change the node name and IP addresses (node1's values shown below; node2 uses etcd03 and 14.0.0.13)

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://14.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://14.0.0.11:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://14.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://14.0.0.11:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://14.0.0.10:2380,etcd02=https://14.0.0.11:2380,etcd03=https://14.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

systemctl start etcd
systemctl status etcd
Check: systemctl status etcd should show "running"; that means it started successfully.
Note: check the cluster health from the master, in the etcd-cert directory:
[root@localhost etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://14.0.0.10:2379,https://14.0.0.11:2379,https://14.0.0.13:2379" cluster-health
If it prints "cluster is healthy!", the etcd cluster is up.

From the master, in the etcd-cert directory, write the flannel network configuration into etcd:
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://14.0.0.10:2379,https://14.0.0.11:2379,https://14.0.0.13:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
On the nodes, check it from the /opt/etcd/ssl directory (cd /opt/etcd/ssl):
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://14.0.0.10:2379,https://14.0.0.11:2379,https://14.0.0.13:2379" get /coreos.com/network/config
If both commands print { "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}, the configuration was written successfully.

On both node machines:
Copy flannel-v0.10.0-linux-amd64.tar.gz from the resource package into the home directory.
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz    >> extracts 3 files
mkdir /opt/kubernetes/{cfg,bin,ssl} -p
mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/

Copy flannel.sh from the resource package in, then start flannel:
bash flannel.sh https://14.0.0.10:2379,https://14.0.0.11:2379,https://14.0.0.13:2379

Configure Docker to use flannel:
vim /usr/lib/systemd/system/docker.service
13  # for containers run by docker
14  EnvironmentFile=/run/flannel/subnet.env        >> add this line
15  ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock        >> add $DOCKER_NETWORK_OPTIONS to the existing ExecStart line

cat /run/flannel/subnet.env    // shows the subnet that this node's containers will use
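Typical content of /run/flannel/subnet.env; a sketch, assuming mk-docker-opts.sh wrote it (the exact subnet differs on every node):
DOCKER_OPT_BIP="--bip=172.17.42.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.42.1/24 --ip-masq=false --mtu=1450"
The important part is DOCKER_NETWORK_OPTIONS, which the modified docker.service above passes to dockerd.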

systemctl daemon-reload
systemctl restart docker
ifconfig    >> a "flannel.1" interface should appear; docker0's address falls within the flannel.1 subnet

Create a test container on each node and check its IP address:
docker run -it centos:7 /bin/bash
yum install net-tools -y
ifconfig
Ping the container IPs discovered on each node from the other to verify they can reach each other (see the example below).
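For example (the container IPs below are purely illustrative; use whatever ifconfig reports inside each container):
in node1's container (e.g. 172.17.42.2):  ping 172.17.89.2    >> node2's container IP
in node2's container (e.g. 172.17.89.2):  ping 172.17.42.2    >> node1's container IP
If both pings get replies, the flannel overlay between the nodes works.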

------------------------------------ Deploy the master components -----------------------------
1. Generate the apiserver certificates. In the k8s directory on the master:
Copy master.zip into this directory.
unzip master.zip
mkdir /opt/kubernetes/{cfg,bin,ssl} -p
mkdir k8s-cert
cd k8s-cert/    >> copy k8s-cert.sh into this directory
Note: edit the cat > server-csr.json <<EOF block inside k8s-cert.sh and update the host IP list: the 2 master addresses, the 1 floating VIP address, and the 2 nginx reverse-proxy (load balancer) addresses.
bash k8s-cert.sh
ls *.pem    >> there should be 8 certificates in 4 pairs: 2 admin, 2 ca, 2 kube-proxy, 2 server
cp ca*.pem server*.pem /opt/kubernetes/ssl/

cd ..    >> back in the k8s directory, make sure kubernetes-server-linux-amd64.tar.gz is present (copy it in if it is not)
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd /root/k8s/kubernetes/server/bin
cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/

head -c 16 /dev/urandom | od -An -t x | tr -d ' '    >> generates a random token, e.g. 3200edb5769b3a64cd4f93b679652457
vim /opt/kubernetes/cfg/token.csv    >> add the line below, substituting the token just generated
xxxxxxxxxxxxxxxxxxx,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
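Equivalently, the token file can be generated in one go; a small sketch (the token value will differ every run):
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /opt/kubernetes/cfg/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF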

cd /root/k8s/
bash apiserver.sh 14.0.0.10 https://14.0.0.10:2379,https://14.0.0.11:2379,https://14.0.0.13:2379
cat /opt/kubernetes/cfg/kube-apiserver    >> check that the parameters are correct
netstat -ntap | grep 6443    >> the https port; this is a checkpoint, it must be listening before you continue
netstat -ntap | grep 8080    >> the http port
./scheduler.sh 127.0.0.1
chmod +x controller-manager.sh
./controller-manager.sh 127.0.0.1
/opt/kubernetes/bin/kubectl get cs    >> check the master component status; everything must be Healthy (see the expected output below)
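The expected output looks roughly like this (the etcd entries only show Healthy if the apiserver can reach the etcd cluster):
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}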

------------------------------------ Deploy the node components ------------------------------------------
On the master, in the server bin directory (cd /root/k8s/kubernetes/server/bin):
scp kubelet kube-proxy root@14.0.0.11:/opt/kubernetes/bin/
scp kubelet kube-proxy root@14.0.0.13:/opt/kubernetes/bin/

In the home directory (~) on both nodes: copy node.zip in
unzip node.zip

In the k8s directory on the master:
mkdir kubeconfig
cd kubeconfig/    >> copy kubeconfig.sh from the resource package into this directory
mv kubeconfig.sh kubeconfig
vim kubeconfig

Delete the first 8 lines, then on line 17 change the token: --token=6351d652249951f79c33acdab329e4c4 \    (the random token generated earlier; a sketch of the resulting block follows)
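Assuming the standard kubeconfig.sh layout, the credentials block should end up looking like this after the edit (the token must match the one written to /opt/kubernetes/cfg/token.csv):
kubectl config set-credentials kubelet-bootstrap \
  --token=6351d652249951f79c33acdab329e4c4 \
  --kubeconfig=bootstrap.kubeconfig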
vim /etc/profile
Append at the end: export PATH=$PATH:/opt/kubernetes/bin/
source /etc/profile
kubectl get cs    >> check the cluster component health
bash kubeconfig 14.0.0.10 /root/k8s/k8s-cert/
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@14.0.0.11:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@14.0.0.13:/opt/kubernetes/cfg/
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

In node1's home directory (~):
bash kubelet.sh 14.0.0.11
ps aux | grep kube

In the kubeconfig directory on the master:
kubectl get csr    >> shows the node's certificate request; note its name for the next command
kubectl certificate approve XXXXXXXXXXXXXXXX    >> approve the request (grants the node permission to join)
kubectl get csr    >> Approved,Issued means success
kubectl get node    >> one node should appear with status Ready

On node1:
bash proxy.sh 14.0.0.11
systemctl status kube-proxy.service    >> running means success
scp -r /opt/kubernetes/ root@14.0.0.13:/opt/
scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@14.0.0.13:/usr/lib/systemd/system/

On node2:
cd /opt/kubernetes/ssl/
rm -rf *    >> remove the certificates copied over from node1; node2 will request its own
cd ../cfg    >> change node1's IP to node2's in the three config files below (or use the sed sketch after this block)
vim kubelet
--hostname-override=14.0.0.13
vim kubelet.config
address: 14.0.0.13
vim kube-proxy
--hostname-override=14.0.0.13
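Equivalently, the three edits above can be done with sed; a sketch, run from /opt/kubernetes/cfg (double-check the result, since it replaces every occurrence of node1's IP in those files):
cd /opt/kubernetes/cfg
sed -i 's/14\.0\.0\.11/14.0.0.13/g' kubelet kubelet.config kube-proxy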
systemctl start kubelet.service
systemctl enable kubelet.service
systemctl start kube-proxy.service
systemctl enable kube-proxy.service

On the master:
kubectl get csr    >> node2's certificate request appears; note its name
kubectl certificate approve xxxxxxxxxxx    (the CSR name from the previous command)
kubectl get nodes
All nodes in the cluster showing Ready means success.

On master1:
scp -r /opt/kubernetes/ root@14.0.0.14:/opt
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@14.0.0.14:/usr/lib/systemd/system/
scp -r /opt/etcd/ root@14.0.0.14:/opt/
On master2:
vim /opt/kubernetes/cfg/kube-apiserver    >> edit lines 5 and 7, changing them to master2's own IP address
systemctl start kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl start kube-scheduler.service
vim /etc/profile    >> append export PATH=$PATH:/opt/kubernetes/bin/
source /etc/profile
kubectl get node

On both nginx reverse-proxy (load balancer) servers:
vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0

yum install nginx -y
vim /etc/nginx/nginx.conf
Below the existing events { worker_connections 1024; } block, add:
stream {

    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 14.0.0.10:6443;
        server 14.0.0.14:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

systemctl start nginx
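A quick check that the stream proxy actually came up (optional):
netstat -ntap | grep nginx    >> nginx should be listening on 6443 (in addition to the default port 80)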
Copy the keepalived.conf script from the resource package into the current directory (install keepalived first if needed: yum install keepalived -y).
cp keepalived.conf /etc/keepalived/keepalived.conf
vim /etc/keepalived/keepalived.conf    >> adjust the parameters (VIP, network interface, priority; a sketch follows below)
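A minimal sketch of /etc/keepalived/keepalived.conf for the primary load balancer, with the VIP 14.0.0.100 used later in this guide; the interface name ens33 is an assumption, adjust it to your system, and on the backup load balancer use state BACKUP and a lower priority (e.g. 90):
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        14.0.0.100/24
    }
    track_script {
        check_nginx
    }
}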
vim /etc/nginx/check_nginx.sh
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
chmod +x /etc/nginx/check_nginx.sh
systemctl start keepalived
ip a    >> check that the virtual IP is present; then run pkill nginx on the master LB and confirm the VIP fails over to the backup

On both node machines:
[root@localhost cfg]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
[root@localhost cfg]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
[root@localhost cfg]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
In each of the three files, change the server line to: server: https://14.0.0.100:6443    (the VIP; see the sed sketch below)
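The same change with sed, plus a restart so kubelet and kube-proxy pick up the new apiserver address; a sketch assuming the files currently point at https://14.0.0.10:6443:
cd /opt/kubernetes/cfg
sed -i 's#https://14\.0\.0\.10:6443#https://14.0.0.100:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
systemctl restart kubelet.service kube-proxy.service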

On the nginx reverse-proxy servers:
tail /var/log/nginx/k8s-access.log    >> the log should show connections from the two node addresses being forwarded to the two master addresses

On master01:
kubectl run nginx --image=nginx
kubectl get pods    >> the status goes from ContainerCreating to Running
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
kubectl logs xxxxxxxxxxx    (the pod name from kubectl get pods)
kubectl get pods -o wide
On the node the pod was scheduled to: curl 172.17.31.3    (the pod IP shown by kubectl get pods -o wide)
