Resources

Server name                                      IP address        Services
master1 (2C/4G; more than 2 CPU cores required)  192.168.100.10    docker, kubeadm, kubelet, kubectl, flannel
node01 (2C/2G)                                   192.168.100.30    docker, kubeadm, kubelet, kubectl, flannel
node02 (2C/2G)                                   192.168.100.40    docker, kubeadm, kubelet, kubectl, flannel
node03 (2C/2G)                                   192.168.100.50    docker, kubeadm, kubelet, kubectl, flannel

1. Install Docker and kubeadm on all nodes
2. Deploy the Kubernetes Master
3. Deploy the container network plugin
4. Deploy the Kubernetes Nodes and join them to the Kubernetes cluster

Environment preparation

On all nodes: turn off the firewall rules, disable SELinux, and disable swap
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
iptables -F
swapoff -a						#swap must be disabled
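#Note: swapoff -a only lasts until the next reboot; a common extra step (a sketch, not from the original) is to comment out the swap entry in /etc/fstab so it stays off:
sed -ri 's/.*swap.*/#&/' /etc/fstab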
#Load the ip_vs kernel modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
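#To confirm the modules actually loaded, you can check with:
lsmod | grep ip_vs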
Set the hostnames (run each command on its corresponding node)
hostnamectl set-hostname master1
hostnamectl set-hostname node01
hostnamectl set-hostname node02
hostnamectl set-hostname node03
Set up passwordless SSH from master1 to the nodes
ssh-keygen -t rsa
cd ~/.ssh/
ssh-copy-id -i id_rsa.pub root@192.168.100.30
ssh-copy-id -i id_rsa.pub root@192.168.100.40
ssh-copy-id -i id_rsa.pub root@192.168.100.50
Update the hosts file on all nodes (edit it on master1, then distribute it with scp below)
vim /etc/hosts
192.168.100.10 master1
192.168.100.30 node01
192.168.100.40 node02
192.168.100.50 node03

scp /etc/hosts node01:/etc
scp /etc/hosts node02:/etc
scp /etc/hosts node03:/etc
Adjust kernel parameters
cat > /etc/sysctl.d/kubernetes.conf << EOF
#Enable bridge mode so that bridge traffic is passed to the iptables chains
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
#Disable IPv6
net.ipv6.conf.all.disable_ipv6=1
#Enable IP forwarding
net.ipv4.ip_forward=1
EOF
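#Note: the net.bridge.* keys above require the br_netfilter kernel module; if sysctl --system reports them as unknown keys, load the module first:
modprobe br_netfilter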

## Apply the parameters
sysctl --system  

Install Docker on all nodes

yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install docker-ce docker-ce-cli containerd.io -y

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://q7n9qid7.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
#Use the systemd-managed cgroup driver for resource control and management; compared with cgroupfs, systemd's limits on CPU, memory, and other resources are simpler, more mature, and more stable.
#Logs are stored in json-file format with a 100M size cap; kubelet exposes the container logs under /var/log/containers, which makes collection by ELK and similar log systems easier.
systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service 
docker info | grep "Cgroup Driver"
#Cgroup Driver: systemd

Install kubeadm, kubelet, and kubectl on all nodes

Define the Kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.20.7 kubeadm-1.20.7 kubectl-1.20.7
Enable kubelet at boot
systemctl enable kubelet.service
#After a kubeadm install, all K8S components run as Pods, i.e. as containers underneath, so kubelet must be set to start at boot (it will restart in a loop until kubeadm init/join completes; this is expected)
List the images needed for initialization
kubeadm config images list
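With kubeadm v1.20.7 the output should list roughly the following (the same versions that are tagged and saved below):
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-scheduler:v1.20.7
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0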

Import the k8s images

Method 1: setup without pre-downloaded k8s images

Run on the master

Download the k8s v1.20.7 images
kubeadm config images pull --kubernetes-version v1.20.7 --image-repository registry.aliyuncs.com/google_containers
docker images
Re-tag the images to the k8s.gcr.io names that kubeadm expects
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.20.7 k8s.gcr.io/kube-proxy:v1.20.7  
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.7 k8s.gcr.io/kube-apiserver:v1.20.7 
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.7 k8s.gcr.io/kube-controller-manager:v1.20.7 
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.7 k8s.gcr.io/kube-scheduler:v1.20.7
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag registry.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0 
docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2   
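If you prefer not to type each tag command by hand, an equivalent loop (a sketch; it assumes all the pulled images share the google_containers prefix):
for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep google_containers); do docker tag $img k8s.gcr.io/${img##*/}; done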
Export the k8s images
mkdir kubeadm-basic.images
cd kubeadm-basic.images

docker save k8s.gcr.io/kube-proxy:v1.20.7 -o kube-proxy.tar
docker save k8s.gcr.io/kube-apiserver:v1.20.7 -o kube-apiserver.tar
docker save k8s.gcr.io/kube-controller-manager:v1.20.7 -o kube-controller-manager.tar
docker save k8s.gcr.io/kube-scheduler:v1.20.7 -o kube-scheduler.tar
docker save k8s.gcr.io/etcd:3.4.13-0 -o etcd.tar
docker save k8s.gcr.io/coredns:1.7.0 -o coredns.tar
docker save k8s.gcr.io/pause:3.2 -o pause.tar
Package the k8s images (note: move back up a level first, since the save commands above were run inside the directory)
cd ..
tar -czvf kubeadm-basic.images.tar.gz kubeadm-basic.images/
Copy the image archive to the node nodes; the images are then loaded on each node with the commands below
scp -r kubeadm-basic.images.tar.gz root@node01:/opt
scp -r kubeadm-basic.images.tar.gz root@node02:/opt
scp -r kubeadm-basic.images.tar.gz root@node03:/opt

Run on each node

cd /opt
tar -zxvf kubeadm-basic.images.tar.gz
for i in $(ls /opt/kubeadm-basic.images/*.tar); do docker load -i $i; done

Method 2: setup with pre-downloaded k8s images

Upload the kubeadm-basic.images.tar.gz archive to the /opt directory on the master node
cd /opt
tar zxvf kubeadm-basic.images.tar.gz

for i in $(ls /opt/kubeadm-basic.images/*.tar); do docker load -i $i; done
Copy the extracted image directory to the node nodes
scp -r kubeadm-basic.images root@node01:/opt
scp -r kubeadm-basic.images root@node02:/opt
scp -r kubeadm-basic.images root@node03:/opt
Run on each node
cd /opt
for i in $(ls /opt/kubeadm-basic.images/*.tar); do docker load -i $i; done

Run on the master

Initialize kubeadm

kubeadm config print init-defaults > /opt/kubeadm-config.yaml
cd /opt/
vim kubeadm-config.yaml
......
11 localAPIEndpoint:
12   advertiseAddress: 192.168.100.10		#specify the master node's IP address
13   bindPort: 6443
......
34 kubernetesVersion: v1.20.7				#specify the Kubernetes version
35 networking:
36   dnsDomain: cluster.local
37   podSubnet: "10.244.0.0/16"				#specify the pod subnet; 10.244.0.0/16 matches flannel's default network
38   serviceSubnet: 10.96.0.0/16			#specify the service subnet
39 scheduler: {}
--- #append the following at the end of the file
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs						#change the default service proxy mode to ipvs
ipvs:
  strictARP: true
  scheduler: rr
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
#--upload-certs automatically distributes the certificate files when nodes join later; on k8s versions below V1.15, use --experimental-upload-certs instead
#tee kubeadm-init.log saves the output to a log file
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.10:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:8be27369ddd5c8cad17eef27825754d921d71c58d1cc9c56cee4988fd86417ef
View the kubeadm-init log
less kubeadm-init.log
Kubernetes configuration file directory
ls /etc/kubernetes/
Directory holding the CA and other certificates and keys
ls /etc/kubernetes/pki
Create the kubectl config
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
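A quick sanity check; until the worker nodes join, only master1 is listed (NotReady is normal before the network plugin is deployed):
kubectl get nodes
Since mode: ipvs was set in kubeadm-config.yaml, you can also confirm from the kube-proxy logs that ipvs is in use (kubeadm labels those pods with k8s-app=kube-proxy):
kubectl logs -n kube-system -l k8s-app=kube-proxy | grep -i ipvs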

Run on each node

kubeadm join 192.168.100.10:6443 --token abcdef.0123456789abcdef \
	--discovery-token-ca-cert-hash sha256:8be27369ddd5c8cad17eef27825754d921d71c58d1cc9c56cee4988fd86417ef
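The default token expires after 24 hours; to join a node later, generate a fresh join command on the master:
kubeadm token create --print-join-command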

Run on the master

Shell autocompletion

echo "source <(kubectl completion bash)" /etc/profile
source /etc/profile
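Note: kubectl's bash completion depends on the bash-completion package; if completion does not take effect after sourcing, install it first:
yum install -y bash-completion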

Deploy the flannel network plugin

Create the flannel resources

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
##Check that the flannel image referenced in the yaml matches the images available on your nodes
## Sync the CNI plugin binary to all nodes
scp /opt/cni/bin/flannel node01:/opt/cni/bin
scp /opt/cni/bin/flannel node02:/opt/cni/bin
scp /opt/cni/bin/flannel node03:/opt/cni/bin

Check the node status on the master (this takes a few minutes)

kubectl get nodes
NAME      STATUS     ROLES                  AGE   VERSION
master1   Ready      control-plane,master   78m   v1.20.7
node01    Ready      <none>                 78m   v1.20.7
node02    NotReady   <none>                 78m   v1.20.7
node03    NotReady   <none>                 78m   v1.20.7
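If a node stays NotReady for long, the usual suspects are the flannel pod and kubelet on that node; standard checks (node02 as an example):
kubectl describe node node02
journalctl -u kubelet --no-pager | tail -n 20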

kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-c9w6l          1/1     Running   0          71m
coredns-bccdc95cf-nql5j          1/1     Running   0          71m
etcd-master1                      1/1     Running   0          71m
kube-apiserver-master1            1/1     Running   0          70m
kube-controller-manager-master1   1/1     Running   0          70m
kube-proxy-558p8                 1/1     Running   0          2m53s
kube-proxy-nwd7g                 1/1     Running   0          2m56s
kube-proxy-wd87d                 1/1     Running   0          2m54s
kube-proxy-qpz8t                 1/1     Running   0          71m
kube-scheduler-master1            1/1     Running   0          70m
kubectl get pods -n kube-flannel
NAME                    READY   STATUS    RESTARTS   AGE
kube-flannel-ds-2k5b9   1/1     Running   0          2m53s
kube-flannel-ds-fhx2c   1/1     Running   0          2m55s
kube-flannel-ds-g6z6q   1/1     Running   0          2m50s
kube-flannel-ds-nkrrj   1/1     Running   0          2m51s

Verification

On the node nodes, log in to the image registry so the image can be pulled
docker login -u lp1078802338 registry.cn-hangzhou.aliyuncs.com		#enter the password when prompted
Create a pod
kubectl create deployment nginx-deployment --image=registry.cn-hangzhou.aliyuncs.com/lp-k8s-prometheus/nginx:v1 --replicas=1
## It is best to check which node the pod was scheduled on, then log in to that node and pull the image with docker first; otherwise startup will be slow
kubectl get pods -o wide
NAME               READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
nginx-deployment   1/1     Running   0          5m49s   10.244.1.2   node01   <none>           <none>

##Run on the node
docker pull registry.cn-hangzhou.aliyuncs.com/lp-k8s-prometheus/nginx:v1

v1: Pulling from lp-k8s-prometheus/nginx
a2abf6c4d29d: Pull complete
a9edb18cadd1: Pull complete
589b7251471a: Pull complete
186b1aaa4aa6: Pull complete
b4df32aa5a72: Pull complete
a0bcbecc962e: Pull complete
docker images

REPOSITORY                                                  TAG        IMAGE ID       CREATED         SIZE
registry.cn-hangzhou.aliyuncs.com/lp-k8s-prometheus/nginx   v1         605c77e624dd   23 months ago   141MB
##Run on the master
kubectl get pods -o wide

NAME                                    READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
pod/nginx-deployment-799b5654d5-b8nqv   1/1     Running   0          27s   10.244.1.3   node01   <none>           <none>
Expose the pod via a Service of type NodePort
kubectl expose deployment nginx-deployment --port=80 --type=NodePort
kubectl get pod,svc -o wide

NAME                                    READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
pod/nginx-deployment-799b5654d5-b8nqv   1/1     Running   0          27s   10.244.1.3   node01   <none>           <none>

NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE   SELECTOR
service/kubernetes         ClusterIP   10.96.0.1       <none>        443/TCP        70m   <none>
service/nginx-deployment   NodePort    10.97.191.117   <none>        80:30619/TCP   8s    app=nginx-deployment
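You can also verify from the command line against any node IP plus the NodePort shown above (30619 here; yours may differ):
curl -I http://192.168.100.30:30619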
Verify in a browser
http://node01:30619