A Detailed Guide to Installing and Deploying k8s

The three deployment methods provided by the Kubernetes project
minikube

Minikube is a tool that quickly runs a single-node Kubernetes locally. It is meant only for users trying out Kubernetes or doing day-to-day development. Deployment guide: Using Minikube to Create a Cluster | Kubernetes

kubeadm

kubeadm is also a tool. It provides kubeadm init and kubeadm join, and is used to deploy a Kubernetes cluster quickly. Deployment guide: Kubeadm | Kubernetes

Binary packages

Recommended: download the release binary packages from the official site and deploy each component by hand to assemble the Kubernetes cluster. Download: https://github.com/kubernetes/kubernetes/releases

Installing with kubeadm
1. Prepare the environment

Four servers:

k8s-master 192.168.175.110

k8s-node1 192.168.175.111

k8s-node2 192.168.175.112

k8s-node3 192.168.175.113

Disable the firewall and SELinux

systemctl disable firewalld
systemctl stop firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux
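
A quick sanity check (optional) that both are really off:

getenforce                      # should print Permissive now (Disabled after a reboot)
systemctl is-active firewalld   # should print inactive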

cat << EOF >> /etc/sysctl.conf   # append these to the parameter file the kernel reads
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_nonlocal_bind=1
net.ipv4.ip_forward=1
vm.swappiness=0
EOF
sysctl -p    # make the kernel re-read the file so the settings take effect
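
Note: the two net.bridge.* keys above only exist once the br_netfilter kernel module is loaded; if it is not, sysctl -p fails on them (the kubeadm init preflight error later in this guide is exactly this). A minimal sketch to load it now and on every boot:

modprobe br_netfilter                             # load the module immediately
echo br_netfilter > /etc/modules-load.d/k8s.conf  # auto-load it at boot
lsmod | grep br_netfilter                         # verify the module is loaded
sysctl -p                                         # re-apply; every key should now resolve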

Give every server a proper hostname up front and use fixed IP addresses, so that a later IP change does not break the whole cluster.

2. Confirm Docker is installed, start it, and enable it at boot
[root@k8s-master ~]#  systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@k8s-master ~]#  systemctl start docker
[root@k8s-master ~]# ps aux|grep docker
root      9705  0.5  4.6 1150216 47300 ?       Ssl  21:02   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root      9843  0.0  0.0 112824   976 pts/2    S+   21:02   0:00 grep --color=auto docker
3. Configure Docker to use systemd as the default cgroup driver

Do this on every server, master and nodes alike.

cat << EOF > /etc/docker/daemon.json
{
	"exec-opts":["native.cgroupdriver=systemd"]
}
EOF

# restart docker
systemctl restart docker
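
Confirm the change took effect:

docker info | grep -i 'cgroup driver'   # should print: Cgroup Driver: systemd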
4. Disable the swap partition

Do this on every server: k8s does not want to use swap, and by default the kubelet will not even start with swap enabled, because swapping degrades performance.

[root@k8s-master ~]# swapoff -a  # turn swap off for this boot
[root@k8s-master ~]# sed -i '/swap / s/^\(.*\)$/#\1/g' /etc/fstab    # comment it out permanently
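
Verify that no swap is active any more:

swapon --show   # should print nothing
free -m         # the Swap line should read 0 0 0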
5. Rename the hosts and update the hosts file (run the matching command on its own machine)
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2
hostnamectl set-hostname k8s-node3

Edit the hosts file; every machine needs this:

cat >> /etc/hosts << EOF
192.168.175.110 k8s-master
192.168.175.111 k8s-node1
192.168.175.112 k8s-node2
192.168.175.113 k8s-node3
EOF
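
A quick throwaway loop to check that every name resolves and answers (assuming all four machines are up):

for h in k8s-master k8s-node1 k8s-node2 k8s-node3; do ping -c 1 $h; done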
6. Install kubeadm, kubelet, and kubectl

kubeadm -> the k8s management program -> run on the master (kubeadm init) and on the nodes (kubeadm join) -> it bootstraps the whole k8s cluster, running a large amount of scripting behind the scenes to start k8s for us

kubelet -> manages containers on the node servers -> it drives Docker, telling the Docker daemon which containers to start; it is also how the master and the nodes communicate

An agent that runs on every node in the cluster and makes sure containers are running in Pods

kubectl -> the command-line program used on the master to issue orders to the nodes and control what they do. In this setup it is installed on every machine in the cluster

  • Add the Kubernetes YUM repository

    cat > /etc/yum.repos.d/kubernetes.repo <<EOF
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
  • Install kubeadm, kubectl, and kubelet

    yum install -y kubeadm-1.23.5-0.x86_64 kubectl-1.23.5-0.x86_64 kubelet-1.23.5-0.x86_64
    
  • Start the kubelet service on the master; do not start it on the node servers yet. Enable kubelet at boot on every server, though, because kubelet is the k8s agent on each node and must run from boot. (A version check follows this list.)

    systemctl start kubelet
    systemctl enable kubelet
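
As flagged above, a quick check on each machine that the expected 1.23.5 binaries are in place:

kubeadm version
kubelet --version
kubectl version --client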
    
7. Run on the master host
  • Deploy the Kubernetes master

    Prepare the coredns:1.8.4 image in advance; it is needed later and must be pulled on every machine. (The Aliyun mirror used below may not serve this image under the expected name, so it is pulled from Docker Hub and re-tagged:)

[root@k8s-master ~]# docker pull coredns/coredns:1.8.4
[root@k8s-master ~]# docker tag coredns/coredns:1.8.4 registry.aliyuncs.com/google_containers/coredns:v1.8.4
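
Confirm the re-tagged image is present:

docker images | grep coredns   # both coredns/coredns:1.8.4 and the aliyuncs tag should be listed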
  • Run the initialization on the master server
kubeadm init  --kubernetes-version=v1.23.5 \
--apiserver-advertise-address=192.168.175.110 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

192.168.175.110 is the master's IP.

service-cidr is the address range for publishing/exposing services -> DNAT.

Running it at first produced an error:

[root@k8s-master ~]# kubeadm init --kubernetes-version=v1.23.5  \
> --apiserver-advertise-address=192.168.175.110 \
> --image-repository registry.aliyuncs.com/google_containers \
> --service-cidr=10.1.0.0/16 \
> --pod-network-cidr=10.244.0.0/16
I0914 22:14:09.299430    8277 version.go:255] remote version is much newer: v1.25.0; falling back to: stable-1.23
[init] Using Kubernetes version: v1.23.10
[preflight] Running pre-flight checks
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

[root@k8s-master ~]# cat  /proc/sys/net/bridge/bridge-nf-call-iptables
0
[root@k8s-master ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@k8s-master ~]# cat  /proc/sys/net/bridge/bridge-nf-call-iptables
1

Run kubeadm init again; output like the following means it succeeded. (Writing to /proc with echo only lasts until reboot; the durable fix is loading br_netfilter at boot, as shown in step 1.)

Then you can join any number of worker nodes by running the following on each as root:

 kubeadm join 192.168.175.110:6443 --token s2i0jk.lzb24ty4zr2lflc6 \
	--discovery-token-ca-cert-hash sha256:a3f085636033a0197ea153201fcbb2a712addd07a60146d2c53670931c4dfa29 

Follow the on-screen instructions:

[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config           
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
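
The join token printed by kubeadm init expires after 24 hours. If a node needs to join later than that, print a fresh join command on the master:

kubeadm token create --print-join-command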
8. Join the node servers to the k8s cluster


Test that the nodes can reach the master:

ping k8s-master

Do not run the kubelet service on a node beforehand, or the join will fail. If it is already running, stop it and run kubeadm reset before joining the cluster.

[root@k8s-node1 ~]# kubeadm join 192.168.175.110:6443 --token s2i0jk.lzb24ty4zr2lflc6 \
	--discovery-token-ca-cert-hash sha256:a3f085636033a0197ea153201fcbb2a712addd07a60146d2c53670931c4dfa29 
9. List all the node servers from the master

A STATUS of NotReady means communication between the master and the nodes is not fully working yet: container-to-container (pod) networking is not ready, because no network plugin has been installed.

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   13m     v1.23.5
k8s-node1    NotReady   <none>                 11m     v1.23.5
k8s-node2    NotReady   <none>                 9m      v1.23.5
k8s-node3    NotReady   <none>                 9m51s   v1.23.5
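
The same problem is visible in the system pods: until a network plugin is installed, the coredns pods stay Pending:

kubectl get pods -n kube-system -o wide   # coredns pods remain Pending until a CNI plugin is up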

The master server needs the flannel network plugin installed (run this on the master node) so that pods on the master and pods on the nodes can communicate.

[root@k8s-master ~]# vim  kube-flannel.yml 
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

On the master, run kubectl apply -f kube-flannel.yml. If a node still fails to become Ready, have it rejoin the cluster: run kubeadm reset on the node, then rerun the kubeadm join command shown above.

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   20m   v1.23.5
k8s-node1    Ready    <none>                 18m   v1.23.5
k8s-node2    Ready    <none>                 16m   v1.23.5
k8s-node3    Ready    <none>                 17m   v1.23.5
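
As a final smoke test (optional; nginx here is just an arbitrary public image), verify the flannel pods and schedule a throwaway workload:

kubectl get pods -n kube-system | grep flannel   # one kube-flannel-ds pod per node, all Running
kubectl create deployment nginx --image=nginx    # test deployment
kubectl get pods -o wide                         # the pod should reach Running on one of the nodes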

If it still does not work, consider replacing kube-flannel.yml with the upstream version: https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
