I. Server Planning

1. Environment

Hostname      IP               Docker     kubelet   kubeadm   kubectl   Role
k8s-master1   192.168.209.132  19.03.13   1.19.4    1.19.4    1.19.4    master
k8s-node1     192.168.209.134  19.03.13   1.19.4    1.19.4    1.19.4    node1
k8s-node2     192.168.209.135  19.03.13   1.19.4    1.19.4    1.19.4    node2

2. System image

[root@localhost ~]# cat /etc/redhat-release

CentOS Linux release 7.9.2009 (Core)

3. Requirements

  • One or more machines running CentOS 7
  • At least 2 GB of RAM and at least 2 CPU cores per machine
  • Full network connectivity between all machines in the cluster
  • Internet access from every machine, in order to pull images
  • Swap disabled
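The requirements above can be checked with a small script before starting; `check_node` below is an illustrative helper (not part of any tool), fed the values that `nproc` and `free -m` report on a real host.

```shell
#!/bin/sh
# Illustrative pre-flight check for the requirements above:
# at least 2 CPUs, at least 2 GB of RAM, and swap disabled.
check_node() {
  cpus=$1; mem_mb=$2; swap_mb=$3
  [ "$cpus" -ge 2 ]      || { echo "need >=2 CPUs, have $cpus"; return 1; }
  [ "$mem_mb" -ge 2048 ] || { echo "need >=2GB RAM, have ${mem_mb}MB"; return 1; }
  [ "$swap_mb" -eq 0 ]   || { echo "swap must be off (${swap_mb}MB active)"; return 1; }
  echo "ok"
}
# On a real host you would call:
#   check_node "$(nproc)" "$(free -m | awk '/^Mem:/{print $2}')" "$(free -m | awk '/^Swap:/{print $2}')"
```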

II. Environment Configuration

1. Disable the firewall

[root@localhost ~]# systemctl stop firewalld 
[root@localhost ~]# systemctl disable firewalld 

2. Disable SELinux

# permanent

[root@localhost ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config

# temporary

[root@localhost ~]# setenforce 0

3. Disable swap

Kubernetes requires swap to be disabled (by default the kubelet refuses to start while swap is on).

# permanent

[root@localhost ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab

# temporary

[root@localhost ~]# swapoff -a
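The sed command above can look cryptic; here is a dry run against a throwaway copy of fstab so the effect is visible before touching the real file (the sample entries are illustrative, not your actual /etc/fstab).

```shell
# Dry run of the fstab edit on a scratch file.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
/dev/mapper/centos-root /     xfs  defaults 0 0
/dev/mapper/centos-swap swap  swap defaults 0 0
EOF
sed -ri 's/.*swap.*/#&/' "$tmp"   # '&' re-inserts the matched line after the '#'
cat "$tmp"                        # the swap line is now commented out
rm -f "$tmp"
```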

4. Configure hosts

vim /etc/hosts 
192.168.209.132 k8s-master1 
192.168.209.134 k8s-node1 
192.168.209.135 k8s-node2
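If this step is scripted across all three machines, an idempotent variant of the hosts edit above avoids duplicate entries on re-runs. `add_host` is an illustrative helper; `HOSTS_FILE` defaults to /etc/hosts but can be pointed at a scratch file.

```shell
# Append a hosts entry only if the hostname is not already present.
HOSTS_FILE=${HOSTS_FILE:-/etc/hosts}
add_host() {
  ip=$1; name=$2
  grep -qw "$name" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
}
# add_host 192.168.209.132 k8s-master1
# add_host 192.168.209.134 k8s-node1
# add_host 192.168.209.135 k8s-node2
```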

5. Set the hostname (run on each machine with its own name from the table)

[root@localhost ~]# hostnamectl set-hostname k8s-master1 
# check
[root@localhost ~]# more /etc/hostname

6. Set bridge netfilter parameters

cat > /etc/sysctl.d/k8s.conf << EOF 
net.bridge.bridge-nf-call-ip6tables = 1 
net.bridge.bridge-nf-call-iptables = 1 
EOF 

# apply
sysctl --system

7. Synchronize the time

yum install ntpdate -y 
ntpdate time.windows.com 

III. Docker Installation

1. Add the Docker yum repository

# optional
yum install wget -y 
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

2. Install Docker

yum install docker-ce-19.03.13 -y

3. Enable Docker at boot

systemctl enable docker.service

4. Configure a registry mirror

Create the daemon.json file:

[root@localhost ~]# mkdir -p /etc/docker 
[root@localhost ~]# tee /etc/docker/daemon.json <<-'EOF' 
{ 
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"] 
} 
EOF

Restart the service:

[root@localhost ~]# systemctl daemon-reload 
[root@localhost ~]# systemctl restart docker
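A typo in daemon.json makes dockerd refuse to start after the restart, so a quick JSON sanity check is worthwhile. `check_json` is an illustrative helper that assumes python3 is on the PATH; any JSON validator works.

```shell
# Validate that a daemon.json file parses as JSON.
check_json() { python3 -m json.tool "$1" >/dev/null 2>&1; }
# On a real host: check_json /etc/docker/daemon.json && echo "daemon.json OK"
```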

5. Basic Docker commands

Check Docker's status:

systemctl status docker.service

List downloaded images:

docker images

Pull an image:

docker pull hello-world

Run an image:

[root@localhost ~]# docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Docker is now installed and working.

IV. Kubernetes Installation

1. Add the Aliyun Kubernetes yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install kubeadm, kubelet, and kubectl

[root@localhost ~]# yum install kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4 -y

3. Enable kubelet at boot

[root@localhost ~]# systemctl enable kubelet.service

4. Verify the installation

yum list installed | grep kubelet 
yum list installed | grep kubeadm 
yum list installed | grep kubectl

5. Change the cgroup driver

Edit daemon.json and add "exec-opts": ["native.cgroupdriver=systemd"]:

[root@localhost ~]# vim /etc/docker/daemon.json 
{
  "registry-mirrors": ["https://v16stybc.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Reload Docker:

[root@localhost ~]# systemctl daemon-reload 
[root@localhost ~]# systemctl restart docker

Changing the cgroup driver removes this kubeadm init warning:

[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
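If you would rather script this edit than open vim, the "exec-opts" key can be merged into the existing file with python3 without retyping it; `merge_exec_opts` is an illustrative helper, and the path argument is a stand-in for /etc/docker/daemon.json.

```shell
# Merge "exec-opts" into an existing daemon.json, preserving current keys.
merge_exec_opts() {
  python3 - "$1" <<'PY'
import json, sys
path = sys.argv[1]
with open(path) as f:
    cfg = json.load(f)
cfg["exec-opts"] = ["native.cgroupdriver=systemd"]
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
PY
}
# On a real host: merge_exec_opts /etc/docker/daemon.json && systemctl restart docker
```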

6. Configure the KUBECONFIG environment variable

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile 

# apply
source /etc/profile

Without this step, kubectl get nodes fails with:

[root@localhost ~]# kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?

7. Initialize the master node

Run on the master node:

kubeadm init --apiserver-advertise-address=192.168.209.132 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.19.4 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --v=2 

--apiserver-advertise-address: the master node's IP address
--pod-network-cidr: the Pod network range; this guide uses the flannel network plugin
--v=2: verbose log output
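The service and pod CIDRs passed above must not overlap with each other, or with the node network (192.168.209.0/24 here). A quick overlap check with python3's ipaddress module:

```shell
# Verify the three address ranges used by this cluster are disjoint.
python3 - <<'PY'
import ipaddress
svc  = ipaddress.ip_network("10.96.0.0/12")     # --service-cidr
pod  = ipaddress.ip_network("10.244.0.0/16")    # --pod-network-cidr
node = ipaddress.ip_network("192.168.209.0/24") # node LAN
print(svc.overlaps(pod), svc.overlaps(node), pod.overlaps(node))
PY
# prints: False False False
```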

Record the kubeadm join command printed in the output; it is needed later to join the worker nodes to the cluster.

8. Join the worker nodes

Run on each worker node:

kubeadm join 192.168.209.132:6443 --token a0n3bj.8o7dhcphidtid5fk --discovery-token-ca-cert-hash sha256:00b608e1314662953a52975c2b5c6c2f4440d2abb255434e459935ba373fa4e8 --v=2

9. Check node status

[root@localhost ~]# kubectl get nodes
NAME          STATUS     ROLES    AGE     VERSION
k8s-master1   NotReady   master   29m     v1.19.4
k8s-node1     NotReady   <none>   24m     v1.19.4
k8s-node2     NotReady   <none>   5m10s   v1.19.4

10. Install the network plugin

Run on the master node:

[root@localhost ~]# vim kube-flannel.yml

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

# apply
[root@localhost ~]# kubectl apply -f kube-flannel.yml

Wait a few minutes after applying.

# check pod status
kubectl get pods -n kube-system

Once the flannel pods are running, check the nodes again with kubectl get nodes:

[root@localhost ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
k8s-master1   Ready    master   32m     v1.19.4
k8s-node1     Ready    <none>   27m     v1.19.4
k8s-node2     Ready    <none>   8m23s   v1.19.4
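Scripts that need to wait for this state can test it mechanically; `all_ready` below is an illustrative awk helper that exits 0 only when every node in `kubectl get nodes` output reports Ready (column layout as shown above).

```shell
# Exit 0 only if every data row's STATUS column reads "Ready".
all_ready() {
  awk 'NR > 1 && $2 != "Ready" { bad = 1 } END { exit bad }'
}
# Usage on the master: kubectl get nodes | all_ready && echo "cluster ready"
```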

If kubectl on a worker node cannot list pods:

[root@k8s-node ~]# kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Fix:

[root@k8s-master ~]# scp -r /etc/kubernetes/admin.conf root@k8s-node:/etc/kubernetes/   # copy admin.conf to each worker node
[root@k8s-node ~]# vim /root/.bash_profile   # add the environment variable on each worker node
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@k8s-node ~]# source /root/.bash_profile

Check the status of the cluster components:

kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}  

11. Test

The cluster is installed; deploy an nginx image as a smoke test.

# create the deployment
[root@localhost ~]# kubectl create deployment nginx --image=nginx
# expose it on a NodePort
[root@localhost ~]# kubectl expose deployment nginx --port=80 --type=NodePort
# check the assigned port (kubectl get pod,svc or kubectl get service)
[root@localhost ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        33m
nginx        NodePort    10.104.117.63   <none>        80:32343/TCP   9s

32343 is the externally accessible NodePort:

http://192.168.209.132:32343
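Opening that URL should show the nginx welcome page. For scripting, the NodePort can be pulled out of the PORT(S) column (80:32343/TCP above); `node_port` is an illustrative helper.

```shell
# Extract the nodePort from a "port:nodePort/proto" field.
node_port() {
  echo "$1" | sed -E 's|^[0-9]+:([0-9]+)/.*|\1|'
}
# node_port "80:32343/TCP"   # prints 32343
```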
