1、Prepare the machines

1.1、Install the virtual machine

    The VM image is ubuntu-22.04.3-live-server-amd64.iso; just press Enter through the installer until the system is installed. Log in as root; being able to ping the host machine and the Internet is all that is needed.

1.2、Configure the virtual machine
1.2.1、Disable the firewall
# systemctl disable ufw
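
disable only takes effect from the next boot; if you also want the firewall stopped right away, the following should do it (not part of the original steps):

// Stop ufw for the current session
# systemctl stop ufw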


// If that config file does not exist, skip the following 2 lines
# sed -ri 's/SELINUX=permissive/SELINUX=disabled/' /etc/selinux/config
# sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
1.2.2、Network time synchronization
// Sync time from aliyun
# apt install ntpdate
# crontab -e
0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com

// Set the time zone
# timedatectl set-timezone Asia/Shanghai
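
The cron job above only fires once an hour; to sync immediately and confirm the time zone change (standard commands, not shown in the original):

// One-off sync and check
# /usr/sbin/ntpdate time1.aliyun.com
# timedatectl
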
1.2.3、Enable kernel forwarding and bridge filtering
// Configure
# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF

// Load the module now
# modprobe br_netfilter

// Check
# lsmod | grep br_netfilter
br_netfilter           32768  0
bridge                307200  1 br_netfilter

// Load on boot
# cat > /etc/modules-load.d/k8s.conf << EOF
overlay
br_netfilter
EOF
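
Both files above are only read at boot. To make the settings active in the running system without a reboot, the usual commands are (assumed here, not shown in the original):

// Load overlay now and re-read all sysctl configuration
# modprobe overlay
# sysctl --system
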
1.2.4、Install ipset and ipvsadm
# apt install ipset ipvsadm

# cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
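
The modules listed in ipvs.conf are likewise only auto-loaded at boot; to load and verify them right away, a sketch using standard modprobe (not from the original):

// Load the ipvs modules for the current session and confirm
# modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack
# lsmod | grep -e ip_vs -e nf_conntrack
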
1.2.5、Disable swap
# cat /etc/fstab

# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/ubuntu-vg/ubuntu-lv during curtin installation
/dev/disk/by-id/dm-uuid-LVM-aMgPZgZ6o3cHyNRGU08LFhzfZuvDoqjTrxFfUt6c3Zu3FwpXO7xWyoRZSNRaLZq1 / ext4 defaults 0 1
# /boot was on /dev/sda2 during curtin installation
/dev/disk/by-uuid/9314b4f8-368c-4f1b-ba74-9fb759ad9270 /boot ext4 defaults 0 1
#/swap.img       none    swap    sw      0       0

Comment out the last line, the one for the swap file.
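
Editing fstab only stops swap from being mounted at the next boot; to turn it off immediately and verify (standard commands, not shown in the original):

// Disable swap for the running system and check that it is gone
# swapoff -a
# free -m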

1.2.6、Configure /etc/hosts
// Append the following to /etc/hosts
10.0.1.11 master1
10.0.1.21 worker1
10.0.1.22 worker2
10.0.1.23 worker3
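
If you prefer, the same entries can be appended with a here-document, matching the style used for the other config files above:

# cat >> /etc/hosts << EOF
10.0.1.11 master1
10.0.1.21 worker1
10.0.1.22 worker2
10.0.1.23 worker3
EOF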

2、Install the container runtime

2.1、Install containerd
# apt install containerd
# apt remove containerd
// Installing containerd also pulls in a new runc as a dependency; the containerd package itself is then removed so that the cri-containerd release below can take its place

// Next, download cri-containerd from GitHub, which is what adds crictl support. Downloading on the Windows host (e.g. with Thunder/Xunlei) is faster; then copy the archive to the VM.
# wget https://github.com/containerd/containerd/releases/download/v1.7.14/cri-containerd-1.7.14-linux-amd64.tar.gz
// Extract
# tar xvf cri-containerd-1.7.14-linux-amd64.tar.gz -C /

// Adjust the configuration
# mkdir /etc/containerd
# containerd config default > /etc/containerd/config.toml
// On line 65 of the file, switch the sandbox image to the aliyun mirror at version 3.9
#    sandbox_image = "registry.k8s.io/pause:3.8"
    sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
// On line 137 of the file, change the value to true
#             SystemdCgroup = false
             SystemdCgroup = true
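
If you would rather script these two edits than open the file by hand, sed patterns like the following should work against the default config generated above (verify with grep afterwards, since the exact default text can differ between containerd versions):

# sed -i 's#sandbox_image = "registry.k8s.io/pause:3.8"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
# sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# grep -n -e sandbox_image -e SystemdCgroup /etc/containerd/config.toml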


// Finally, set containerd to start on boot
# systemctl enable containerd
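
enable by itself does not start a running daemon or reload the edited config.toml; restarting at this point (not shown in the original) makes sure the new settings are in use:

# systemctl restart containerd
# systemctl status containerd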

3、Build the k8s cluster

3.1、Download the k8s software
3.1.1、Download via snap
// Download via snap
# snap install kubeadm --classic

# snap install kubectl --classic

# snap install kubelet --classic

// Check the kubelet service status
# systemctl status snap.kubelet.daemon.service

# cd /etc/systemd/system
# mv snap.kubelet.daemon.service kubelet.service
# systemctl disable snap.kubelet.daemon.service
# systemctl enable kubelet.service
# reboot

# apt install conntrack
# apt install socat
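
A quick sanity check that all three tools are on the PATH and report the expected version (output will vary):

# kubeadm version -o short
# kubectl version --client
# kubelet --version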


// Shut down
# shutdown -h 0


3.1.2、Download via apt
// Download via apt
// The community apt repository provides k8s 1.29; the aliyun mirror also works, but its newest version is k8s 1.28
# curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list

// Refresh the apt sources
# apt update

// See which kubeadm versions the new source provides
# apt-cache policy kubeadm
kubeadm:
  Installed: (none)
  Candidate: 1.28.2-00
  Version table:
     1.28.2-00 500
        500 https://pkgs.k8s.io/core:/stable:/v1.28/deb  Packages
// The newest version available is 1.28.2-00

// Install
# apt install kubeadm kubectl kubelet
// Hold the versions so they are not upgraded automatically
# apt-mark hold kubeadm kubectl kubelet
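
To confirm the hold took effect:

// Verify that the packages are pinned
# apt-mark showhold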

// Shut down
# shutdown -h 0


3.2、Clone the virtual machines (master1 and workers)

Clone a VM in VirtualBox, name it k8s_master1, and change its IP address.

Clone another VM in VirtualBox, name it k8s_worker1, and change its IP address.

# hostnamectl hostname master1
// Each worker VM also needs its IP address changed, and its IP and hostname added to /etc/hosts
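
On Ubuntu Server the IP address is changed via netplan. A minimal sketch for worker1; the file name, interface name, gateway and DNS below are assumptions and must be adapted to your own setup:

# cat /etc/netplan/00-installer-config.yaml
network:
  version: 2
  ethernets:
    enp0s3:                          # interface name: assumption
      addresses: [10.0.1.21/24]      # use 10.0.1.22 / 10.0.1.23 on the other workers
      routes:
        - to: default
          via: 10.0.1.1              # gateway: assumption
      nameservers:
        addresses: [223.5.5.5]       # DNS: assumption
# netplan apply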


// Run the initialization on master1 (the first two command lines are variants that were tried; the last one is the one that was actually used)
# kubeadm init --kubernetes-version=v1.29.3 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/16

# kubeadm init --kubernetes-version=v1.29.3 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/16 --apiserver-advertise-address=10.0.1.11

// Succeeded in one go!
# kubeadm init --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/16 --apiserver-advertise-address=10.0.1.11
[init] Using Kubernetes version: v1.29.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 10.0.1.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [10.0.1.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [10.0.1.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "super-admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 4.503238 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: yyjh09.6he5wfuvsgpclctr
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.1.11:6443 --token yyjh09.6he5wfuvsgpclctr \
        --discovery-token-ca-cert-hash sha256:ea410f8b9757ca344212ff3e906ec9eb44f1902b5ee7a24bdb9c3fe9d8621d5a

// The install succeeded! Let's check
# kubectl get node
E0319 11:28:28.217021    8109 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0319 11:28:28.217430    8109 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0319 11:28:28.219640    8109 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0319 11:28:28.219773    8109 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
E0319 11:28:28.222284    8109 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp 127.0.0.1:8080: connect: connection refused
The connection to the server localhost:8080 was refused - did you specify the right host or port?

// Run the commands from the success message above
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config

// Check again
# kubectl get node
NAME      STATUS     ROLES           AGE   VERSION
master1   NotReady   control-plane   11m   v1.29.3

# kubectl get pod -A
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-857d9ff4c9-sl62g          0/1     Pending   0          12m
kube-system   coredns-857d9ff4c9-z6jjq          0/1     Pending   0          12m
kube-system   etcd-master1                      1/1     Running   0          12m
kube-system   kube-apiserver-master1            1/1     Running   0          12m
kube-system   kube-controller-manager-master1   1/1     Running   0          12m
kube-system   kube-proxy-5l598                  1/1     Running   0          12m
kube-system   kube-scheduler-master1            1/1     Running   0          12m



// On the worker nodes, run the join command printed at the end of the successful init on master1
# kubeadm join 10.0.1.11:6443 --token yyjh09.6he5wfuvsgpclctr \
        --discovery-token-ca-cert-hash sha256:ea410f8b9757ca344212ff3e906ec9eb44f1902b5ee7a24bdb9c3fe9d8621d5a

// Set up kubectl access on the worker as well, here reusing kubelet.conf as the kubeconfig
# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/kubelet.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config

// Check that the node has joined
# kubectl get node
NAME      STATUS     ROLES           AGE    VERSION
master1   NotReady   control-plane   91m    v1.29.3
worker1   NotReady   <none>          7m3s   v1.29.3
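
If the bootstrap token has expired by the time the remaining workers join (tokens are valid for 24 hours by default), a fresh join command can be printed on master1:

# kubeadm token create --print-join-command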



4、Build the network

// Use helm to install calico; first check whether helm is already on the system
# helm
Command 'helm' not found, but can be installed with:
snap install helm

// Not installed; install it as the hint suggests
# snap install helm
error: This revision of snap "helm" was published using classic confinement and thus may perform
       arbitrary system changes outside of the security sandbox that snaps are usually confined to,
       which may put your system at risk.

       If you understand and want to proceed repeat the command including --classic.
root@master1:~# snap install helm --classic
helm 3.14.3 from Snapcrafters✪ installed


// Installing (the steps below follow the projectcalico chart instructions)

1. Add the projectcalico helm repository.

   ```
   helm repo add projectcalico https://projectcalico.docs.tigera.io/charts
   ```

2. Create the tigera-operator namespace.

   ```
   kubectl create namespace tigera-operator
   ```

3. Install the helm chart into the `tigera-operator` namespace.

   ```
   helm install calico projectcalico/tigera-operator --namespace tigera-operator
   ```


// Check
# kubectl get pod -A
NAMESPACE         NAME                                      READY   STATUS              RESTARTS   AGE
calico-system     calico-kube-controllers-fbb8d4c9c-nqd9k   0/1     Pending             0          28s
calico-system     calico-node-7v465                         0/1     Init:0/2            0          28s
calico-system     calico-node-dbmx9                         0/1     Init:1/2            0          28s
calico-system     calico-typha-8b695c9cc-v2vsf              1/1     Running             0          28s
calico-system     csi-node-driver-64mpv                     0/2     ContainerCreating   0          28s
calico-system     csi-node-driver-q5jm5                     0/2     ContainerCreating   0          28s
kube-system       coredns-857d9ff4c9-sl62g                  0/1     Pending             0          100m
kube-system       coredns-857d9ff4c9-z6jjq                  0/1     Pending             0          100m
kube-system       etcd-master1                              1/1     Running             0          100m
kube-system       kube-apiserver-master1                    1/1     Running             0          100m
kube-system       kube-controller-manager-master1           1/1     Running             0          100m
kube-system       kube-proxy-5l598                          1/1     Running             0          100m
kube-system       kube-proxy-798fq                          1/1     Running             0          17m
kube-system       kube-scheduler-master1                    1/1     Running             0          100m
tigera-operator   tigera-operator-748c69cf45-gdhdg          1/1     Running             0          39s

// Keep checking until all pods are in the Running state
# kubectl get pod -A
NAMESPACE          NAME                                      READY   STATUS    RESTARTS   AGE
calico-apiserver   calico-apiserver-67dd77d667-4c4vf         0/1     Running   0          29s
calico-apiserver   calico-apiserver-67dd77d667-8glv5         0/1     Running   0          29s
calico-system      calico-kube-controllers-fbb8d4c9c-nqd9k   1/1     Running   0          2m11s
calico-system      calico-node-7v465                         1/1     Running   0          2m11s
calico-system      calico-node-dbmx9                         1/1     Running   0          2m11s
calico-system      calico-typha-8b695c9cc-v2vsf              1/1     Running   0          2m11s
calico-system      csi-node-driver-64mpv                     2/2     Running   0          2m11s
calico-system      csi-node-driver-q5jm5                     2/2     Running   0          2m11s
kube-system        coredns-857d9ff4c9-sl62g                  1/1     Running   0          102m
kube-system        coredns-857d9ff4c9-z6jjq                  1/1     Running   0          102m
kube-system        etcd-master1                              1/1     Running   0          102m
kube-system        kube-apiserver-master1                    1/1     Running   0          102m
kube-system        kube-controller-manager-master1           1/1     Running   0          102m
kube-system        kube-proxy-5l598                          1/1     Running   0          102m
kube-system        kube-proxy-798fq                          1/1     Running   0          18m
kube-system        kube-scheduler-master1                    1/1     Running   0          102m
tigera-operator    tigera-operator-748c69cf45-gdhdg          1/1     Running   0          2m22s


// Check the node status
# kubectl get node
NAME      STATUS   ROLES           AGE    VERSION
master1   Ready    control-plane   102m   v1.29.3
worker1   Ready    <none>          18m    v1.29.3

// worker1's role label shows <none>; change it to worker
# kubectl label node worker1 node-role.kubernetes.io/worker=worker
node/worker1 labeled
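
A quick check that the label landed (the ROLES column should now show worker for worker1):

# kubectl get node worker1 --show-labels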


5、Testing and monitoring

5.1、Deploy nginx as a test

Write nginx.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxweb
spec:
  selector:
    matchLabels:
      app: nginxweb1
  replicas: 2
  template:
    metadata:
      labels:
        app: nginxweb1
    spec:
      containers:
      - name: nginxwebc
        image: nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: nginxweb-service
spec:
  externalTrafficPolicy: Cluster
  selector:
    app: nginxweb1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30080
  type: NodePort

# kubectl apply -f nginx.yaml
deployment.apps/nginxweb created
service/nginxweb-service created

# kubectl get all
NAME                            READY   STATUS    RESTARTS   AGE
pod/nginxweb-64c569cccc-rj47x   1/1     Running   0          2m59s
pod/nginxweb-64c569cccc-wppsh   1/1     Running   0          2m59s

NAME                       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/kubernetes         ClusterIP   10.96.0.1      <none>        443/TCP        3h13m
service/nginxweb-service   NodePort    10.96.240.49   <none>        80:30080/TCP   2m59s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginxweb   2/2     2            2           2m59s

NAME                                  DESIRED   CURRENT   READY   AGE
replicaset.apps/nginxweb-64c569cccc   2         2         2       2m59s

# curl 10.96.240.49
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

// Or open http://10.0.1.11:30080 in a browser on the Windows host
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.


5.2、Install the dashboard
# helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
"kubernetes-dashboard" has been added to your repositories
root@master1:~/test# helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard
Release "kubernetes-dashboard" does not exist. Installing it now.
NAME: kubernetes-dashboard
LAST DEPLOYED: Wed Mar 20 08:08:32 2024
NAMESPACE: kubernetes-dashboard
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
*************************************************************************************************
*** PLEASE BE PATIENT: Kubernetes Dashboard may need a few minutes to get up and become ready ***
*************************************************************************************************

Congratulations! You have just installed Kubernetes Dashboard in your cluster.

To access Dashboard run:
  kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard-kong-proxy 8443:443

NOTE: In case port-forward command does not work, make sure that kong service name is correct.
      Check the services in Kubernetes Dashboard namespace using:
        kubectl -n kubernetes-dashboard get svc

Dashboard will be available at:
  https://localhost:8443
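
Whichever way the Dashboard ends up being exposed, logging in requires a bearer token. A minimal sketch for creating one; the admin-user ServiceAccount and its cluster-admin binding below are illustrative, not part of the chart's output:

// Create a ServiceAccount, grant it cluster-admin, and issue a login token
# kubectl -n kubernetes-dashboard create serviceaccount admin-user
# kubectl create clusterrolebinding admin-user --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
# kubectl -n kubernetes-dashboard create token admin-user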

The installation above did not work well; the one below succeeded in one go!

This finally yields the management UI shown below.

Job done. Thanks for reading!
