I. Introduction to K8s and an Overview of Its Components

https://kubernetes.io

Kubernetes is an open-source system for automating the deployment, scaling (up and down), and management of containerized applications. It is a container orchestration platform and cluster management system.

1. K8s components

1) K8s system roles are divided into master and node, in a master/worker architecture

Master components: API Server, Controller-Manager, Scheduler, etcd

  API Server (stateless service): the API Server is the control hub of the entire cluster. It mediates the data exchange between the cluster's modules and stores cluster state and information in etcd, a distributed key-value store. It is also the entry point for cluster administration, resource quotas, and the cluster's security mechanisms, providing create/read/update/delete and watch REST APIs for every kind of resource object.

  Scheduler (stateful service): the Scheduler is the Pod scheduling center of the cluster. It uses scheduling algorithms to place Pods on the most suitable Node; when deployed with multiple replicas it uses leader election. It watches the state of all Pods, and as soon as it finds a new Pod that has not been scheduled to any Node (PodSpec.NodeName is empty), it selects the best node for it according to a series of policies.

  Controller-Manager (stateful service): the Controller Manager is the cluster's state manager. When deployed with multiple replicas it also uses leader election.

  etcd: the key-value database. Only the API Server talks to etcd directly. In real deployments, always run an odd number of etcd members and use high-performance SSDs.

Node components: kubelet, kube-proxy, container runtime

  kubelet: communicates and cooperates with the master, manages the Pods on its node, performs health checks and monitoring on containers, and reports the status of the node and of the Pods running on it.

  kube-proxy: handles communication and load balancing between Pods, forwarding the given traffic to the correct backend machines.

  Runtime: manages the containers themselves (containerd in this guide).

kubectl: the client tool. It can be installed on any host; as long as the network can reach the cluster, it can operate on it. It talks to the cluster via a config (kubeconfig) file.
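For example, once an admin kubeconfig is in place (a quick sketch; paths depend on your environment):

kubectl --kubeconfig $HOME/.kube/config get nodes   # point at a config file explicitly
export KUBECONFIG=/etc/kubernetes/admin.conf        # or select one via environment variable
kubectl get nodes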

II. Installing Single-Node Kubernetes

1. Preparation

1) Disable SELinux and firewalld

Temporarily disable SELinux:

setenforce 0 

Permanently disable SELinux:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

Disable firewalld:

systemctl disable firewalld;systemctl stop firewalld
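A quick verification (my own sanity check, not part of the original steps):

getenforce                      # expect Permissive now, Disabled after a reboot
systemctl is-active firewalld   # expect inactive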

2) Set the hostname

hostnamectl  set-hostname  k8s01

Assuming this host's IP is 192.168.100.122:

echo "192.168.100.122  k8s01" >> /etc/hosts

3) Disable swap

Temporarily:

swapoff -a 

Permanently:

vi  /etc/fstab  # comment out the swap line (or use the sed sketch below)
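If you prefer a non-interactive edit, a sed sketch that comments out any active swap line (back up /etc/fstab first; the pattern assumes a standard fstab layout):

cp /etc/fstab /etc/fstab.bak
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab
free -h   # the Swap line should show 0 after swapoff -a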

4) Pass bridged IPv4 traffic to iptables chains

Load the bridge-related kernel module

modprobe br_netfilter  

[root@aminglinux01 ~]# modprobe br_netfilter
modprobe: FATAL: Module br_netfilter not found in directory /lib/modules/4.18.0-553.el8_10.x86_64
[root@aminglinux01 ~]# uname -a
Linux aminglinux01 4.18.0-553.el8_10.x86_64 #1 SMP Fri May 24 13:05:10 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
yum install kernel-devel kernel-headers          ##### install the kernel-related packages, then retry
[root@aminglinux01 yum.repos.d]# modprobe br_netfilter
[root@aminglinux01 yum.repos.d]#

Write the kernel parameters to a config file

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Apply the kernel parameters

sysctl --system
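Note that modprobe alone does not survive a reboot. A hedged sketch to load br_netfilter automatically at boot and verify the parameters took effect (the file name under modules-load.d is my own choice):

echo br_netfilter > /etc/modules-load.d/k8s.conf   # systemd-modules-load reads this at boot
lsmod | grep br_netfilter                          # confirm the module is loaded now
sysctl net.bridge.bridge-nf-call-iptables          # should print 1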

5) Set up time synchronization

yum install -y chrony

systemctl start chronyd

systemctl enable chronyd
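To confirm the clock is actually syncing (an optional check):

chronyc sources    # lists the NTP servers in use
chronyc tracking   # shows the current offset from the reference clock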

2. Install containerd

1) Configure the yum repository

Install the yum-utils tool

yum install -y yum-utils  

Configure Docker's official yum repository

yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

If the official Docker yum repository is unreachable (for whatever reason), use the Aliyun mirror instead:

[root@aminglinux01 ~]# yum-config-manager \
>     --add-repo \
> https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Adding repo from: https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

####### Troubleshooting: yum-utils is installed, yet bash reports "yum-config-manager: command not found"

1. rpm -qa | grep yum-utils            ### confirm yum-utils is installed
2. which yum-config-manager            ### check whether yum-config-manager is under /usr/bin
3. /usr/bin/yum-config-manager --help  ### confirm the command is usable
4. If the command is not in /usr/bin/, try reinstalling: yum reinstall yum-utils
5. If it is still unavailable after reinstalling, check the environment: confirm /usr/bin is in your PATH with echo $PATH
6. If it is not, add it in your .bashrc or .bash_profile: export PATH=$PATH:/usr/bin/
7. source ~/.bashrc

2) Install containerd via yum

yum install containerd.io -y

3) Start the containerd service

systemctl enable containerd

systemctl start containerd

4) Change the sandbox image address

First, generate the default config file

containerd  config default > /etc/containerd/config.toml

Edit the config file

vi  /etc/containerd/config.toml

sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"   # change to the Aliyun mirror address

SystemdCgroup = true   # change this to true
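If you'd rather script these two edits instead of using vi, a sed sketch (it assumes the stock layout of the config generated above; verify with the grep at the end):

sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
grep -E 'sandbox_image|SystemdCgroup' /etc/containerd/config.toml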

Restart the containerd service

systemctl restart containerd

3. Install kubeadm and kubelet

1) Configure the Kubernetes repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Note: this uses the RHEL7 Kubernetes repo, which also works on EL8.

2) Install kubeadm and kubelet

yum install -y kubelet-1.27.4  kubeadm-1.27.4  kubectl-1.27.4

3) Start the kubelet service

systemctl start kubelet.service

systemctl enable kubelet.service

4) Point crictl at containerd

crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock
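This writes the endpoint into /etc/crictl.yaml. A quick sanity check (containerd must already be running):

crictl info | head -5   # should print runtime status JSON instead of a connection error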

4. Initialization

1) Initialize k8s with kubeadm

This step brings up the core k8s components as containers, including the api-server, controller-manager, and so on.

kubeadm init --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.100.122 --kubernetes-version=v1.27.4  --service-cidr=10.15.0.0/16  --pod-network-cidr=10.18.0.0/16
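Optionally, you can pre-pull the control-plane images before running init (the preflight log further below suggests the same); a sketch using the same repository and version flags:

kubeadm config images pull --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version=v1.27.4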

Part of the output looks like this:

kubeadm join 192.168.100.122:6443 --token u529o4.invnj3s6anxekg79 \
        --discovery-token-ca-cert-hash sha256:27b967c444cf3f4a45fedae24ed886663a1dc2cd6ceae03930fcbda491ec5ece

[root@aminglinux01 ~]# 
[root@aminglinux01 ~]# kubeadm init --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.100.151 --kubernetes-version=v1.27.4  --service-cidr=10.15.0.0/16  --pod-network-cidr=10.18.0.0/16
[init] Using Kubernetes version: v1.27.4
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [aminglinux01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.15.0.1 192.168.100.151]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [aminglinux01 localhost] and IPs [192.168.100.151 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [aminglinux01 localhost] and IPs [192.168.100.151 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.004523 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node aminglinux01 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node aminglinux01 as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: 0gaylg.gdnwgjmgt2ejccqd
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.151:6443 --token 0gaylg.gdnwgjmgt2ejccqd \
	--discovery-token-ca-cert-hash sha256:7eaaf0e1bb6109cb74ec07db778089f9b33b3471d85d5cfcbf7fcefca96e34bc

######## Problem hit during init:
[preflight] Running pre-flight checks
        [WARNING FileExisting-tc]: tc not found in system path
        [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

######## Fix: run sysctl -w net.ipv4.ip_forward=1

Note: the join command above is what you would run to add worker nodes to a cluster; it is not needed for a single-node setup. The token is valid for 24 hours; once it expires, generate a fresh join command with:

kubeadm token create --print-join-command

2) Create the config file

This config file holds the k8s admin user's credentials; without it, the command-line tool cannot access the resource objects in k8s (pods, deployments, services, and so on).

mkdir -p $HOME/.kube

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

chown $(id -u):$(id -g) $HOME/.kube/config

Now you can run the following to fetch node and pod information:

kubectl get node   ## get node info

kubectl get pod --all-namespaces ## list pods in all namespaces

5. Install the network plugin

At this point k8s still cannot work properly: its network plugin is not installed yet, so the components cannot communicate with each other. Let's install the Calico network plugin.

1) Download the Calico deployment yaml

curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O

[root@aminglinux01 ~]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0curl: (7) Failed to connect to raw.githubusercontent.com port 443: Connection refused

Workaround: pin GitHub's raw hosts in /etc/hosts
vi /etc/hosts
185.199.108.133 raw.githubusercontent.com
185.199.109.133 raw.githubusercontent.com
185.199.110.133 raw.githubusercontent.com
185.199.111.133 raw.githubusercontent.com


[root@aminglinux01 ~]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  232k  100  232k    0     0   258k      0 --:--:-- --:--:-- --:--:--  257k

2) Edit the yaml file

Change the Pod network CIDR defined in the yaml

vim calico.yaml  ## find the following

# - name: CALICO_IPV4POOL_CIDR

# value: "192.168.0.0/16"

# change to:

- name: CALICO_IPV4POOL_CIDR

  value: "10.18.0.0/16"

Watch the indentation: many people hit errors here because of indentation mistakes. A scripted alternative is sketched below.
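The same edit as two sed one-liners, which sidesteps manual indentation errors (assuming the commented lines appear exactly as in the v3.25.0 manifest):

sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.18.0.0/16"|' calico.yaml
grep -A1 'CALICO_IPV4POOL_CIDR' calico.yaml   # confirm the result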

3) Deploy Calico

kubectl apply -f calico.yaml

Check the pods

kubectl get pods -n kube-system

Everything is healthy once all pods reach the Running state.

[root@aminglinux01 ~]# kubectl get -A po
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6c99c8747f-wtdzp   1/1     Running   0          95s
kube-system   calico-node-pn4lx                          1/1     Running   0          96s
kube-system   coredns-65dcc469f7-28snq                   1/1     Running   0          108m
kube-system   coredns-65dcc469f7-d8kr6                   1/1     Running   0          108m
kube-system   etcd-aminglinux01                          1/1     Running   0          108m
kube-system   kube-apiserver-aminglinux01                1/1     Running   0          108m
kube-system   kube-controller-manager-aminglinux01       1/1     Running   0          108m
kube-system   kube-proxy-48r6j                           1/1     Running   0          108m
kube-system   kube-scheduler-aminglinux01                1/1     Running   0          108m
[root@aminglinux01 ~]# 

That's the end of the installation. If you want to access and manage k8s from a browser, you can also install the Dashboard component; it's simple enough to search for yourself. Honestly, I don't find it very useful.

III. Building a k8s Cluster

1. Preparation (on all three machines)

1) Disable SELinux and firewalld

Temporarily disable SELinux:

setenforce 0 

Permanently disable SELinux:

sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

Disable firewalld:

systemctl disable firewalld;  systemctl stop firewalld

2) Set the hostnames

hostnamectl set-hostname aminglinux01  ## on 101

hostnamectl set-hostname aminglinux02  ## on 102

hostnamectl set-hostname aminglinux03  ## on 103

Edit /etc/hosts

cat >> /etc/hosts <<EOF
192.168.100.151 aminglinux01
192.168.100.152 aminglinux02
192.168.100.153 aminglinux03
EOF

3) Disable swap

Temporarily:

swapoff -a 

Permanently:

vi  /etc/fstab  # comment out the swap line

4) Pass bridged IPv4 traffic to iptables chains

Load the bridge-related kernel module

modprobe br_netfilter  

Write the kernel parameters to a config file

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

Apply the kernel parameters

sysctl --system 

[root@localhost ~]# sysctl --system
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-coredump.conf ...
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e
kernel.core_pipe_limit = 16
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.promote_secondaries = 1
net.core.default_qdisc = fq_codel
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/50-libkcapi-optmem_max.conf ...
net.core.optmem_max = 81920
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...
kernel.pid_max = 4194304
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
* Applying /etc/sysctl.conf ...

5) Enable IP forwarding

echo "net.ipv4.ip_forward = 1"  >> /etc/sysctl.conf

sysctl -p

6) Set up time synchronization

yum install -y chrony

systemctl start chronyd

systemctl enable chronyd

2. Install containerd (on all three machines)

1) Configure the yum repository

Install the yum-utils tool

yum install -y yum-utils  

Configure Docker's official yum repository

yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

2) Install containerd via yum

yum install containerd.io -y

3) Start the containerd service

systemctl enable containerd

systemctl start containerd

4) Change the sandbox image address

First, generate the default config file

containerd  config default > /etc/containerd/config.toml

Edit the config file

vi  /etc/containerd/config.toml  ## change the following

sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"   # change to the Aliyun mirror address

SystemdCgroup = true   # change this to true

Restart the containerd service

systemctl restart containerd

3. Install kubeadm and kubelet (on all three machines)

1) Configure the Kubernetes repository

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Note: this uses the RHEL7 Kubernetes repo, which also works on EL8.

2) Install kubeadm and kubelet

yum install -y kubelet-1.26.2  kubeadm-1.26.2  kubectl-1.26.2

3) Start the kubelet service

systemctl start kubelet.service

systemctl enable kubelet.service

4) Point crictl at containerd

crictl config --set runtime-endpoint=unix:///run/containerd/containerd.sock

4. Initialization

1) Initialize k8s with kubeadm (on 101)

This step brings up the core k8s components as containers, including the api-server, controller-manager, and so on.

kubeadm init --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.100.151 --kubernetes-version=v1.26.2  --service-cidr=10.15.0.0/16  --pod-network-cidr=10.18.0.0/16

[root@aminglinux01 ~]# kubeadm init --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.100.151 --kubernetes-version=v1.26.2  --service-cidr=10.15.0.0/16  --pod-network-cidr=10.18.0.0/16
[init] Using Kubernetes version: v1.26.2
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher


Fix:
1. Run modprobe br_netfilter again
2. sysctl -w net.ipv4.ip_forward=1

Part of the output looks like this:

kubeadm join 192.168.100.151:6443 --token ic3n5j.n32oipowdxl44g1o \
	--discovery-token-ca-cert-hash sha256:8bf2bd16d26cdf16fa580702e4dea11a1da9a60e41951a1a8cfb97c63eace9dd

Note: the command above is what you run on a worker node to join it to the cluster. The token is valid for 24 hours; once it expires, generate a new join command with:

kubeadm token create --print-join-command

2) Create the config file (on 101)

This config file holds the k8s admin user's credentials; without it, the command-line tool cannot access the resource objects in k8s (pods, deployments, services, and so on).

mkdir -p $HOME/.kube

cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

chown $(id -u):$(id -g) $HOME/.kube/config

Now you can run the following to fetch node and pod information:

kubectl get node   ## get node info

kubectl get pod --all-namespaces ## list pods in all namespaces

3) Join the worker nodes to the master (on 102 and 103)

kubeadm join 192.168.100.151:6443 --token ic3n5j.n32oipowdxl44g1o \
        --discovery-token-ca-cert-hash sha256:8bf2bd16d26cdf16fa580702e4dea11a1da9a60e41951a1a8cfb97c63eace9dd

[root@aminglinux02 ~]# kubeadm join 192.168.100.151:6443 --token ic3n5j.n32oipowdxl44g1o \
> --discovery-token-ca-cert-hash sha256:8bf2bd16d26cdf16fa580702e4dea11a1da9a60e41951a1a8cfb97c63eace9dd 
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher



Run "modprobe br_netfilter" again, then retry the join:

[root@aminglinux02 ~]# modprobe br_netfilter
[root@aminglinux02 ~]# kubeadm join 192.168.100.151:6443 --token ic3n5j.n32oipowdxl44g1o --discovery-token-ca-cert-hash sha256:8bf2bd16d26cdf16fa580702e4dea11a1da9a60e41951a1a8cfb97c63eace9dd 
[preflight] Running pre-flight checks
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

4) Check the node info (on 101)

kubectl get node  ## you should see three nodes (NotReady until the network plugin is installed)

[root@aminglinux01 ~]# kubectl get -A node
NAME           STATUS     ROLES           AGE     VERSION
aminglinux01   NotReady   control-plane   14m     v1.26.2
aminglinux02   NotReady   <none>          3m25s   v1.26.2
aminglinux03   NotReady   <none>          2m52s   v1.26.2
[root@aminglinux01 ~]# 

5. Install the network plugin (on 101)

At this point k8s still cannot work properly: its network plugin is not installed yet, so the components cannot communicate with each other. Let's install the Calico network plugin.

1) Download the Calico deployment yaml

curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O

2) Edit the yaml file

Change the Pod network CIDR defined in the yaml

vim calico.yaml  ## find the following

# - name: CALICO_IPV4POOL_CIDR

# value: "192.168.0.0/16"

# change to:

- name: CALICO_IPV4POOL_CIDR

  value: "10.18.0.0/16"

Watch the indentation: many people hit errors here because of indentation mistakes.

Keep - name and value at the same indentation as the neighboring entries.

3) Deploy Calico

kubectl apply -f calico.yaml

If you keep failing to pull the Calico images, you can download them elsewhere and deploy offline: upload the tar files to all three machines and import them (Releases · projectcalico/calico · GitHub). A sketch for producing the tars follows the import log below.

[root@aminglinux01 ~]# ctr -n k8s.io images import calico-cni.tar 
unpacking docker.io/calico/cni:v3.25.0 (sha256:41b3cda67b0993ae0ea620fa534f01ab0b1a840da468bf444dc44cca7a026f20)...done
[root@aminglinux01 ~]# ctr -n k8s.io images import calico-node.tar 
unpacking docker.io/calico/node:v3.25.0 (sha256:6601dafde8d3256bd1c6e223a165031a48b732346608f674b1871be8a12f6006)...done
[root@aminglinux01 ~]# ctr -n k8s.io images import calico-kube-controllers.tar 
unpacking docker.io/calico/kube-controllers:v3.25.0 (sha256:caf3e6c659cab8e3fb2f221ddceff54565b686d40a2146b3482c96996ce0fb11)...done
[root@aminglinux01 ~]# ctr -n k8s.io images import calico-pod2daemon.tar 
unpacking docker.io/calico/pod2daemon-flexvol:v3.25.0 (sha256:410ab3a6fde9c3c995a6aeb3d64f24967882ad3c4a7d159ed9acd822d9367821)...done
[root@aminglinux01 ~]# 
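For reference, one way to produce those tar files on a machine that can reach Docker Hub, a sketch with image tags assumed to match the v3.25.0 manifest:

ctr images pull docker.io/calico/cni:v3.25.0
ctr images export calico-cni.tar docker.io/calico/cni:v3.25.0
# repeat for calico/node, calico/kube-controllers and calico/pod2daemon-flexvol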

Check the pods

kubectl get pods -n kube-system

[root@aminglinux01 ~]# kubectl get -A pod
NAMESPACE     NAME                                      READY   STATUS    RESTARTS      AGE
kube-system   calico-kube-controllers-57b57c56f-smqfj   1/1     Running   0             53s
kube-system   calico-node-546gp                         1/1     Running   0             53s
kube-system   calico-node-lttfj                         1/1     Running   0             53s
kube-system   calico-node-t8n97                         0/1     Running   0             53s
kube-system   coredns-567c556887-pqv8h                  1/1     Running   1 (40m ago)   174m
kube-system   coredns-567c556887-vgsth                  1/1     Running   1 (40m ago)   174m
kube-system   etcd-aminglinux01                         1/1     Running   1 (40m ago)   174m
kube-system   kube-apiserver-aminglinux01               1/1     Running   1 (40m ago)   174m
kube-system   kube-controller-manager-aminglinux01      1/1     Running   1 (40m ago)   174m
kube-system   kube-proxy-fbzxg                          1/1     Running   1 (40m ago)   174m
kube-system   kube-proxy-k82tm                          1/1     Running   1 (37m ago)   163m
kube-system   kube-proxy-zl2dc                          1/1     Running   1 (37m ago)   162m
kube-system   kube-scheduler-aminglinux01               1/1     Running   1 (40m ago)   174m

Everything is healthy once all pods reach the Running state.

IV. Quickly Deploying an Application on k8s

1) Create a deployment

kubectl create deployment testdp --image=nginx:1.23.2

[root@aminglinux01 ~]# kubectl create deployment lucky --image=registry.cn-hangzhou.aliyuncs.com/*/lucky:2.8.3
deployment.apps/lucky created
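To see the declarative manifest this imperative command corresponds to, you can render it without creating anything (a sketch using kubectl's client-side dry run):

kubectl create deployment lucky --image=registry.cn-hangzhou.aliyuncs.com/*/lucky:2.8.3 --dry-run=client -o yaml > lucky.yaml
kubectl apply -f lucky.yaml   # declarative alternative to the create command above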

2) View the deployment

kubectl get deployment

[root@aminglinux01 ~]# kubectl get deployment
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
lucky    1/1     1            1           7s

3) View the pods

kubectl get pods

[root@aminglinux01 ~]# kubectl get pods
NAME                      READY   STATUS             RESTARTS   AGE
lucky-6cdcf8b9d4-qslbj    1/1     Running            0          12s

4) View pod details

kubectl describe pod <pod-name>

[root@aminglinux01 ~]# kubectl describe pod lucky-6cdcf8b9d4-qslbj
Name:             lucky-6cdcf8b9d4-qslbj
Namespace:        default
Priority:         0
Service Account:  default
Node:             aminglinux03/192.168.100.153
Start Time:       Thu, 04 Jul 2024 18:05:52 -0400
Labels:           app=lucky
                  pod-template-hash=6cdcf8b9d4
Annotations:      cni.projectcalico.org/containerID: 27f13a70894ec189971c89ba016da203684ddef091456e004daf8b8197225cb6
                  cni.projectcalico.org/podIP: 10.18.68.129/32
                  cni.projectcalico.org/podIPs: 10.18.68.129/32
Status:           Running
IP:               10.18.68.129
IPs:
  IP:           10.18.68.129
Controlled By:  ReplicaSet/lucky-6cdcf8b9d4
Containers:
  lucky:
    Container ID:   containerd://7375c0abfd652d3bc44e6f004143668d56fae726696da9afceb6ca7ff3af5f54
    Image:          registry.cn-hangzhou.aliyuncs.com/daliyused/lucky:2.8.3
    Image ID:       registry.cn-hangzhou.aliyuncs.com/daliyused/lucky@sha256:39cf30e54b037e4f2d4806011c33357c73554feb5f30403829b736f4f37da1f9
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 04 Jul 2024 18:05:57 -0400
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-v46sq (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kube-api-access-v46sq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  2m7s  default-scheduler  Successfully assigned default/lucky-6cdcf8b9d4-qslbj to aminglinux03
  Normal  Pulling    2m7s  kubelet            Pulling image "registry.cn-hangzhou.aliyuncs.com/daliyused/lucky:2.8.3"
  Normal  Pulled     2m4s  kubelet            Successfully pulled image "registry.cn-hangzhou.aliyuncs.com/daliyused/lucky:2.8.3" in 3.786250106s (3.786284571s including waiting)
  Normal  Created    2m4s  kubelet            Created container lucky
  Normal  Started    2m3s  kubelet            Started container lucky

5) Create a service to expose the pod's port on the nodes

kubectl expose deployment lucky --port=16601 --type=NodePort --target-port=10661 --name=lucky

[root@aminglinux01 ~]# kubectl expose deployment lucky --port=16601 --type=NodePort --target-port=10661 --name=lucky
service/lucky exposed

6) View the service

kubectl get svc

[root@aminglinux01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)           AGE
kubernetes   ClusterIP   10.15.0.1       <none>        443/TCP           3h21m
lucky        NodePort    10.15.104.133   <none>        16601:31368/TCP   33s
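In the PORT(S) column, 16601:31368/TCP means 16601 is the Service's cluster-internal port and 31368 is the NodePort opened on every node; traffic is then forwarded to the pod's target port (10661 here). A quick test, assuming the container actually listens on that target port (IPs and ports taken from this setup's output):

curl http://192.168.100.151:31368/   # from outside the cluster: any node IP + NodePort
curl http://10.15.104.133:16601/     # from inside the cluster: ClusterIP + Service port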
