I. Introduction to k8s:

The k8s project grew out of Google's Borg system. k8s abstracts compute resources at a higher level and, by carefully composing containers, delivers the final application service to users.

Benefits of k8s:

  • Hides resource management and failure handling, so users only need to focus on application development
  • Highly available, highly reliable services
  • Workloads can run on clusters built from thousands of machines (a clustered management system)

k8s architecture:

  • A k8s cluster follows a master/worker model; the worker nodes run the Pods (containerized applications)

  Kubernetes consists of the following core components (etcd, apiserver, controller manager, and scheduler run on the master; kubelet, the container runtime, and kube-proxy run on every node):

  • etcd: a distributed storage system in its own right; it holds the metadata the apiserver needs and underpins the high availability of the master components (etcd is not part of k8s itself, but it is indispensable; it acts as a database storing all of the cluster's data)
  • apiserver: the single entry point for operating on resources; provides authentication, authorization, access control, API registration and discovery; can be deployed with horizontal scaling
  • controller manager: maintains the state of the cluster, e.g. failure detection, automatic scaling, and rolling updates; supports hot standby
  • scheduler: handles resource scheduling, placing Pods onto suitable machines according to the configured scheduling policy; supports hot standby
  • kubelet: maintains the container lifecycle and also manages volumes (CVI) and networking (CNI) (present on every node, especially worker nodes, where it watches the Pod lifecycle)
  • Container runtime: manages images and actually runs Pods and containers (CRI)
  • kube-proxy: provides in-cluster service discovery and load balancing for Services

Besides the core components, a few add-ons are recommended:

  • kube-dns: provides DNS service for the whole cluster
  • Ingress Controller: provides an external entry point for services
  • metrics-server: provides resource monitoring
  • Dashboard: provides a GUI
  • Fluentd-elasticsearch: provides cluster log collection, storage, and querying

  • Core layer: Kubernetes' most essential functionality; exposes APIs for building higher-level applications externally, and provides a plugin-based application execution environment internally
  • Application layer: deployment (stateless and stateful applications, batch jobs, clustered applications, etc.) and routing (service discovery, DNS resolution, etc.)
  • Management layer: system metrics (infrastructure, container, and network metrics), automation (auto-scaling, dynamic provisioning, etc.), and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
  • Interface layer: the kubectl command-line tool, client SDKs, and cluster federation
  • Ecosystem: the large ecosystem of container cluster management and scheduling built on top of the interface layer, which can be divided into two scopes:
      • Outside Kubernetes: logging, monitoring, configuration management, CI, CD, Workflow, FaaS, OTS applications, ChatOps, etc.
      • Inside Kubernetes: CRI, CNI, CVI, image registries, Cloud Provider, and the configuration and management of the cluster itself

II. Deploying a k8s cluster

1. Preparing the lab environment

  1. Give each VM 2 CPUs and 2 GB of RAM
  2. Disable SELinux and the firewall on every node
  3. Install the Docker engine on all nodes
  4. swapoff -a    turn off the swap partition
  5. vim /etc/fstab    comment out the swap entry in /etc/fstab
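Step 5 can also be done non-interactively. A minimal sketch, run here against a sample file (with assumed device names) so it is safe to execute anywhere; on a real node you would target /etc/fstab itself:

```shell
# fstab.sample stands in for /etc/fstab in this illustration
cat > fstab.sample <<'EOF'
/dev/mapper/rhel-root /     xfs  defaults 0 0
/dev/mapper/rhel-swap swap  swap defaults 0 0
EOF
# Prefix any uncommented line with a "swap" field with '#'
sed -ri 's|^([^#].*[[:space:]]swap[[:space:]].*)|#\1|' fstab.sample
grep swap fstab.sample
```

Combined with `swapoff -a`, this keeps swap off across reboots, which kubelet requires.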

2. Deploying the k8s cluster

  • Allow iptables to see bridged traffic
cd /etc/sysctl.d/
scp docker.conf server2:/etc/sysctl.d/
scp docker.conf server3:/etc/sysctl.d/
sysctl --system  run on the other two hosts as well to apply the settings
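The transcript copies /etc/sysctl.d/docker.conf to the other nodes but never shows its contents. The usual settings for this step (an assumption; the original file is not shown) are:

```
# /etc/sysctl.d/docker.conf -- let iptables see bridged traffic
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
```

The br_netfilter kernel module must be loaded (`modprobe br_netfilter`) for the bridge keys to exist.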
  • Configure the Docker daemon; in particular, use systemd to manage the containers' cgroups. Change the cgroup driver to systemd on all nodes:
[root@server1 ~]# cat <<EOF | sudo tee /etc/docker/daemon.json
> {
>   "exec-opts": ["native.cgroupdriver=systemd"],
>   "log-driver": "json-file",
>   "log-opts": {
>     "max-size": "100m"
>   },
>   "storage-driver": "overlay2"
> }
> EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
[root@server1 ~]# cd /etc/docker/
[root@server1 docker]# ls
daemon.json  key.json
[root@server1 docker]# systemctl restart docker.service 
[root@server1 docker]# docker info 
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.9.1-docker)
  scan: Docker Scan (Docker Inc., v0.23.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 0
 Server Version: 20.10.22
 Storage Driver: overlay2
  Backing Filesystem: xfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
[root@server1 docker]# scp daemon.json server2:/etc/docker/
[root@server1 docker]# scp daemon.json server3:/etc/docker/
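After copying daemon.json, Docker must be restarted on server2 and server3 as well. Since a malformed daemon.json prevents Docker from starting at all, a quick syntax check before restarting is cheap. A sketch using python3's stdlib json.tool as the validator (run here against a local copy of the same file):

```shell
# Write the same daemon.json used above, then verify it is well-formed JSON
cat > daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
python3 -m json.tool daemon.json >/dev/null && echo "daemon.json: valid JSON"
```

On the nodes, follow this with `systemctl restart docker` and confirm `Cgroup Driver: systemd` in `docker info`.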
  • Configure the k8s repository from the Aliyun mirror and install kubeadm, kubelet, and kubectl

On the master:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@server1 docker]# cd /etc/yum.repos.d/
[root@server1 yum.repos.d]# ls
CentOS-Base.repo  docker-ce.repo  dvd.repo  kubernetes.repo  redhat.repo
[root@server1 yum.repos.d]# vim /etc/yum.repos.d/kubernetes.repo  edit it to set gpgcheck=1
[root@server1 ~]# yum list kubeadm
Loaded plugins: product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Available Packages
kubeadm.x86_64 
[root@server1 ~]# yum install  kubeadm-1.23.7-0 kubectl-1.23.7-0 kubelet-1.23.7-0

Install kubeadm, kubelet, and kubectl on the other two VMs:

[root@server1 ~]# cd /etc/yum.repos.d/
[root@server1 yum.repos.d]# ls
CentOS-Base.repo  docker-ce.repo  dvd.repo  kubernetes.repo  redhat.repo
[root@server1 yum.repos.d]# scp kubernetes.repo server2:/etc/yum.repos.d/
root@server2's password: 
kubernetes.repo                                                                                                 100%  129   209.2KB/s   00:00    
[root@server1 yum.repos.d]# scp kubernetes.repo server3:/etc/yum.repos.d/
root@server3's password: 
kubernetes.repo        
[root@server1 yum.repos.d]# systemctl enable --now kubelet      enable it at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@server2 ~]# yum install  kubeadm-1.23.7-0 kubectl-1.23.7-0 kubelet-1.23.7-0
[root@server2 ~]# systemctl enable --now kubelet   enable it at boot
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

On the master, list and pull the required images:

[root@server1 ~]#  kubeadm config images list --image-repository registry.aliyuncs.com/google_containers   list the required images from the Aliyun registry
I0120 04:55:38.923762   15286 version.go:255] remote version is much newer: v1.26.1; falling back to: stable-1.23
registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.16
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.16
registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.16
registry.aliyuncs.com/google_containers/kube-proxy:v1.23.16
registry.aliyuncs.com/google_containers/pause:3.6
registry.aliyuncs.com/google_containers/etcd:3.5.1-0
registry.aliyuncs.com/google_containers/coredns:v1.8.6

[root@server1 ~]#  kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers    pull the images before initializing, so initialization is faster
I0120 04:56:03.569116   15324 version.go:255] remote version is much newer: v1.26.1; falling back to: stable-1.23
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.16
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.16
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.16
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.16
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
  • Initialize the cluster (note that the pull above fell back to v1.23.16; you can add --kubernetes-version v1.23.7 if the control plane should match the installed 1.23.7 packages)
[root@server1 ~]#  kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository registry.aliyuncs.com/google_containers

 /etc/kubernetes/admin.conf is the credential file used to authenticate against the apiserver; root can read it directly, so as root you can simply export it as a variable:

[root@server1 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
[root@server1 ~]# kubectl get pod -A   list every pod in the cluster
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d8c4cb4d-7v88n           0/1     Pending   0          4m12s
kube-system   coredns-6d8c4cb4d-xjbvc           0/1     Pending   0          4m12s
kube-system   etcd-server1                      1/1     Running   0          4m26s
kube-system   kube-apiserver-server1            1/1     Running   0          4m25s
kube-system   kube-controller-manager-server1   1/1     Running   0          4m25s
kube-system   kube-proxy-phsct                  1/1     Running   0          4m13s
kube-system   kube-scheduler-server1            1/1     Running   0          4m25s
[root@server1 ~]# kubectl get node    list the nodes
NAME      STATUS     ROLES                  AGE   VERSION
server1   NotReady   control-plane,master   10m   v1.23.7
[root@server1 ~]# kubectl get cs   show the health of the cluster components
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""} 

Because the variable export KUBECONFIG=/etc/kubernetes/admin.conf would have to be set again after every reboot, persist it:

[root@server1 ~]# vim .bash_profile
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/bin

export PATH
export KUBECONFIG=/etc/kubernetes/admin.conf
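The same persistence can be scripted instead of editing the file by hand. A small sketch that appends the export line only if it is not already there, shown against a sample file (bash_profile.sample) so it is safe to run anywhere; on the real host substitute "$HOME/.bash_profile":

```shell
PROFILE=bash_profile.sample            # stand-in for "$HOME/.bash_profile"
LINE='export KUBECONFIG=/etc/kubernetes/admin.conf'
touch "$PROFILE"
# -x matches the whole line, -F treats it as a fixed string;
# append only when the line is missing
grep -qxF "$LINE" "$PROFILE" || echo "$LINE" >> "$PROFILE"
```

Re-running it leaves the file unchanged, unlike a blind `echo >>`, which would add a duplicate line on every run.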

Enable kubectl command completion:

[root@server1 ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@server1 ~]# source .bashrc  reload it into the current shell
  • Install the flannel network add-on:
[root@server1 ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2023-01-20 11:08:09--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.110.133, 185.199.111.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4461 (4.4K) [text/plain]
Saving to: ‘kube-flannel.yml’

100%[========================================================================================================>] 4,461       --.-K/s   in 0s      

2023-01-20 11:08:26 (86.9 MB/s) - ‘kube-flannel.yml’ saved [4461/4461]
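Before applying the manifest, it is worth checking that the Network value inside kube-flannel.yml matches the --pod-network-cidr passed to kubeadm init. The relevant excerpt of the manifest looks like this (10.244.0.0/16 is flannel's default, which is why that CIDR was chosen above):

```
net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }
```

If the two CIDRs disagree, pods will come up but cross-node pod networking will not work.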

Pull the flannel images:

[root@server1 ~]# docker pull docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
v0.20.2: Pulling from rancher/mirrored-flannelcni-flannel
ca7dd9ec2225: Pull complete 
600bfc1b6c94: Pull complete 
1f4c89158c48: Pull complete 
410907778277: Pull complete 
35b4b51c7514: Pull complete 
2fb311c974da: Pull complete 
9d45fc9177ff: Pull complete 
Digest: sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832
Status: Downloaded newer image for rancher/mirrored-flannelcni-flannel:v0.20.2
docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
[root@server1 ~]# docker pull docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
v1.1.2: Pulling from rancher/mirrored-flannelcni-flannel-cni-plugin
72cfd02ff4d0: Pull complete 
d72dedc258c5: Pull complete 
Digest: sha256:1fb99bc37bbaa28710dcfd386392149b57d44e05b8d2a72ca1da9f5e423bc598
Status: Downloaded newer image for rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2
docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.2

Apply the flannel manifest and check whether the node becomes ready:

[root@server1 ~]# kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@server1 ~]# kubectl get pod -n kube-system 
NAME                              READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-7v88n           1/1     Running   0          7h48m
coredns-6d8c4cb4d-xjbvc           1/1     Running   0          7h48m
etcd-server1                      1/1     Running   0          7h48m
kube-apiserver-server1            1/1     Running   0          7h48m
kube-controller-manager-server1   1/1     Running   0          7h48m
kube-proxy-phsct                  1/1     Running   0          7h48m
kube-scheduler-server1            1/1     Running   0          7h48m
[root@server1 ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
server1   Ready    control-plane,master   7h48m   v1.23.7

Run the join command on the worker nodes:

[root@server2 ~]# kubeadm join 172.25.77.1:6443 --token 00h8ug.yqm2em8i1wjo7fji \
> --discovery-token-ca-cert-hash sha256:35df36751fdc761cf62dc3be8d6d9882010c62c974a6ca2680310c66204e61ae
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.0. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@server2 ~]# docker images
REPOSITORY                                           TAG        IMAGE ID       CREATED         SIZE
registry.aliyuncs.com/google_containers/kube-proxy   v1.23.16   28204678d22a   2 weeks ago     111MB
flannel/flannel-cni-plugin                           v1.1.2     7a2dcab94698   2 months ago    7.97MB
flannel/flannel                                      v0.20.2    b5c6c9203f83   2 months ago    59.6MB
registry.aliyuncs.com/google_containers/pause        3.6        6270bb605e12   17 months ago   683kB
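The --discovery-token-ca-cert-hash used in the join command is simply a SHA-256 digest of the cluster CA's public key. A sketch of how it is computed, run here against a throwaway self-signed certificate so the pipeline works anywhere; on a real control plane the input would be /etc/kubernetes/pki/ca.crt. (If the token has expired, they live 24 hours by default, `kubeadm token create --print-join-command` on the master prints a fresh join command.)

```shell
# Generate a stand-in CA cert (on a real cluster, use /etc/kubernetes/pki/ca.crt)
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
        -subj "/CN=kubernetes" -days 1 2>/dev/null
# Hash the DER-encoded public key -- this is the value kubeadm join verifies
openssl x509 -pubkey -noout -in ca.crt \
  | openssl pkey -pubin -outform der 2>/dev/null \
  | sha256sum | awk '{print "sha256:" $1}'
```

The printed value has the same shape as the sha256:... argument in the transcript above.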

On the master, check whether the nodes are ready:

[root@server1 ~]# kubectl get node
NAME      STATUS   ROLES                  AGE     VERSION
server1   Ready    control-plane,master   19m     v1.23.7
server2   Ready    <none>                 7m19s   v1.23.7
server3   Ready    <none>                 6m34s   v1.23.7
[root@server1 ~]# kubectl get pod -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP            NODE      NOMINATED NODE   READINESS GATES
coredns-6d8c4cb4d-c4sbk           1/1     Running   0          21m     10.244.0.2    server1   <none>           <none>
coredns-6d8c4cb4d-ltwdn           1/1     Running   0          21m     10.244.0.3    server1   <none>           <none>
etcd-server1                      1/1     Running   0          21m     172.25.77.1   server1   <none>           <none>
kube-apiserver-server1            1/1     Running   0          21m     172.25.77.1   server1   <none>           <none>
kube-controller-manager-server1   1/1     Running   0          21m     172.25.77.1   server1   <none>           <none>
kube-proxy-4rkvb                  1/1     Running   0          21m     172.25.77.1   server1   <none>           <none>
kube-proxy-vk62s                  1/1     Running   0          8m14s   172.25.77.3   server3   <none>           <none>
kube-proxy-vm6v2                  1/1     Running   0          8m59s   172.25.77.2   server2   <none>           <none>
kube-scheduler-server1            1/1     Running   0          21m     172.25.77.1   server1   <none>           <none>
[root@server1 ~]# kubectl get pod -n kube-system 
NAME                              READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-c4sbk           1/1     Running   0          21m
coredns-6d8c4cb4d-ltwdn           1/1     Running   0          21m
etcd-server1                      1/1     Running   0          22m
kube-apiserver-server1            1/1     Running   0          22m
kube-controller-manager-server1   1/1     Running   0          22m
kube-proxy-4rkvb                  1/1     Running   0          21m
kube-proxy-vk62s                  1/1     Running   0          8m42s
kube-proxy-vm6v2                  1/1     Running   0          9m27s
kube-scheduler-server1            1/1     Running   0          22m

3. Connecting the cluster to an image registry
