Deploying Kubernetes v1.28.0 on Ubuntu with containerd
Software versions
ubuntu 22.04 LTS
containerd 1.7.3
kubernetes v1.28.0
1. Basic configuration
1.1 Configure hosts
192.168.2.66 ops-test-02    # Master
192.168.2.67 ops-test-03    # Node
192.168.2.68 ops-test-04    # Node
Disable swap:
swapoff -a
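Note that swapoff -a only disables swap for the current boot. To keep swap off after a reboot, also comment out the swap entry in /etc/fstab; a minimal sketch, assuming a standard fstab layout:
# Comment out any uncommented swap entries so swap stays disabled after reboot
sed -i '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' /etc/fstab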
1.2 containerd prerequisites
Create the /etc/modules-load.d/containerd.conf file so that the required kernel modules are loaded automatically at boot:
cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
Load the modules immediately:
modprobe overlay
modprobe br_netfilter
Create the /etc/sysctl.d/99-kubernetes-cri.conf configuration file:
cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF
Apply the settings:
sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
Note:
In the file name /etc/sysctl.d/99-kubernetes-cri.conf, the "99" prefix determines the load order. sysctl is the tool for configuring Linux kernel parameters; it works by writing to files under /proc/sys/. Configuration files placed in /etc/sysctl.d/ are loaded automatically at boot, in alphabetical order of their file names, so the numeric prefix controls when a file is applied relative to the others (lower numbers are processed first).
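If you want to reload every file under /etc/sysctl.d/ (plus /etc/sysctl.conf) in one step instead of a single file, sysctl provides a flag for that:
sysctl --system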
1.3 Prerequisites for enabling IPVS
Since IPVS is part of the mainline kernel, enabling IPVS mode for kube-proxy only requires loading the following kernel modules:
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack (on kernels older than 4.19 the module is named nf_conntrack_ipv4)
Install ipset and ipvsadm:
apt install -y ipset ipvsadm
Create /etc/modules-load.d/ipvs.conf so the modules are loaded automatically after a node reboot:
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF
Load the modules immediately:
modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
Run lsmod | grep -e ip_vs -e nf_conntrack to confirm the required modules are loaded:
root@ops-test-02:~# lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 212992 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 204800 1 ip_vs
nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 5 nf_conntrack,btrfs,xfs,raid456,ip_vs
2. Deploying the container runtime containerd
Install containerd on every node:
wget https://github.com/containerd/containerd/releases/download/v1.7.3/containerd-1.7.3-linux-amd64.tar.gz
Extract it into /usr/local:
tar Cxzvf /usr/local containerd-1.7.3-linux-amd64.tar.gz
root@ops-test-02:/apps# tar Cxzvf /usr/local containerd-1.7.3-linux-amd64.tar.gz
bin/
bin/containerd-shim-runc-v2
bin/containerd-stress
bin/containerd-shim-runc-v1
bin/containerd-shim
bin/containerd
bin/ctr
Download and install runc:
wget https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc
Generate containerd's default configuration file:
mkdir -p /etc/containerd && containerd config default > /etc/containerd/config.toml
Set containerd's cgroup driver to systemd on every node by editing /etc/containerd/config.toml:
vim /etc/containerd/config.toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
...
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
SystemdCgroup = true
Also change the sandbox (pause) image in /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri"]
...
# sandbox_image = "registry.k8s.io/pause:3.8"
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
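The two changes above can also be applied non-interactively; a rough sketch using sed against the default config.toml generated earlier:
# Switch runc to the systemd cgroup driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Replace the sandbox (pause) image with the Alibaba Cloud mirror
sed -i 's#registry.k8s.io/pause:3.8#registry.aliyuncs.com/google_containers/pause:3.9#' /etc/containerd/config.toml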
Configure systemd to manage containerd.
Download the reference containerd.service unit file:
wget https://raw.githubusercontent.com/containerd/containerd/main/containerd.service
and place it at /etc/systemd/system/containerd.service:
cat << EOF > /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target
EOF
Enable containerd at boot and start it:
systemctl daemon-reload
systemctl enable containerd --now
systemctl status containerd
Download and install the crictl tool:
wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.28.0/crictl-v1.28.0-linux-amd64.tar.gz
tar -zxvf crictl-v1.28.0-linux-amd64.tar.gz
install -m 755 crictl /usr/local/bin/crictl
Test crictl and make sure it prints version information without any errors:
crictl --runtime-endpoint=unix:///run/containerd/containerd.sock version
root@ops-test-02:/apps# crictl --runtime-endpoint=unix:///run/containerd/containerd.sock version
Version: 0.1.0
RuntimeName: containerd
RuntimeVersion: v1.7.3
RuntimeApiVersion: v1
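To avoid passing --runtime-endpoint on every call, the endpoint can be stored in crictl's configuration file (a small convenience sketch; /etc/crictl.yaml is crictl's default config location):
cat << EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF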
3. Deploying Kubernetes
3.1 Install kubeadm and kubelet
Install kubeadm and kubelet on every node:
# Update the package list
apt-get update
# Install packages needed for HTTPS transport
apt-get install -y apt-transport-https ca-certificates curl
Add the Kubernetes repository key:
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Add the Kubernetes apt repository to the source list
tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
Update the package list again:
apt-get update
# Install kubelet, kubeadm and kubectl
apt install -y kubelet kubeadm kubectl
# If apt cannot fetch data over HTTPS, add the GNUTLS_CPUID_OVERRIDE environment variable
Add GNUTLS_CPUID_OVERRIDE to the /etc/environment file:
echo 'export GNUTLS_CPUID_OVERRIDE=0x1' | sudo tee -a /etc/environment
Log out and log back in, then rerun the package list update and install kubelet, kubeadm and kubectl again; the installation should now succeed.
# Prevent these packages from being upgraded automatically
apt-mark hold kubelet kubeadm kubectl
Adjust the swappiness parameter by adding the following line to /etc/sysctl.d/99-kubernetes-cri.conf:
vm.swappiness=0
Apply the change:
sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf
3.2 Initialize the cluster with kubeadm init
Enable the kubelet service at boot on every node:
systemctl enable kubelet.service
Print the default configuration used for cluster initialization:
kubeadm config print init-defaults --component-configs KubeletConfiguration
root@ops-test-02:~# kubeadm config print init-defaults --component-configs KubeletConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
Based on these defaults, the customized kubeadm.yaml used to initialize this cluster is:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.2.66
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
failSwapOn: false
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
1) imageRepository is set to the Alibaba Cloud registry so the images can be pulled even when gcr is unreachable.
2) criSocket points the container runtime at containerd.
3) The kubelet cgroupDriver is set to systemd and the kube-proxy mode to ipvs.
Before initializing the cluster, the required images can be pre-pulled on every node with kubeadm config images pull --config kubeadm.yaml.
root@ops-test-02:/apps# kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1
Initialize the cluster, using 192.168.2.66 as the master:
#kubeadm init --config kubeadm.yaml
root@ops-test-02:/apps# kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ops-test-02] and IPs [10.96.0.1 192.168.2.66]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ops-test-02] and IPs [192.168.2.66 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ops-test-02] and IPs [192.168.2.66 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.001410 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ops-test-02 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node ops-test-02 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: zqgy73.8i3j79q5v29wd55l
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.2.66:6443 --token zqgy73.8i3j79q5v29wd55l \
--discovery-token-ca-cert-hash sha256:c9291f9bc916de50705cd884a758c3866ceab979851c5629d2d557e36dd6e490
The output above shows the key steps of initializing a Kubernetes cluster:
[certs] generates the various certificates
[kubeconfig] generates the kubeconfig files
[kubelet-start] writes the kubelet configuration to "/var/lib/kubelet/config.yaml"
[control-plane] creates static pods for the apiserver, controller-manager and scheduler from the manifests in /etc/kubernetes/manifests
[bootstrap-token] generates the token; record it, it is needed later when adding nodes with kubeadm join
[addons] installs the essential add-ons CoreDNS and kube-proxy
The following commands configure kubectl access to the cluster for a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
And the command for nodes to join the cluster:
kubeadm join 192.168.2.66:6443 --token zqgy73.8i3j79q5v29wd55l \
--discovery-token-ca-cert-hash sha256:c9291f9bc916de50705cd884a758c3866ceab979851c5629d2d557e36dd6e490
Check the cluster status and confirm that every component is healthy.
Configure kubectl:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#kubectl get cs
root@ops-test-02:/apps# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy ok
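Since kube-proxy was configured with mode: ipvs, that can also be spot-checked on the master; a rough verification (exact log wording may vary by version):
# kube-proxy should report an ipvs proxier, and ipvsadm should list virtual servers for the service addresses
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs
ipvsadm -Ln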
If cluster initialization runs into problems, kubeadm reset can be used to clean up and start over.
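A typical cleanup before retrying looks roughly like this (adjust paths to your CNI; kubeadm reset does not flush iptables/IPVS rules by itself):
kubeadm reset -f
rm -rf /etc/cni/net.d $HOME/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear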
4. Installing the package manager Helm 3
Helm is the package manager for Kubernetes:
wget https://get.helm.sh/helm-v3.12.3-linux-amd64.tar.gz
tar -zxvf helm-v3.12.3-linux-amd64.tar.gz
install -m 755 linux-amd64/helm /usr/local/bin/helm
Run helm list and confirm there is no error output:
root@ops-test-02:/apps# helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
5. Deploying the pod network add-on Calico
Download the tigera-operator helm chart:
wget https://github.com/projectcalico/calico/releases/download/v3.26.1/tigera-operator-v3.26.1.tgz
Inspect the chart's configurable values:
helm show values tigera-operator-v3.26.1.tgz
imagePullSecrets: {}
installation:
  enabled: true
  kubernetesProvider: ''
apiServer:
  enabled: true
certs:
  node:
    key:
    cert:
    commonName:
  typha:
    key:
    cert:
    commonName:
    caBundle:
# Resource requests and limits for the tigera/operator pod.
resources: {}
# Tolerations for the tigera/operator pod.
tolerations:
- effect: NoExecute
  operator: Exists
- effect: NoSchedule
  operator: Exists
# NodeSelector for the tigera/operator pod.
nodeSelector:
  kubernetes.io/os: linux
# Custom annotations for the tigera/operator pod.
podAnnotations: {}
# Custom labels for the tigera/operator pod.
podLabels: {}
# Image and registry configuration for the tigera/operator pod.
tigeraOperator:
  image: tigera/operator
  version: v1.30.4
  registry: quay.io
calicoctl:
  image: docker.io/calico/ctl
  tag: v3.26.1
The customized values.yaml is as follows:
# The values above can be customized further, e.g. pulling the calico images from a private registry.
apiServer:
  enabled: false
installation:
  kubeletVolumePluginPath: None
Install Calico with helm:
helm install calico tigera-operator-v3.26.1.tgz -n kube-system --create-namespace -f values.yaml
Wait until all pods are in the Running state.
Alternatively, Calico can be installed directly from the manifest:
kubectl apply -f https://docs.tigera.io/archive/v3.24/manifests/calico.yaml
root@ops-test-02:/apps# kubectl get pod -n kube-system | grep tigera-operator
tigera-operator-94d7f7696-rqdqf 1/1 Running 0 26s
root@ops-test-02:/apps# kubectl get pods -n calico-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-7bf865655f-xt97p 1/1 Running 0 10m
calico-node-pd4jk 1/1 Running 0 10m
calico-typha-6895676874-d6p9d 1/1 Running 0 10m
csi-node-driver-vxp62 2/2 Running 0 10m
Check the API resources that Calico adds to the cluster:
kubectl api-resources | grep calico
root@ops-test-02:/apps# kubectl api-resources | grep calico
bgpconfigurations crd.projectcalico.org/v1 false BGPConfiguration
bgpfilters crd.projectcalico.org/v1 false BGPFilter
bgppeers crd.projectcalico.org/v1 false BGPPeer
blockaffinities crd.projectcalico.org/v1 false BlockAffinity
caliconodestatuses crd.projectcalico.org/v1 false CalicoNodeStatus
clusterinformations crd.projectcalico.org/v1 false ClusterInformation
felixconfigurations crd.projectcalico.org/v1 false FelixConfiguration
globalnetworkpolicies crd.projectcalico.org/v1 false GlobalNetworkPolicy
globalnetworksets crd.projectcalico.org/v1 false GlobalNetworkSet
hostendpoints crd.projectcalico.org/v1 false HostEndpoint
ipamblocks crd.projectcalico.org/v1 false IPAMBlock
ipamconfigs crd.projectcalico.org/v1 false IPAMConfig
ipamhandles crd.projectcalico.org/v1 false IPAMHandle
ippools crd.projectcalico.org/v1 false IPPool
ipreservations crd.projectcalico.org/v1 false IPReservation
kubecontrollersconfigurations crd.projectcalico.org/v1 false KubeControllersConfiguration
networkpolicies crd.projectcalico.org/v1 true NetworkPolicy
networksets crd.projectcalico.org/v1 true NetworkSet
bgpconfigurations bgpconfig,bgpconfigs projectcalico.org/v3 false BGPConfiguration
bgpfilters projectcalico.org/v3 false BGPFilter
bgppeers projectcalico.org/v3 false BGPPeer
blockaffinities blockaffinity,affinity,affinities projectcalico.org/v3 false BlockAffinity
caliconodestatuses caliconodestatus projectcalico.org/v3 false CalicoNodeStatus
clusterinformations clusterinfo projectcalico.org/v3 false ClusterInformation
felixconfigurations felixconfig,felixconfigs projectcalico.org/v3 false FelixConfiguration
globalnetworkpolicies gnp,cgnp,calicoglobalnetworkpolicies projectcalico.org/v3 false GlobalNetworkPolicy
globalnetworksets projectcalico.org/v3 false GlobalNetworkSet
hostendpoints hep,heps projectcalico.org/v3 false HostEndpoint
ipamconfigurations ipamconfig projectcalico.org/v3 false IPAMConfiguration
ippools projectcalico.org/v3 false IPPool
ipreservations projectcalico.org/v3 false IPReservation
kubecontrollersconfigurations projectcalico.org/v3 false KubeControllersConfiguration
networkpolicies cnp,caliconetworkpolicy,caliconetworkpolicies projectcalico.org/v3 true NetworkPolicy
networksets netsets projectcalico.org/v3 true NetworkSet
profiles projectcalico.org/v3 false Profile
These API resources belong to Calico, so managing them with kubectl is not recommended; install calicoctl and use it instead. Install calicoctl as a kubectl plugin:
cd /usr/local/bin
curl -o kubectl-calico -O -L "https://github.com/projectcalico/calicoctl/releases/download/v3.21.5/calicoctl-linux-amd64"
chmod +x kubectl-calico
Verify that the plugin works:
kubectl calico -h
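As a quick sanity check, the plugin can be used to look at Calico's IP pools, for example:
kubectl calico get ippools -o wide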
6. Verifying that cluster DNS works
kubectl run curl --image=radial/busyboxplus:curl -it
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$
Inside the pod, run nslookup kubernetes.default and confirm that it resolves:
root@ops-test-02:/apps# kubectl run curl --image=radial/busyboxplus:curl -it
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$ nslookup kubernetes.default
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
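The test pod keeps running after you exit the shell; once the check passes it can be removed:
kubectl delete pod curl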
7. Adding worker nodes to the cluster
Run the following command on each worker node:
kubeadm join 192.168.2.66:6443 --token zqgy73.8i3j79q5v29wd55l \
--discovery-token-ca-cert-hash sha256:c9291f9bc916de50705cd884a758c3866ceab979851c5629d2d557e36dd6e490
root@ops-test-03:/apps# kubeadm join 192.168.2.66:6443 --token zqgy73.8i3j79q5v29wd55l \
--discovery-token-ca-cert-hash sha256:c9291f9bc916de50705cd884a758c3866ceab979851c5629d2d557e36dd6e490
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
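The bootstrap token printed by kubeadm init is valid for 24 hours by default. If it has expired by the time a node joins, a fresh join command can be generated on the master:
kubeadm token create --print-join-command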
Check that the Calico pods on the new nodes start correctly:
kubectl get po -n calico-system -o wide
root@ops-test-02:/apps# kubectl get po -n calico-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-7bf865655f-xt97p 1/1 Running 0 21m 10.244.51.66 ops-test-02 <none> <none>
calico-node-pd4jk 1/1 Running 0 21m 192.168.2.66 ops-test-02 <none> <none>
calico-node-pj462 1/1 Running 0 6m7s 192.168.2.67 ops-test-03 <none> <none>
calico-node-s2zlp 1/1 Running 0 5m22s 192.168.2.68 ops-test-04 <none> <none>
calico-typha-6895676874-748p5 1/1 Running 0 5m18s 192.168.2.68 ops-test-04 <none> <none>
calico-typha-6895676874-d6p9d 1/1 Running 0 21m 192.168.2.66 ops-test-02 <none> <none>
csi-node-driver-4fffd 2/2 Running 0 5m22s 10.244.244.1 ops-test-04 <none> <none>
csi-node-driver-qfg22 2/2 Running 0 6m7s 10.244.49.65 ops-test-03 <none> <none>
csi-node-driver-vxp62 2/2 Running 0 21m 10.244.51.67 ops-test-02 <none> <none>
# Check node status
On the master, list the cluster nodes (wait until the calico-node pod on each new node is running):
root@ops-test-02:/apps# kubectl get nodes
NAME STATUS ROLES AGE VERSION
ops-test-02 Ready control-plane 60m v1.28.0
ops-test-03 Ready <none> 5m1s v1.28.0
ops-test-04 Ready <none> 4m16s v1.28.0
8. Deploying common Kubernetes components
8.1 Deploy ingress-nginx with Helm
Use Helm to deploy ingress-nginx onto the cluster, running the Nginx Ingress Controller on an edge node of the cluster.
Use 192.168.2.68 (ops-test-04) as the edge node and label it:
kubectl label node ops-test-04 node-role.kubernetes.io/edge=
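Confirm that the label has been applied:
kubectl get node -l node-role.kubernetes.io/edge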
Download the ingress-nginx helm chart:
wget https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.7.0/ingress-nginx-4.7.0.tgz
Inspect the configurable values of the ingress-nginx-4.7.0.tgz chart:
helm show values ingress-nginx-4.7.0.tgz
Customize values.yaml as follows:
controller:
  ingressClassResource:
    name: nginx
    enabled: true
    default: true
    controllerValue: "k8s.io/ingress-nginx"
  admissionWebhooks:
    enabled: false
  replicaCount: 1
  image:
    # registry: registry.k8s.io
    # image: ingress-nginx/controller
    # tag: "v1.8.0"
    registry: docker.io
    image: unreachableg/registry.k8s.io_ingress-nginx_controller
    tag: "v1.8.0"
    digest: sha256:626fc8847e967dc06049c0eda9e093d77a08feff80179ae97538ba8b118570f3
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx-ingress
          - key: component
            operator: In
            values:
            - controller
        topologyKey: kubernetes.io/hostname
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
1) The nginx ingress controller replicaCount is 1, and the pod will be scheduled onto the edge node.
2) No externalIPs are set for the controller service; instead hostNetwork: true makes the controller use the host network.
3) The image is replaced with unreachableg/registry.k8s.io_ingress-nginx_controller, so pull it in advance:
# Pull the image in advance
crictl --runtime-endpoint=unix:///run/containerd/containerd.sock pull unreachableg/registry.k8s.io_ingress-nginx_controller:v1.8.0
# Install the chart
helm install ingress-nginx ingress-nginx-4.7.0.tgz --create-namespace -n ingress-nginx -f values.yaml
# Check pod status
kubectl get po -n ingress-nginx
root@ops-test-02:/apps/ingress-nginx# kubectl get po -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-controller-86878885cd-g7fgv 1/1 Running 0 110s
# Test
http://192.168.2.68
If the default nginx 404 page is returned, the deployment is complete.
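The same check can be done from the command line; the 404 is served by the controller's default backend:
curl -i http://192.168.2.68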
8.2 Deploy metrics-server
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.4/components.yaml
In components.yaml, change the image to docker.io/unreachableg/k8s.gcr.io_metrics-server_metrics-server:v0.6.4.
Also add --kubelet-insecure-tls to the container's startup arguments in components.yaml.
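After these edits, the relevant part of the metrics-server Deployment in components.yaml should look roughly like this (a sketch; the other arguments from the upstream manifest are left unchanged):
containers:
- name: metrics-server
  image: docker.io/unreachableg/k8s.gcr.io_metrics-server_metrics-server:v0.6.4
  args:
  # ... existing arguments from the upstream manifest ...
  - --kubelet-insecure-tls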
kubectl apply -f components.yaml
root@ops-test-02:/apps# kubectl apply -f components.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
Once the metrics-server pod is running, after a short wait kubectl top can show metrics for nodes and pods:
kubectl top node
root@ops-test-02:/apps# kubectl top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
ops-test-02 95m 2% 2120Mi 27%
ops-test-03 32m 0% 1257Mi 16%
ops-test-04 33m 0% 1259Mi 16%
kubectl top pod -n kube-system
root@ops-test-02:/apps# kubectl top pod -n kube-system
NAME CPU(cores) MEMORY(bytes)
coredns-66f779496c-cjwxx 1m 12Mi
coredns-66f779496c-fsd56 1m 12Mi
etcd-ops-test-02 11m 48Mi
kube-apiserver-ops-test-02 27m 372Mi
kube-controller-manager-ops-test-02 6m 52Mi
kube-proxy-449zx 4m 17Mi
kube-proxy-9k7xp 6m 17Mi
kube-proxy-ffd2q 5m 18Mi
kube-scheduler-ops-test-02 2m 18Mi
metrics-server-7d686f4d9d-8msrv 2m 16Mi
tigera-operator-94d7f7696-rqdqf 2m 28Mi
8.3 Deploy the Kubernetes dashboard
Note:
Starting with dashboard v3 the underlying architecture has changed and a clean installation is required; if you are upgrading an existing dashboard, remove the previous installation first.
Dashboard v3 relies on cert-manager and nginx-ingress-controller by default. If you install from the YAML manifest, make sure both are already present in the cluster.
Install cert-manager:
wget https://github.com/cert-manager/cert-manager/releases/download/v1.12.3/cert-manager.yaml
kubectl apply -f cert-manager.yaml
Make sure all cert-manager pods start correctly:
kubectl get po -n cert-manager
root@ops-test-02:/apps# kubectl get po -n cert-manager
NAME READY STATUS RESTARTS AGE
cert-manager-6774cd657f-6k7xk 1/1 Running 0 3m45s
cert-manager-cainjector-55c8b7b49b-r6hdd 1/1 Running 0 3m45s
cert-manager-webhook-57797c469d-rrvwl 1/1 Running 0 3m45s
Download the dashboard YAML manifest:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v3.0.0-alpha0/charts/kubernetes-dashboard.yaml
Edit kubernetes-dashboard.yaml and replace the host in the Ingress with your own domain:
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/issuer: selfsigned
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - localhost
    secretName: kubernetes-dashboard-certs
  rules:
  - host: dashboard.csctbb.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard-web
            port:
              name: web
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard-api
            port:
              name: api
Install:
kubectl apply -f kubernetes-dashboard.yaml
Confirm that the dashboard pods start correctly:
kubectl get po -n kubernetes-dashboard
root@ops-test-02:/apps# kubectl get po -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-api-8586787f7-fkt9g 1/1 Running 0 73s
kubernetes-dashboard-metrics-scraper-6959b784dc-j67hs 1/1 Running 0 73s
kubernetes-dashboard-web-6b6d549b4-ndcjg 1/1 Running 0 73s
root@ops-test-02:/apps# kubectl get ingress -n kubernetes-dashboard
NAME CLASS HOSTS ADDRESS PORTS AGE
kubernetes-dashboard nginx dashboard.csctbb.com 80, 443 103s
Create an admin ServiceAccount:
kubectl create serviceaccount kube-dashboard-admin-sa -n kube-system
kubectl create clusterrolebinding kube-dashboard-admin-sa \
  --clusterrole=cluster-admin --serviceaccount=kube-system:kube-dashboard-admin-sa
Create the token the cluster admin will use to log in to the dashboard:
kubectl create token kube-dashboard-admin-sa -n kube-system --duration=87600h
root@ops-test-02:/apps# kubectl create token kube-dashboard-admin-sa -n kube-system --duration=87600h
eyJhbGciOiJSUzI1NiIsImtpZCI6IllEaUw3WDJ0VllYeUp5V3VydWtfMW9YcTF5TjZlY2diUlBYbUl2eVU2WjAifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoyMDA4MzAyMjk4LCJpYXQiOjE2OTI5NDIyOTgsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJrdWJlLWRhc2hib2FyZC1hZG1pbi1zYSIsInVpZCI6IjhiZTAzODEwLTJmYjgtNGZiYy04MGExLWJkZjk0ZWY4NGRmMiJ9fSwibmJmIjoxNjkyOTQyMjk4LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZS1kYXNoYm9hcmQtYWRtaW4tc2EifQ.eOJ0YAJhyPynTUIwl8dBz515Y8xvUD0VDOPV-fzCmwul7bLuUbFoeJTc4bs1fyhV8AzLj-8OPkwKvVHv5rD0tX_M7a5tQNmQDKJ0iTWRN8QON6ZJoeg-rP_4-hpQYepwDFCP08HTI_ysuCi3ahcY2tvwYGjSLRpmyJZzanCtKsFIQbshNUihOEe6EQOh1axTw0_Dq-5kGDVZ9B86My3sXrEQ4yaMZmt3xdm5MwqDAx-t4XUUCo54NOPMUl3xWcwveapAVryQXCv3V9kRb9d_5WWVlDK_Ky875zS5PYCPOti07CJipr4Jmq6v8-3pOVhsDCHlPAGiD9IoarRxlXnNCA
Use the token above to log in to the Kubernetes dashboard.
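Because dashboard.csctbb.com is served by the ingress controller on the edge node, the name must resolve to that node's address. On the machine used for browsing, a hosts entry such as the following works (example only, substitute your own domain and IP):
echo "192.168.2.68 dashboard.csctbb.com" >> /etc/hosts
Then open https://dashboard.csctbb.com and paste the token.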