Installing K8s on CentOS 7 (hands-on edition)
Environment preparation
3 CentOS machines:
172.48.22.70 172.48.22.71 172.48.22.72
Name the hosts k8s-master, k8s-node1, and k8s-node2 respectively
# set each machine's own hostname
hostnamectl set-hostname xxx
Set up hostname resolution:
Make all 3 machines reachable from each other by hostname.
Edit /etc/hosts and append at the end of the file:
# vi /etc/hosts
172.48.22.70 k8s-master
172.48.22.71 k8s-node1
172.48.22.72 k8s-node2
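A quick optional sanity check that hostname resolution works from every node:
# run on each machine; all three hostnames should answer
for h in k8s-master k8s-node1 k8s-node2; do ping -c 1 $h; done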
Install Docker
yum install -y yum-utils
# configure the Docker yum repo
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum makecache fast
# install
yum install docker-ce docker-ce-cli containerd.io
# start now & enable at boot
systemctl enable docker --now
# configure a registry mirror (accelerator)
# look up your own mirror address first at https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://自己的ID.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
}
}
EOF
systemctl daemon-reload
systemctl restart docker
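Optionally confirm the daemon.json settings took effect — kubelet expects the systemd cgroup driver, so this is worth checking:
# should report "Cgroup Driver: systemd" and list the configured mirror
docker info | grep -i "cgroup driver"
docker info | grep -iA1 "registry mirrors"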
Install Kubernetes
Verify that the MAC address and product_uuid are unique on every node:
- Use ip link or ifconfig -a to get the network interfaces' MAC addresses
- Use sudo cat /sys/class/dmi/id/product_uuid to check the product_uuid
Let iptables see bridged traffic
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
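Optionally verify the module is loaded and both sysctls are applied:
# br_netfilter should be listed and both values should be 1
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables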
Set SELinux to permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
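Verify swap is fully off — kubelet will refuse to start otherwise:
# the Swap line should read 0, and swapon should print nothing
free -h
swapon --show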
Configure the yum repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install
yum install -y kubelet kubeadm kubectl
# the upstream repo is not synced officially, so GPG index checks may fail; in that case install like this instead
yum install -y --nogpgcheck kubelet kubeadm kubectl
Start kubelet (the two forms below are equivalent; either one is enough):
systemctl enable kubelet && systemctl start kubelet
# or simply:
systemctl enable --now kubelet
Initialize the master node
Open the required ports on the firewall beforehand:
firewall-cmd --zone=public --add-port=6443/tcp --permanent
firewall-cmd --zone=public --add-port=10250/tcp --permanent
firewall-cmd --zone=public --add-port=10248/tcp --permanent
firewall-cmd --reload
Initialize:
kubeadm init \
--apiserver-advertise-address 172.48.22.70 \
--control-plane-endpoint k8s-master \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.23.3 \
--pod-network-cidr 192.168.192.0/18
If init fails, reset with kubeadm reset and try again.
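A fuller cleanup between attempts might look like this (a sketch; the extra steps are only needed if init got far enough to write these files):
kubeadm reset -f
rm -rf /etc/cni/net.d $HOME/.kube/config
iptables --flush && iptables -t nat --flush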
On success, you get the join token and instructions:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join k8s-master:6443 --token bsgn3s.kfvjki0yqezp8ge9 \
--discovery-token-ca-cert-hash sha256:2495c596ad177f37eee40c0cf4f5dc77ea7af436f3b4660cc5374960b9cda2cc \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s-master:6443 --token bsgn3s.kfvjki0yqezp8ge9 \
--discovery-token-ca-cert-hash sha256:2495c596ad177f37eee40c0cf4f5dc77ea7af436f3b4660cc5374960b9cda2cc
Follow the prompts:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
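kubectl should now reach the cluster; the master will report NotReady until a pod network is installed:
kubectl cluster-info
kubectl get nodes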
Install the Calico network plugin
Preparation
# Calico networking (BGP)
firewall-cmd --zone=public --add-port=179/tcp --permanent
# Calico networking with Typha enabled
firewall-cmd --zone=public --add-port=5473/tcp --permanent
# flannel networking (VXLAN)
firewall-cmd --zone=public --add-port=4789/udp --permanent
Create the config file /etc/NetworkManager/conf.d/calico.conf to keep NetworkManager from interfering with Calico's interfaces:
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
# get the yaml file
curl https://docs.projectcalico.org/v3.20/manifests/calico.yaml -O
# edit the configuration
cat calico.yaml | grep 192.168
# find that line, uncomment it, and change it to 192.168.192.0/18 (it must match --pod-network-cidr)
# below this block:
- name: CLUSTER_TYPE
  value: "k8s,bgp"
# add:
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens.*"
# finally, change v1beta1 to v1 where it appears
kubectl apply -f calico.yaml
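Image pulls can take a few minutes; the node flips to Ready once the Calico pods are Running:
# wait for calico-node and calico-kube-controllers to reach Running
kubectl get pods -n kube-system | grep calico
kubectl get nodes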
Join the other nodes
Run the join command printed when the master finished initializing.
# list the current tokens
kubeadm token list
# when a token expires, generate a new one
kubeadm token create
kubeadm token create --print-join-command
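After running the join command on k8s-node1 and k8s-node2, confirm from the master that all three nodes appear and eventually go Ready:
kubectl get nodes -o wide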
Deploy the dashboard
https://github.com/kubernetes/dashboard
Install
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml
Installing onto the worker nodes kept failing, so for now it is pinned to the master node.
Modified yaml file:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32500
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      nodeName: k8s-master
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.5.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      nodeName: k8s-master
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.7
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
The modifications are the lines containing nodeName: k8s-master, plus the nodePort: 32500 and type: NodePort lines.
Access
kubectl get svc -A | grep dashboard
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.106.127.254 <none> 8000/TCP 95s
kubernetes-dashboard kubernetes-dashboard NodePort 10.105.172.234 <none> 443:32500/TCP 95s
https://172.48.22.70:32500
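The dashboard serves a self-signed certificate, so the browser will warn before letting you through. If the NodePort is unreachable, opening it in firewalld, or port-forwarding from the master, are possible workarounds (a sketch; adjust ports to your setup):
firewall-cmd --zone=public --add-port=32500/tcp --permanent && firewall-cmd --reload
# or tunnel the Service locally instead of using the NodePort
kubectl port-forward -n kubernetes-dashboard service/kubernetes-dashboard 8443:443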
Create an access account
# create an access account; prepare a yaml file: vi dash.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
kubectl apply -f dash.yaml
Token access
# fetch the access token
kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get sa/admin-user -o jsonpath="{.secrets[0].name}") -o go-template="{{.data.token | base64decode}}"
eyJhbGciOiJSUzI1NiIsImtpZCI6ImQzbmo0WFFSaVF1QjQ2MGVXT1V4OTRmS0tnZ2NHcUpPaUs3bEx3bWdYcE0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWRndG10Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkNmE2YWEwOS0wN2ZmLTRmYmYtOTQ3My04ZTJlYzk0NTYwNzYiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.RWIzgRzdjxeRq5uKs2jGQD1ZJ4oWgRf1oBqXCZNAIKHAJVBADay7tbucyhPK21mzy7QN8BmEdb4FYlKfQIK_8IH51ZNFMVdjj5gVue1nZSD_IJ20yU4gFMdqBYhmH4w2A9vNxDiwd8PrKf_4BPssCPCC1E88tsVDP5wg4A8VtaLXzl8alekLt-KydidfOl4AWUesjx8exLuIwr0SWaRegp_h8nWTm44dk26osPpbf45owNV11w_yO-SG0cXK9jKRG9fWG9nw_WrBapuAAJdHrk90JNess62M2bCv13u_2canT2NxVF3YYsYpXSvVIIEOZInHaKvhnbPTDuk8nZWnGg
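The secret-based lookup above matches this cluster's version (1.23). On Kubernetes 1.24+, ServiceAccount token secrets are no longer auto-created, so on a newer cluster you would request a token explicitly instead:
kubectl -n kubernetes-dashboard create token admin-user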
Helm
Helm is the best way to find, share, and use software built for Kubernetes.
Install
From the binary releases
Every release of Helm provides binaries for a variety of operating systems, which can be manually downloaded and installed.
- Download your desired version
- Unpack it (tar -zxvf helm-v3.0.0-linux-amd64.tar.gz)
- Find the helm binary in the unpacked directory and move it to its desired destination (mv linux-amd64/helm /usr/local/bin/helm)
From there, you should be able to run the client and add the stable repo: helm help.
Note: Helm's automated tests for Linux AMD64 only run during CircleCI builds and releases. Testing other OSes is the responsibility of the community requesting Helm for that OS.
Install with a script
Helm now has an installer script that automatically grabs the latest version of Helm and installs it locally.
You can fetch that script and then execute it locally. It is well documented, so you can read through it and understand what it does before you run it.
$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
$ chmod 700 get_helm.sh
$ ./get_helm.sh
To install directly, run curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash.
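Whichever method you choose, a quick check that the binary is on the PATH and working:
helm version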
Usage
- Add commonly used chart repos
# add the commonly used chart repos first
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo add aliyuncs https://apphub.aliyuncs.com
# list the configured repos
[root@k8s-master ~]# helm repo list
NAME URL
bitnami https://charts.bitnami.com/bitnami
aliyuncs https://apphub.aliyuncs.com
- Taking nginx as an example, see which versions are available
[root@k8s-master helm]# helm search repo nginx
NAME CHART VERSION APP VERSION DESCRIPTION
aliyuncs/nginx 5.1.5 1.16.1 Chart for the nginx server
aliyuncs/nginx-ingress 1.30.3 0.28.0 An nginx Ingress controller that
aliyuncs/nginx-ingress-controller 5.3.4 0.29.0 Chart for the nginx Ingress
aliyuncs/nginx-lego 0.3.1 Chart for nginx-ingress-
aliyuncs/nginx-php 1.0.0 nginx-1.10.3_php-7.0 Chart for the nginx php server
bitnami/nginx 9.7.6 1.21.6 NGINX Open Source is a web
bitnami/nginx-ingress-controller 9.1.5 1.1.1 NGINX Ingress Controller is an
bitnami/nginx-intel 0.1.2 0.4.7 NGINX Open Source for Intel is a
bitnami/kong 5.0.2 2.7.0 Kong is a scalable, open source
- Pick the aliyuncs/nginx chart and download it first to see what the package contains; the most important file is values.yaml
helm pull aliyuncs/nginx --untar # pull the nginx chart from the repo into the current directory
# inspect the structure
[root@k8s-master helm]# tree nginx
nginx
├── Chart.yaml
├── ci
│ └── values-with-ingress-metrics-and-serverblock.yaml
├── README.md
├── templates
│ ├── deployment.yaml
│ ├── _helpers.tpl
│ ├── ingress.yaml
│ ├── NOTES.txt
│ ├── server-block-configmap.yaml
│ ├── servicemonitor.yaml
│ ├── svc.yaml
│ └── tls-secrets.yaml
├── values.schema.json
└── values.yaml
2 directories, 13 files
- Install nginx into our cluster
Install aliyuncs/nginx directly from the repo; my-nginx is the release name. service.type=NodePort changes how the nginx Service is exposed from the default LoadBalancer to NodePort, and persistence.enabled=false disables the persistent volume, which this test doesn't need.
[root@k8s-master nginx]# helm install my-nginx aliyuncs/nginx --set service.type=NodePort --set persistence.enabled=false
NAME: my-nginx
LAST DEPLOYED: Fri Feb 11 11:02:55 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Get the NGINX URL:
export NODE_PORT=$(kubectl get --namespace default -o jsonpath="{.spec.ports[0].nodePort}" services my-nginx)
export NODE_IP=$(kubectl get nodes --namespace default -o jsonpath="{.items[0].status.addresses[0].address}")
echo "NGINX URL: http://$NODE_IP:$NODE_PORT/"
- Check that the install succeeded
[root@k8s-master helm]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/my-nginx-77d8457bcf-dtjxp 1/1 Running 1 (140m ago) 5h13m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 2d
service/my-nginx NodePort 10.97.2.116 <none> 80:32657/TCP,443:31258/TCP 5h13m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-nginx 1/1 1 1 5h13m
NAME DESIRED CURRENT READY AGE
replicaset.apps/my-nginx-77d8457bcf 1 1 1 5h13m
- Test access: http://<any-node-ip>:32657
- Uninstall the nginx we just installed (not verified)
helm uninstall my-nginx # helm delete has been renamed; uninstall is the recommended form
Other commands
- Use helm status to track a release's state or re-read its configuration
- Use helm show values to view a chart's configurable options
- Use helm get values to check whether the configured values actually took effect
- Use helm uninstall to remove a release from the cluster
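For example, against the release installed above:
helm status my-nginx             # deployment state and NOTES
helm show values aliyuncs/nginx  # every configurable option of the chart
helm get values my-nginx         # only the values we overrode at install time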
K8s troubleshooting
Unable to access kubernetes services: no route to host
Symptom: accessing one of the cluster's services from inside a pod fails with no route to host
$ curl my-nginx.nx.svc.cluster.local
curl: (7) Failed connect to my-nginx.nx.svc.cluster.local:80; No route to host
Fix: flush all the firewall rules, then restart the docker service
$ iptables --flush && iptables -t nat --flush
$ systemctl restart docker
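Once docker is back up, the service should be reachable from inside a pod again; one way to check is a throwaway busybox pod (the pod name here is arbitrary):
kubectl run tmp-test --rm -it --image=busybox -- sh
# then, inside the pod:
wget -qO- my-nginx.nx.svc.cluster.local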