Building a k8s Cluster from Scratch by Hand - Initializing the Master Node
There are already several mature solutions for building a k8s cluster, such as kubekey and kubespray, but most of them are essentially wrappers around kubeadm. This article walks through building a k8s cluster step by step with kubeadm, using a concrete example.
The k8s version deployed in this article is 1.20.4; the procedure is essentially the same for other versions.
This chapter covers initializing the first node (master) of the cluster; later chapters will cover how additional master or worker nodes join the cluster. If you only want to try out k8s, a single node is enough.
1. Overall architecture
- master1: 192.168.56.10
- master2: 192.168.56.11
- master3: 192.168.56.12
- node1: 192.168.56.13
- node2: 192.168.56.14
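Before going further, it can help to confirm that the machines can reach one another over the planned addresses. A minimal check from any one of the hosts (assuming the IPs above and that ICMP is not blocked):
# ping each node once; any failure points at a networking problem to fix first
for ip in 192.168.56.10 192.168.56.11 192.168.56.12 192.168.56.13 192.168.56.14; do
    ping -c 1 -W 1 $ip
done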
2. Deploy the master1 node
2.1 Environment initialization
- Turn off the firewall, swap, and SELinux
# turn off the firewall
sudo systemctl stop firewalld && sudo systemctl disable firewalld
sudo systemctl stop ufw && sudo systemctl disable ufw
# turn off swap (comment out the swap entry in /etc/fstab)
sudo swapoff -a
sudo sed -i '/^[^#]*swap/s/^/#/' /etc/fstab
# disable SELinux (CentOS/RHEL only)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
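A quick sanity check, not part of the original steps, to confirm the changes took effect:
# the Swap line should show 0B
free -h
# both should report "inactive" (the unit may simply not exist on your distro)
systemctl is-active firewalld
systemctl is-active ufw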
- Configure /etc/hosts
192.168.56.10 master1
192.168.56.11 master2
192.168.56.12 master3
192.168.56.13 node1
192.168.56.14 node2
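To confirm the entries resolve as expected:
getent hosts master1 master2 master3 node1 node2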
- Deploy docker
# one-shot docker installation
curl -fsSL https://get.docker.com | sudo bash -s docker --mirror Aliyun
sudo systemctl enable docker && sudo systemctl restart docker
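The kubeadm documentation recommends running the container runtime with the systemd cgroup driver. A minimal /etc/docker/daemon.json to that effect is sketched below; skip or merge it if you already manage this file:
# switch docker to the systemd cgroup driver, then restart it
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker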
- Install required packages
sudo apt-get install -y socat conntrack ebtables ipset ipvsadm
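kubeadm also expects bridged traffic to be visible to iptables and IP forwarding to be enabled. A sketch of the usual kernel module and sysctl setup (values taken from the upstream kubeadm install docs):
# load the bridge netfilter module now and on every boot
sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/k8s.conf
# required sysctls
sudo tee /etc/sysctl.d/k8s.conf <<'EOF'
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system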
- Set the hostname
sudo hostnamectl set-hostname master1
2.2 Deploy the k8s binaries
- Copy kubelet, kubectl, and kubeadm to /usr/local/bin and make them executable
curl -L https://dl.k8s.io/v1.20.4/kubernetes-node-linux-amd64.tar.gz -o ./kubernetes-node-linux-amd64.tar.gz
tar -zxvf kubernetes-node-linux-amd64.tar.gz -C ./
# install kubeadm
sudo cp ./kubernetes/node/bin/kubeadm /usr/local/bin/ && sudo chmod +x /usr/local/bin/kubeadm
# install kubectl
sudo cp ./kubernetes/node/bin/kubectl /usr/local/bin/ && sudo chmod +x /usr/local/bin/kubectl
# install kubelet
sudo cp ./kubernetes/node/bin/kubelet /usr/local/bin/ && sudo chmod +x /usr/local/bin/kubelet
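A quick check that the binaries are in place and report the expected version:
kubeadm version
kubectl version --client
kubelet --version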
- Create the kubelet service unit /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
CPUAccounting=true
MemoryAccounting=true
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
- Create the kubelet drop-in config /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generate at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
Environment="KUBELET_EXTRA_ARGS=--node-ip=192.168.56.10 --hostname-override=master1"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
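Since kubelet was installed by copying the binary rather than from a package, the drop-in directory does not exist yet; create it before writing the file above:
sudo mkdir -p /etc/systemd/system/kubelet.service.d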
- Enable kubelet
sudo systemctl daemon-reload
sudo systemctl enable kubelet
sudo ln -snf /usr/local/bin/kubelet /usr/bin/kubelet
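Note that kubelet has no configuration yet and will not run successfully (it keeps restarting) until `kubeadm init` generates /var/lib/kubelet/config.yaml; its state can be checked with:
sudo systemctl status kubelet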
2.3 Initialize the master node
- Initialize the master with kubeadm. This deploys the control-plane components: etcd, kube-apiserver, kube-controller-manager, and kube-scheduler
sudo kubeadm init --kubernetes-version=v1.20.4 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --apiserver-advertise-address=192.168.56.10 --image-repository registry.aliyuncs.com/google_containers
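After init succeeds, kubeadm prints instructions for setting up kubectl access for your user; the usual steps are:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# the node stays NotReady until a network plugin is installed in the next step
kubectl get nodes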
- Deploy the network plugin flannel (or calico). Create network-plugin.yaml with the following content
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - clustercidrs
  verbs:
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: flannel
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
    k8s-app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
      k8s-app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
        k8s-app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      priorityClassName: system-node-critical
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        image: flannel/flannel-cni-plugin:v1.1.2
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - mountPath: /opt/cni/bin
          name: cni-plugin
      - name: install-cni
        image: flannel/flannel:v0.21.3
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        # same flannel image as the install-cni init container above
        image: flannel/flannel:v0.21.3
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - mountPath: /run/xtables.lock
          name: xtables-lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
- Deploy flannel (using the kubeconfig set up after kubeadm init)
kubectl apply -f network-plugin.yaml
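Once the manifest is applied, the flannel DaemonSet should come up and the node should eventually report Ready; a couple of checks:
kubectl -n kube-system get pods -o wide
kubectl get nodes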