Installing a Kubernetes (k8s) Cluster with kubeadm
I. Environment Preparation
Three servers are used for this setup:

| Hostname | IP | OS |
| --- | --- | --- |
| node51 | 10.10.2.51 | CentOS 7 |
| node52 | 10.10.2.52 | CentOS 7 |
| node53 | 10.10.2.53 | CentOS 7 |
Note: every step below must be run on all three nodes.
1. Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
2. Disable the swap partition
Kubernetes requires Linux swap to be disabled; otherwise memory swapping degrades both performance and stability.
vim /etc/fstab
Comment out the swap line:
#
# /etc/fstab
# Created by anaconda on Thu Mar 25 21:01:16 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=154f1a96-2a93-4884-8903-c7ff378152c4 /boot xfs defaults 0 0
UUID=25AB-483E /boot/efi vfat umask=0077,shortname=winnt 0 0
/dev/mapper/centos-home /home xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0
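Editing /etc/fstab only takes effect on the next boot; to turn swap off immediately on the running system as well:
swapoff -a
free -m   # the Swap row should now show 0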
3. Map hostnames to IPs
Set each host's hostname by editing /etc/hostname with vi (or with hostnamectl, as sketched after the hosts file below); the names in /etc/hosts must match the hostnames set there.
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.2.51 node51
10.10.2.52 node52
10.10.2.53 node53
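As an alternative to editing /etc/hostname by hand, hostnamectl (part of systemd on CentOS 7) sets the name in one step; run it on each node with that node's own name:
hostnamectl set-hostname node51   # use node52 / node53 on the other machines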
4. Time synchronization
systemctl start chronyd.service
systemctl enable chronyd.service
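To verify that the clock is actually synchronizing, chrony's client can list its time sources:
chronyc sources -v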
5. Pass bridged IPv4 traffic to iptables chains
Create /etc/sysctl.d/k8s.conf (the file does not exist by default):
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
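These bridge sysctls only exist once the br_netfilter kernel module is loaded; if sysctl --system reports the keys as missing, load the module first and re-apply:
modprobe br_netfilter
lsmod | grep br_netfilter   # confirm the module is present
sysctl --system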
6. Configure package repositories
- Replace the base yum repo with the Aliyun mirror
cd /etc/yum.repos.d
mv CentOS-Base.repo CentOS-Base.repo.bak
mv epel.repo epel.repo.bak
curl https://mirrors.aliyun.com/repo/Centos-7.repo -o CentOS-Base.repo
sed -i 's/gpgcheck=1/gpgcheck=0/g' /etc/yum.repos.d/CentOS-Base.repo
curl https://mirrors.aliyun.com/repo/epel-7.repo -o epel.repo
- Point the Kubernetes yum repo at the Aliyun mirror
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Notes on this repo file:
- baseurl points the Kubernetes repo at the Aliyun mirror.
- gpgcheck=0: RPM packages downloaded from this repo are not signature-checked.
- repo_gpgcheck=0: some security-hardened configurations enable repo_gpgcheck globally in /etc/yum.conf so that the cryptographic signature on repo metadata is verified; it is disabled here.
- With gpgcheck=1 the signature check fails against this mirror during installation, so both checks are set to 0.
- Update the yum cache
yum clean all && yum makecache && yum repolist
This completes the base environment setup.
II. Install Docker on Every Node (all three nodes)
1. Online installation
yum -y install yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache
yum install docker-ce -y
After installation, run docker --version to confirm it succeeded.
[root@node51 ~]# docker --version
Docker version 20.10.5, build 55c4c88
Start the Docker service and enable it at boot:
systemctl start docker && systemctl enable docker
2. Offline installation
Download docker-20.10.5.tgz from Docker's static binary download page (https://download.docker.com/linux/static/stable/x86_64/); this version is used as an example, and other versions work the same way.
- Extract the archive
tar -xvf docker-20.10.5.tgz
- Copy the extracted docker binaries to /usr/bin/
cp docker/* /usr/bin/
- Register docker as a systemd service
vim /etc/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
- Start the service
chmod +x /etc/systemd/system/docker.service   # set permissions on the unit file
systemctl daemon-reload                       # reload unit configuration
systemctl start docker                        # start Docker
systemctl enable docker.service               # enable at boot
As with the online install, run docker --version to confirm success.
[root@node51 ~]# docker --version
Docker version 20.10.5, build 55c4c88
3. Configure a Docker registry mirror
cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://********.mirror.aliyuncs.com"]
}
EOF
The mirror address can be obtained from the Aliyun Container Registry console.
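Docker only reads daemon.json at startup, so restart it and confirm the mirror was picked up:
systemctl restart docker
docker info | grep -A1 'Registry Mirrors'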
III. Install the k8s Cluster with kubeadm
1. Disable SELinux
vim /etc/sysconfig/selinux
Change SELINUX=enforcing to SELINUX=disabled, then reboot for the change to take effect.
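To apply the change without a full reboot, SELinux can be set to permissive for the current boot and the config edited non-interactively (equivalent to the manual edit above):
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux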
2. Install a specific version
yum install -y kubelet-1.18.2-0.x86_64 kubeadm-1.18.2-0.x86_64 kubectl-1.18.2-0.x86_64
Enable and start kubelet:
systemctl enable kubelet && systemctl start kubelet
3. Run kubeadm init
Set --apiserver-advertise-address to your own master node's IP and --kubernetes-version to the version installed above (v1.18.2 here):
kubeadm init \
  --apiserver-advertise-address=10.10.2.51 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.18.2 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
When init completes, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
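kubectl should now be able to talk to the cluster. The master will report NotReady until a network plugin is deployed in the next step; you can watch its state with:
kubectl get nodes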
4. Deploy the flannel network plugin
The manifest lives at https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml (it may be blocked in mainland China). Apply it directly with:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The full kube-flannel.yml is reproduced below; if the URL is unreachable, save the content to a local file and run kubectl apply -f kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
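Once the manifest is applied, flannel runs as a DaemonSet with one pod per node; a quick check (the app=flannel label comes from the manifest above):
kubectl get pods -n kube-system -l app=flannel
After these pods are Running, kubectl get nodes should report the master as Ready.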
5. Join worker nodes to the cluster
On the master node, run:
kubeadm token create --print-join-command
which prints a join command like:
kubeadm join 10.10.2.51:6443 --token wkokq5.jqdsaz3yhduam9bh --discovery-token-ca-cert-hash sha256:0005e5cbe7d3430fa93c02ab9bbeb1b075b04b117bed201a7915d436ba40d569
Run that command on each worker node to add it to the cluster.
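Back on the master, confirm that all three nodes have joined:
kubectl get nodes -o wide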
6. kubectl command auto-completion
yum install -y bash-completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
IV. Configure Ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
The image references in deploy.yaml point at k8s.gcr.io and often cannot be pulled from mainland China; if so, download the manifest, point those image lines at a reachable mirror, and apply the local copy:
kubectl apply -f deploy.yaml
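A minimal sketch of that edit, assuming <your-mirror> is a placeholder for a registry you have verified mirrors the v0.44.0 image (the upstream line is pinned to k8s.gcr.io with a sha256 digest, so the whole image line is replaced):
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
sed -i 's|image: k8s.gcr.io/ingress-nginx/controller:v0.44.0.*|image: <your-mirror>/ingress-nginx-controller:v0.44.0|' deploy.yaml
Afterwards, kubectl get pods -n ingress-nginx should show the controller pods starting.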
V. Install the Dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
Expose the dashboard through a NodePort:
kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard \
-p '{"spec":{"type":"NodePort","ports":[{"port":443,"targetPort":8443,"nodePort":30443}]}}'
Then open the dashboard in a browser:
https://<any_node_ip>:30443
The Dashboard supports two authentication methods, Kubeconfig and Token; Token login is used here.
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secrets | awk '/dashboard-admin/{print $1}')
These three commands create a dashboard-admin service account, bind it to the cluster-admin role, and print the Token used to log in to the Dashboard.
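To print just the token value (handy for pasting into the login page), the same secret can be read with jsonpath; the token field is base64-encoded in the secret data:
kubectl -n kube-system get secret \
  $(kubectl -n kube-system get secrets | awk '/dashboard-admin/{print $1}') \
  -o jsonpath='{.data.token}' | base64 -d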