Setting up k8s with kubeadm
Server environment
IP | Role | Spec
---|---|---
192.168.43.201 | master node | CPU (2C), RAM (2G), disk (50G)
192.168.43.202 | node1 | CPU (2C), RAM (2G), disk (50G)
192.168.43.203 | node2 | CPU (2C), RAM (2G), disk (50G)
Run everything below on all three servers.
Set the hostnames
# Run on 192.168.43.201
hostnamectl set-hostname k8s-master
# Run on 192.168.43.202
hostnamectl set-hostname k8s-node1
# Run on 192.168.43.203
hostnamectl set-hostname k8s-node2
Add the hostnames to /etc/hosts
cat >> /etc/hosts << EOF
192.168.43.201 k8s-master
192.168.43.202 k8s-node1
192.168.43.203 k8s-node2
EOF
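As a quick sanity check (my addition, not part of the original steps), confirm the names resolve to the addresses just configured:
# Each name should print the address written above
getent hosts k8s-master k8s-node1 k8s-node2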
Time synchronization
yum -y install ntp ntpdate
# Any reachable time server works, e.g. time.nist.gov, time.nuri.net, or 0/1/2/3.asia.pool.ntp.org
ntpdate 0.asia.pool.ntp.org
# Write the system time to the hardware clock
hwclock --systohc
timedatectl
Disable the iptables and firewalld services
Kubernetes and Docker generate large numbers of iptables rules at runtime; to keep them from getting tangled up with the system's own rules, disable the system firewall services outright.
# 1. Stop and disable firewalld
systemctl stop firewalld
systemctl disable firewalld
# 2. Stop and disable iptables
systemctl stop iptables
systemctl disable iptables
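A quick check (my addition): both services should now be inactive.
# Should print "inactive" for each service
systemctl is-active firewalld iptables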
Disable SELinux
SELinux is a Linux security service; leaving it enabled causes all sorts of strange problems during cluster installation.
# Edit /etc/selinux/config and set SELINUX to disabled
# Note: a reboot is required for the change to take effect
SELINUX=disabled
# Permanent (takes effect after reboot)
sed -i 's/enforcing/disabled/' /etc/selinux/config
# Temporary (current boot only)
setenforce 0
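To verify (an extra check beyond the original steps), getenforce should report Permissive after setenforce 0, and Disabled after the reboot:
getenforce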
Disable the swap partition
# vim /etc/fstab and comment out the swap line
# Note: a reboot is required for the change to take effect
UUID=455cc753-7a60-4c17-a424-7741728c44a1 /boot xfs defaults 0 0
/dev/mapper/centos-home /home xfs defaults 0 0
# /dev/mapper/centos-swap swap swap defaults 0 0
# Temporarily disable swap
swapoff -a
# Permanently disable swap (comments out every swap line in /etc/fstab)
sed -ri 's/.*swap.*/#&/' /etc/fstab
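After swapoff -a (or the reboot), swap usage should read zero; a quick check:
# The Swap row should show 0B total
free -h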
Tune the Linux kernel parameters
# Enable bridge filtering and IP forwarding
# Create /etc/sysctl.d/kubernetes.conf with the following contents:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# Load the bridge filter module first; the net.bridge.* keys only exist once it is loaded
modprobe br_netfilter
# Reload the configuration (plain `sysctl -p` only reads /etc/sysctl.conf, so point it at the new file)
sysctl -p /etc/sysctl.d/kubernetes.conf
# Verify the bridge filter module is loaded
lsmod | grep br_netfilter
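As a final check (my addition), read the three keys back; each should print 1:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward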
Enable IPVS
# 1. Install ipset and ipvsadm
yum install ipset ipvsadm -y
# 2. Write the kernel modules to load into a script file
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# 3. Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules
# 4. Run the script
/bin/bash /etc/sysconfig/modules/ipvs.modules
# 5. Verify the modules loaded
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Reboot the servers
reboot
Install Docker
Install the latest version of Docker; see the companion post "Centos安装docker" (installing Docker on CentOS). One detail worth checking, covered in the sketch below: Docker's cgroup driver should match the systemd driver the kubelet is configured with later on.
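A minimal sketch of that cgroup-driver alignment, assuming Docker is already installed (adjust to your setup):
# Make dockerd use the systemd cgroup driver, matching the kubelet flag set below
mkdir -p /etc/docker
cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
# Confirm: should print "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"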
Cluster setup
Install kubeadm, kubelet, and kubectl
Run on all three servers.
- Switch the yum repo to the Aliyun mirror for faster downloads
cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
- Install kubeadm, kubelet, and kubectl
yum install -y --setopt=obsoletes=0 kubeadm-1.17.17-0 kubelet-1.17.17-0 kubectl-1.17.17-0
- Adjust the kubelet configuration
# Set the kubelet cgroup driver
# vim /etc/sysconfig/kubelet and add the following
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
# Enable kubelet at boot
systemctl enable kubelet
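To confirm the pinned 1.17.17 packages were installed (the repo also carries newer releases):
kubelet --version
kubeadm version -o short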
Pull the Docker images
A kubeadm-installed cluster runs its components as Docker containers, so the images must be pulled in advance. Note that the master and the worker nodes need different image sets; pull them separately.
- Run the image-pull script on the master
# vim install.sh and add the following
images=(
kube-apiserver:v1.17.17
kube-controller-manager:v1.17.17
kube-scheduler:v1.17.17
kube-proxy:v1.17.17
pause:3.1
etcd:3.4.3-0
coredns:1.6.5
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
# Run the script
chmod +x install.sh
./install.sh
- Pull the images on the worker nodes
# vim install.sh and add the following
images=(
kube-proxy:v1.17.17
pause:3.1
)
for imageName in ${images[@]} ; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
# Run the script
chmod +x install.sh
./install.sh
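Once both scripts finish, the retagged images should be available locally under the k8s.gcr.io prefix:
# Should list the images pulled above
docker images | grep k8s.gcr.io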
Initialize the master
Run only on the master server.
# Initialize the cluster; the last flag is the master's IP, change it to your server's actual address
kubeadm init \
--kubernetes-version=v1.17.17 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=192.168.43.201
# =================== Output ========================
Your Kubernetes control-plane has initialized successfully! # the control plane is the master
To start using your cluster, you need to run the following as a regular user:
# i.e., to use the cluster, copy the config file into your home directory
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# The cluster is now usable, but you should install a network plugin
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
# To add worker nodes, run the following on each node as root
kubeadm join 192.168.43.201:6443 --token 15ckqi.et50udit3pqdn180 \
--discovery-token-ca-cert-hash sha256:a23f2c32749a128d653712e33c4045e5932339ddd96e61251348b203cfdba4bb
# =================== Follow the instructions from the output ========================
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
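With the kubeconfig in place, kubectl should now reach the API server:
kubectl cluster-info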
Join the nodes to the master
Run the join command on each node server.
# Join the node to the cluster; use the exact command printed by YOUR kubeadm init, don't copy this one verbatim
kubeadm join 192.168.56.100:6443 --token 8507uc.o0knircuri8etnw2 --discovery-token-ca-cert-hash sha256:acc37967fb5b0acf39d7598f8a439cc7dc88f439a3f4d0c9cae88e7901b9d3f
If the token has expired or you have lost it, regenerate it on the master:
kubeadm token create --print-join-command
Install the network plugin on the master
Check the cluster state from the master; every node reports NotReady because no network plugin is configured yet
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   6m43s   v1.17.17
k8s-node1    NotReady   <none>   22s     v1.17.17
k8s-node2    NotReady   <none>   19s     v1.17.17
Install the network plugin:
Kubernetes supports several network plugins, such as flannel, calico, and canal; any one of them works. This walkthrough uses flannel.
# Fetch the flannel manifest on the master; the file is hosted abroad, so if the download fails, use the copy provided below
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Apply the manifest to start flannel
kubectl apply -f kube-flannel.yml
Contents of kube-flannel.yml:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.0.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.17.0
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.17.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.17.0
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
After kubectl apply completes, check that the pods come up with kubectl get pods -A. This takes a little while; once every pod is ready, the installation has succeeded.
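One way to watch the rollout (the -w flag streams updates; my addition):
# Watch until all flannel and coredns pods are Running (Ctrl-C to stop)
kubectl get pods -A -w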
# Wait a moment, then check the cluster state again
kubectl get nodes
Uninstall k8s
# Run on every node
kubectl delete -f kube-flannel.yml
kubeadm reset
rm -rf $HOME/.kube
rm -rf /etc/cni/net.d
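Optionally (my addition, not part of the original write-up), flush any IPVS and iptables rules kube-proxy left behind:
# Clear IPVS virtual servers and flush iptables chains
ipvsadm --clear
iptables -F && iptables -t nat -F && iptables -X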
Reference: k8s学习笔记 (k8s study notes)