k8s Environment Setup
Setting up a Kubernetes environment on VMware and CentOS 7
There are two main architectures for deploying a k8s cluster:
single-master/multi-worker and multi-master/multi-worker; the names are self-explanatory. This guide builds the former.
1. Deployment methods
minikube: a tool for quickly standing up a single-node k8s cluster.
kubeadm: a K8s deployment tool that provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster.
Official docs: Kubeadm | Kubernetes
Binary packages
Download the release binaries from GitHub and deploy each component by hand to assemble a Kubernetes cluster.
kubeadm lowers the barrier to entry but hides many details, which makes problems hard to troubleshoot. If you want more control, deploying from binary packages is recommended: manual deployment is more work, but you learn a lot about how the pieces fit together along the way, which also helps with later maintenance.
2. The kubeadm deployment approach
kubeadm is a tool from the official community for quickly deploying a kubernetes cluster. It can stand up a cluster with two commands:
- Create a Master node: kubeadm init
- Join a Node to the cluster: kubeadm join <master IP and port>
3. Environment requirements
- One or more machines running CentOS 7.x x86_64
- Hardware: 2 GB+ RAM, 2+ CPUs, 40 GB+ disk
- Full network connectivity between all machines in the cluster
- Internet access (or access to the Aliyun mirrors) to pull images
- swap disabled (kubelet refuses to start by default when swap is enabled)
4. Goals
- Install Docker and kubeadm on all nodes
- Deploy the Kubernetes Master
- Deploy a container network plugin
- Deploy the Kubernetes Nodes and join them to the cluster
- Deploy the Dashboard web UI for visualizing Kubernetes resources
5. Environment
| Role | IP address | Components |
|---|---|---|
| master | 192.168.64.128 | docker,kubectl,kubeadm,kubelet |
| node1 | 192.168.64.130 | docker,kubectl,kubeadm,kubelet |
| node2 | 192.168.64.129 | docker,kubectl,kubeadm,kubelet |
6. System initialization
6.1 Set the system hostnames and configure mutual resolution via /etc/hosts
# on the master
hostnamectl set-hostname master && bash
# on node1
hostnamectl set-hostname node1 && bash
# on node2
hostnamectl set-hostname node2 && bash

cat <<EOF >> /etc/hosts
192.168.64.128 master
192.168.64.130 node1
192.168.64.129 node2
EOF

scp /etc/hosts root@192.168.64.130:/etc/hosts
scp /etc/hosts root@192.168.64.129:/etc/hosts
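A quick sanity check that name resolution now works from every node:

```bash
# each hostname should resolve and answer from any node
ping -c 1 master
ping -c 1 node1
ping -c 1 node2
```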
6.2 Time synchronization with chronyd (production environments usually point at a dedicated time server)
systemctl start chronyd
systemctl enable chronyd
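To confirm the clock is actually synchronizing, chronyc can report the tracking status:

```bash
# shows the reference source, current offset, and drift
chronyc tracking
date
```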
6.3 Disable firewalld and iptables
This avoids conflicts with the large number of iptables rules that Docker generates.
systemctl stop firewalld && systemctl disable firewalld
systemctl stop iptables && systemctl disable iptables
#yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
6.4 Disable SELinux (run on all nodes)
vim /etc/selinux/config
# set:
SELINUX=disabled
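The config edit only takes effect after a reboot; a non-interactive equivalent of the same step, which also stops enforcement for the current session:

```bash
# switch SELinux to permissive immediately (lasts until reboot)
setenforce 0
# make the change persistent across reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
```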
6.5 Disable swap and tune kernel parameters for K8s (run on all nodes)
vim /etc/fstab
# comment out the swap line:
#/dev/mapper/centos-swap swap swap defaults 0 0

Create and edit /etc/sysctl.d/kubernetes.conf, adding:
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1

Load the bridge filter module first (the bridge-nf-call keys only exist once it is loaded), then check it loaded:
modprobe br_netfilter
lsmod | grep br_netfilter

Reload the configuration (plain sysctl -p only reads /etc/sysctl.conf, so pass the file explicitly):
sysctl -p /etc/sysctl.d/kubernetes.conf
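The same steps as a non-interactive script (a sketch under the assumptions above; swapoff -a additionally turns swap off for the running session, which the fstab edit alone does not):

```bash
# turn swap off now and comment out its fstab entry for reboots
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# load br_netfilter before applying the bridge sysctls
modprobe br_netfilter

cat > /etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
EOF

# apply every file under /etc/sysctl.d/
sysctl --system
lsmod | grep br_netfilter
```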
6.6 Set the system time zone (run on all nodes; skip if you enabled chronyd above)
# set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# keep the hardware clock in UTC
timedatectl set-local-rtc 0
# restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
6.7 Configure rsyslogd and systemd journald (run on all nodes; this step is optional)
# directory for persistent log storage
mkdir /var/log/journal
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# persist logs to disk
Storage=persistent
# compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# cap total disk usage at 10G
SystemMaxUse=10G
# cap a single log file at 200M
SystemMaxFileSize=200M
# keep logs for 2 weeks
MaxRetentionSec=2week
# do not forward to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
6.8 Prerequisites for running kube-proxy in IPVS mode (run on all nodes)
The kubernetes service proxy supports two modes, iptables and ipvs. The latter performs better but requires the kernel modules to be loaded manually.
# install ipset and ipvsadm
yum install ipset ipvsadm -y
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
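Once the cluster is up and kube-proxy is running in ipvs mode (configured in section 6.10 below), the service cluster IPs become visible as IPVS virtual servers; a quick way to verify later:

```bash
# lists the IPVS virtual server table; service cluster IPs
# appear here once kube-proxy runs in ipvs mode
ipvsadm -Ln
```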
6.9 Install Docker (run on all nodes)
#yum install -y yum-utils device-mapper-persistent-data lvm2   # optional, but note that yum-config-manager below is provided by yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# configure yum to download from the Aliyun mirror
yum install -y --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7
## create the /etc/docker directory
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://1b39b574a0a34e19acb8b5f35dd9b41a.mirror.swr.myhuaweicloud.com"]
}
EOF
#The address above is a registry mirror accelerator from a Huawei Cloud account I bought (valid for one year); feel free to use it or substitute your own.
mkdir -p /etc/systemd/system/docker.service.d
# reload systemd and restart the docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
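Before moving on, it is worth confirming that docker came up with the systemd cgroup driver requested in daemon.json (kubelet is configured to match in the next section):

```bash
systemctl status docker --no-pager
# should print "Cgroup Driver: systemd"
docker info | grep -i "cgroup driver"
```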
Alternatively, instead of running the yum-config-manager --add-repo command above, you can upload a docker-ce.repo file to /etc/yum.repos.d/ yourself.
The contents of docker-ce.repo:
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
6.10 Install the K8s components (run on all nodes)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet kubeadm kubectl && systemctl enable kubelet
# or pin to the specific version used in this guide:
yum install -y --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0
# (remember to run systemctl enable kubelet after the pinned install as well)

# edit the kubelet configuration
vim /etc/sysconfig/kubelet
# add the following:
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
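A quick check that the expected versions landed (assuming the pinned 1.17.4 install):

```bash
kubeadm version -o short
kubelet --version
kubectl version --client --short
```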
Pre-pull the cluster images (the upstream images live on k8s.gcr.io, which may be unreachable due to network restrictions; the following is a workaround using the Aliyun mirror):
images=(
    kube-apiserver:v1.17.4
    kube-controller-manager:v1.17.4
    kube-scheduler:v1.17.4
    kube-proxy:v1.17.4
    pause:3.1
    etcd:3.4.3-0
    coredns:1.6.5
)
# pull each image from the Aliyun mirror, retag it with the
# k8s.gcr.io name that kubeadm expects, then remove the mirror tag
for imageName in ${images[@]}; do
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
    docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
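After the loop finishes, every image kubeadm init needs should be available locally:

```bash
# the retagged images should all be listed under k8s.gcr.io
docker images | grep 'k8s.gcr.io'
```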
7. Deploy the Kubernetes Master
7.1 Initialize the master node (run on the master)
kubeadm init \
  --kubernetes-version v1.17.4 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --apiserver-advertise-address=192.168.64.128

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
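At this point the master registers itself but reports NotReady, which is expected until the network plugin from section 7.3 is deployed:

```bash
# NotReady here is normal before the CNI plugin is installed
kubectl get nodes
```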
7.2 Join the worker nodes to the cluster
kubeadm join 192.168.64.128:6443 --token h0uelc.l46qp29nxscke7f7 \
    --discovery-token-ca-cert-hash sha256:abc807778e24bff73362ceeb783cc7f6feec96f20b4fd707c3f8e8312294e28f
Run this on each worker node; the exact token and hash are printed at the end of kubeadm init.
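If the token has expired (the default lifetime is 24 hours), a fresh join command can be generated on the master:

```bash
# prints a complete "kubeadm join ..." command with a new token
kubeadm token create --print-join-command
```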
7.3 Deploy the network plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The contents of the file are below, in case the URL is unreachable and you need to apply a local copy:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: psp.flannel.unprivileged
annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
privileged: false
volumes:
- configMap
- secret
- emptyDir
- hostPath
allowedHostPaths:
- pathPrefix: "/etc/cni/net.d"
- pathPrefix: "/etc/kube-flannel"
- pathPrefix: "/run/flannel"
readOnlyRootFilesystem: false
# Users and groups
runAsUser:
rule: RunAsAny
supplementalGroups:
rule: RunAsAny
fsGroup:
rule: RunAsAny
# Privilege Escalation
allowPrivilegeEscalation: false
defaultAllowPrivilegeEscalation: false
# Capabilities
allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
defaultAddCapabilities: []
requiredDropCapabilities: []
# Host namespaces
hostPID: false
hostIPC: false
hostNetwork: true
hostPorts:
- min: 0
max: 65535
# SELinux
seLinux:
# SELinux is unused in CaaSP
rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
- apiGroups: ['extensions']
resources: ['podsecuritypolicies']
verbs: ['use']
resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-system
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-system
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
hostNetwork: true
priorityClassName: system-node-critical
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni
image: quay.io/coreos/flannel:v0.14.0
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
image: quay.io/coreos/flannel:v0.14.0
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
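Once the manifest is applied, the DaemonSet (labeled app: flannel above) should start one pod per node, and all nodes should flip to Ready:

```bash
# one kube-flannel pod per node, all Running
kubectl get pods -n kube-system -l app=flannel
# all nodes should now report Ready
kubectl get nodes
```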
8. Test the kubernetes cluster
8.1 Deploy nginx as a test
# deploy nginx
kubectl create deployment nginx --image=nginx
# expose the port
kubectl expose deployment nginx --port=80 --type=NodePort
# check the status
kubectl get pod,svc
If the output lists the pod and service without errors, everything is fine; the first line can be ignored.
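To verify end to end, hit the NodePort that kubectl assigned (a sketch; 30000 below is a placeholder, substitute the port that kubectl get svc actually reports):

```bash
# find the NodePort mapped to container port 80
kubectl get svc nginx
# any node IP works; replace 30000 with the reported NodePort
curl http://192.168.64.128:30000
```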