CentOS 7 Kubernetes 1 master + 2 node cluster setup
0 Environment

| Software | Version |
|---|---|
| centos | CentOS Linux release 7.8.2003 |
| docker | Docker version 19.03.8, build afacb8b |
| k8s | 1.18.2 |

| IP | Role | Hostname |
|---|---|---|
| 192.168.6.101 | master | master01 |
| 192.168.6.102 | node | node01 |
| 192.168.6.103 | node | node02 |
1 CentOS 7 preparation (all nodes)
1.1 Switch to the Aliyun yum mirror
If the installation in [2.2 Install docker-ce] later runs into problems, skip this step.

```shell
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum clean all
yum makecache
yum update -y
```
1.2 Disable the firewall

```shell
firewall-cmd --state                 # check firewall status
systemctl stop firewalld.service     # stop firewalld
systemctl disable firewalld.service  # keep firewalld from starting at boot
```
1.3 关闭selinux
getenforce #查看selinux状态
vi /etc/selinux/config
将SELINUX=enforcing改为SELINUX=disabled
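The edit above can also be scripted; a minimal sketch, where `setenforce 0` disables SELinux for the current boot and the `sed` makes the change permanent:

```shell
# disable SELinux immediately (no reboot needed)
setenforce 0
# make the change survive reboots
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
```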
1.4 Install bash-completion for command completion

```shell
yum install -y bash-completion
```
2 Install Docker (all nodes)
2.1 Remove any existing Docker packages

```shell
yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine
```
2.2 Install docker-ce

```shell
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce docker-ce-cli containerd.io
systemctl start docker    # start docker
systemctl enable docker   # start docker at boot
docker -v                 # verify the installation succeeded
```
2.3 Configure an Aliyun registry mirror to speed up image pulls

```shell
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://[your-accelerator-id].mirror.aliyuncs.com","https://registry.docker-cn.com"]
}
EOF
```
Restart Docker for the change to take effect. For how to obtain your own Aliyun accelerator address, see Aliyun's image accelerator guide.
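The restart can be sketched as follows; the `docker info` check is an extra step added here to confirm the mirror configuration was picked up (it lists the configured registry mirrors in its output):

```shell
systemctl daemon-reload                      # reload unit files
systemctl restart docker                     # restart docker so daemon.json is re-read
docker info | grep -A 2 'Registry Mirrors'   # confirm the mirrors took effect
```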
2.4 Run docker without sudo (optional)

```shell
sudo groupadd docker            # create the docker group (if it does not exist)
sudo gpasswd -a ${USER} docker  # add the current user to the docker group
sudo systemctl daemon-reload    # reload unit files
sudo systemctl restart docker   # restart the docker service
```

Log out and back in for the group change to take effect.
3 Kubernetes cluster
3.1 Configure hosts and hostname (all nodes)
Add the following to /etc/hosts:

```shell
199.232.68.133 raw.githubusercontent.com  # optional; try this if downloads from GitHub fail
192.168.6.101 master01
192.168.6.102 node01
192.168.6.103 node02
```

On each host, set /etc/hostname to the matching hostname.
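Setting the hostname can also be done with `hostnamectl`, which updates /etc/hostname for you; a sketch for master01 (run the matching command on each node):

```shell
# on 192.168.6.101
hostnamectl set-hostname master01
# on 192.168.6.102: hostnamectl set-hostname node01
# on 192.168.6.103: hostnamectl set-hostname node02
```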
3.2 Disable swap (all nodes)
Edit /etc/fstab and comment out the swap line:

```shell
vi /etc/fstab
# comment out the line below:
# /mnt/swap swap swap defaults 0 0
```

Set the vm.swappiness parameter in /etc/sysctl.conf:

```shell
echo vm.swappiness=0 >> /etc/sysctl.conf
```

Verify with `free -m` that swap is disabled (the Swap row should read 0).
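The fstab edit only takes effect after a reboot; swap can be turned off immediately, and the fstab line commented out non-interactively, with the following sketch (the `sed` pattern assumes the swap entry has `swap` as a whitespace-separated field):

```shell
swapoff -a                                 # disable all swap right now
sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab   # comment out swap entries, keeping a .bak backup
free -m                                    # the Swap row should now show 0
```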
3.3 内核参数修改(所有节点)
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
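If `sysctl -p` reports that these keys do not exist, the `br_netfilter` kernel module is likely not loaded; a sketch for loading it now and at every boot (the modules-load.d path follows the systemd convention):

```shell
modprobe br_netfilter                                      # load the module now
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf  # load it at boot
sysctl -p /etc/sysctl.d/k8s.conf                           # re-apply the settings
```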
3.4 Install the Kubernetes yum repo (all nodes)

```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

Refresh the cache:

```shell
yum clean all
yum -y makecache
```

Install kubelet, kubeadm, and kubectl:

```shell
yum install -y kubelet kubeadm kubectl
```
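The command above installs the newest packages in the repo; to pin the exact versions used in this article, yum's `name-version` syntax can be used:

```shell
yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2
```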
Installation result:

```
Installed:
  kubeadm.x86_64 0:1.18.2-0   kubectl.x86_64 0:1.18.2-0   kubelet.x86_64 0:1.18.2-0
Dependency Installed:
  conntrack-tools.x86_64 0:1.4.4-7.el7
  cri-tools.x86_64 0:1.13.0-0
  kubernetes-cni.x86_64 0:0.7.5-0
  libnetfilter_cthelper.x86_64 0:1.0.0-11.el7
  libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7
  libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
  socat.x86_64 0:1.7.3.2-2.el7
```
Enable kubelet as a system service and start it:

```shell
systemctl enable kubelet && systemctl start kubelet
```
3.5 Master configuration
Generate the default config file:

```shell
cd ~
mkdir k8s
cd k8s
kubeadm config print init-defaults > kubeadm.conf
```
Edit kubeadm.conf as follows.

Change imageRepository to:

```
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
```

Change advertiseAddress to master01's IP:

```
localAPIEndpoint:
  advertiseAddress: 192.168.6.101
```

Set the actual Kubernetes version:

```
kubernetesVersion: v1.18.2
```

Configure podSubnet (under `networking:`):

```
podSubnet: 10.244.0.0/16
```
Pull the images specified by kubeadm.conf:

```shell
kubeadm config images pull --config ./kubeadm.conf
```

Initialize the cluster from kubeadm.conf:

```shell
sudo kubeadm init --config ./kubeadm.conf
```

On success you will see output like the following. Save the `kubeadm join 192.168.6.101:6443 ...` command at the end; it is used later to join the worker nodes.
```
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.6.101:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:d60829461dcbfd149c00b41e2baa3c500d9a459840578faf2c0a6e33635fc9fd
```
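The join token expires after 24 hours by default. If a node is added later, a fresh join command (token plus CA cert hash) can be generated on the master with:

```shell
kubeadm token create --print-join-command
```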
Create the kubeconfig, set its permissions, and restart the kubelet service:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo systemctl enable kubelet
sudo systemctl restart kubelet
```
Deploy the flannel overlay network:

```shell
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
```

Verify that master01 is configured correctly:

```shell
kubectl get nodes
```

If everything worked, the node's STATUS column shows Ready.
Copy the config files from master01 to node01 and node02 via scp:

```shell
sudo scp /etc/kubernetes/admin.conf lovewinner@node01:/home/lovewinner
sudo scp /home/lovewinner/k8s/kube-flannel.yml lovewinner@node01:/home/lovewinner
sudo scp /etc/kubernetes/admin.conf lovewinner@node02:/home/lovewinner
sudo scp /home/lovewinner/k8s/kube-flannel.yml lovewinner@node02:/home/lovewinner
```
3.6 Node configuration (node01 and node02)
Create the kubeconfig and set its permissions:

```shell
mkdir -p $HOME/.kube
sudo cp -i $HOME/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Join the cluster:

```shell
sudo kubeadm join 192.168.6.101:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:d60829461dcbfd149c00b41e2baa3c500d9a459840578faf2c0a6e33635fc9fd
```

Apply the flannel overlay network manifest:

```shell
cd /home/lovewinner
kubectl apply -f kube-flannel.yml
```
3.7 Install the dashboard (on master01)
Download the manifest:

```shell
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
```

Edit the kubernetes-dashboard Service in the manifest:

```yaml
spec:
  type: NodePort        # add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30443   # add this line
```

Apply the manifest:

```shell
kubectl apply -f recommended.yaml
```

Open https://master01:30443 in a browser to log in to the dashboard.
Create dashboard-adminuser.yaml with the following content:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```
Create the login user:

```shell
kubectl apply -f dashboard-adminuser.yaml
```

Retrieve the token:

```shell
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')
```
The output looks like this:

```
Name:         admin-user-token-6hw6v
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 5b5af84f-0266-47f3-9b9a-5d51117be850

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Inp5NmRITFR4SjNGaWN2b1pQd2RyRnlqcjZDSmJOZmV3VnhCcFhYS2RSeWsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZodzZ2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1YjVhZjg0Zi0wMjY2LTQ3ZjMtOWI5YS01ZDUxMTE3YmU4NTAiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.v2-KCiTnQcK60Sw_Ey_0p3Fvln8U1HbzlVV9JBcPu3eXXzpMliRPDQEJJ9V0wewyZk5zo6pub2g7Gv8xla2H3krosnAfucVnPKpRgyxVtKdhdst2SdQr0TZZ0tTd-wGq0Gjoti1UQcZsvQaNeE6NALrDdeEcuziMGOZVnW1qpwx8sBceK4h0GVas3LE7j1vQPBH3w_qaNA_JY_NeQNZe0UCZXCNkBjtTnaWfQh2lXcal5UuzUWqsEcd42t9qh63GyX04wwG85jJbP0zerwpB0M8LN3axzM-hojxvOwYSvUbR7Ws4igKYDbdFez6_oViqE3SIvQJU1Xs-twOm9zzQ9A
ca.crt:     1025 bytes
namespace:  20 bytes
```

Use this token to log in to the dashboard.