Setting up a Kubernetes cluster with kubeadm
Set the hostnames
Run on each host (the matching line for that host)
hostnamectl --static set-hostname master
hostnamectl --static set-hostname node1
hostnamectl --static set-hostname node2
Disable the firewall and SELinux
Run on each host
systemctl stop firewalld && systemctl disable firewalld
systemctl stop iptables && systemctl disable iptables
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
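kubeadm's preflight checks also expect swap to be disabled. If your hosts have swap enabled (an assumption about your environment), a minimal sketch:
# turn swap off now and comment out the swap entry so it stays off after reboot
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab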
Configure SSH
Run on each host
Edit /etc/ssh/sshd_config and uncomment: PubkeyAuthentication yes
mkdir /root/.ssh
# copy the master's public key to each node
scp ./id_rsa.pub root@10.211.55.5:/root/.ssh
# on the node, append it to authorized_keys
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys
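The steps above assume a key pair already exists on the master and that sshd has re-read its config; if not, a minimal sketch (default paths, empty passphrase):
# on the master: generate the key pair that ./id_rsa.pub above refers to
ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
# on every host: reload sshd after editing sshd_config
systemctl restart sshd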
Configure network forwarding
Run on each host
Edit: vi /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
# load the bridge netfilter module first, otherwise the bridge-nf-call settings cannot be applied
modprobe br_netfilter
# reload the configuration
sysctl -p /etc/sysctl.d/kubernetes.conf
# verify the module is loaded
lsmod | grep br_netfilter
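To keep br_netfilter loaded across reboots (assuming systemd-modules-load, as on CentOS 7), a minimal sketch:
cat <<EOF > /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF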
Configure IPVS
Run on each host
yum install ipset ipvsadm -y
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod +x /etc/sysconfig/modules/ipvs.modules
/bin/bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Configure /etc/hosts
Run on each host (append, so the existing localhost entries are kept)
cat <<EOF >> /etc/hosts
10.211.55.3 master
10.211.55.4 node1
10.211.55.5 node2
EOF
Configure the package repositories
Run on each host
Configure kubernetes.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Configure Docker
To get a registry-mirrors address of your own:
visit https://cr.console.aliyun.com/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install --setopt=obsoletes=0 docker-ce-18.06.3.ce-3.el7 -y
mkdir -p /etc/docker
Set the registry mirror, and set Docker's cgroup driver to systemd so it matches the kubelet setting below:
tee /etc/docker/daemon.json <<-'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://xxxxx.mirror.aliyuncs.com"]
}
EOF
Install the components:
Run on each host
yum install --setopt=obsoletes=0 kubeadm-1.17.4-0 kubelet-1.17.4-0 kubectl-1.17.4-0 -y
Configure KUBE_PROXY_MODE
Run on each host
vim /etc/sysconfig/kubelet
KUBELET_CGROUP_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
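Once the cluster is up, you can check which mode kube-proxy actually ended up in; a sketch (port 10249 is kube-proxy's default metrics port, and ipvsadm only shows entries when IPVS is active):
curl 127.0.0.1:10249/proxyMode
ipvsadm -Ln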
Create the cluster
Run on each host (kube-proxy is deployed later by kubeadm as a DaemonSet, so there is no kube-proxy systemd unit to enable; kubelet only needs to be enabled and is started by kubeadm init/join):
systemctl enable kubelet.service
systemctl enable docker && systemctl start docker
Run on the master
kubeadm init \
--apiserver-advertise-address=10.211.55.3 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.17.4 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12
Then, still on the master, set up kubeconfig:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Join the nodes to the cluster
Run on each node
kubeadm join 10.211.55.3:6443 --token 1rp3tx.xgumf4f7vcdbf0u2 \
--discovery-token-ca-cert-hash sha256:5978ae9c8cf4f17af81a65c338f9b21e3440ab878ae79daab4119c9568a5ca4e
If the join fails, the usual causes are that Docker is not running, the kube-apiserver on the master is not up, or the token has expired. For an expired token, generate a new join command on the master with kubeadm token create --print-join-command (see "kubeadm join token失效" by mvpbang on cnblogs); tokens expire after 24 hours by default, but you can also create one that never expires.
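A sketch of regenerating the join command on the master (the --ttl 0 variant issues a token that never expires):
kubeadm token create --print-join-command
kubeadm token create --ttl 0 --print-join-command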
Install flannel (otherwise the nodes stay NotReady)
Note: if kubectl reports "The connection to the server localhost:8080 was refused - did you specify the right host or port", add export KUBECONFIG=/etc/kubernetes/admin.conf to /etc/profile and source it (see the flannel 安装报错 post by Fly_鹏程万里 on CSDN); skip this if you don't hit the error.
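A minimal sketch of that fix, assuming you are root and your shell sources /etc/profile:
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /etc/profile
source /etc/profile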
Run on the master
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
kubectl get nodes
Images on the master node:
View the pods in all namespaces:
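A sketch of the corresponding commands (assuming Docker is the container runtime, as installed above):
docker images
kubectl get pods --all-namespaces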
Create an nginx pod
Run on the master
kubectl create deployment nginx --image=nginx:1.14-alpine
kubectl get deploy
kubectl describe pod nginx-6867cdf567-9tbg9
Create a Service and access it:
kubectl expose deploy nginx --port=80 --target-port=80 --type=NodePort
# output: service/nginx exposed
External access: a node IP (e.g. the master's) plus the NodePort shown in the Service's PORT(S) column
kubectl get pod -o wide
View the pod IP
From inside the cluster you can:
kubectl get svc
kubectl get svc --all-namespaces
View the Service IPs
Note the distinction:
ClusterIP: reachable only from inside the cluster
NodePort: reachable from outside (via any node's IP)
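A sketch of both access paths; the placeholders stand in for whatever kubectl get svc reports in your cluster:
# inside the cluster: the Service's ClusterIP on port 80
curl http://<cluster-ip>:80
# outside the cluster: any node IP plus the NodePort
curl http://10.211.55.3:<node-port>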
Install the dashboard
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Edit the Service in kubernetes-dashboard.yaml so it is exposed as a NodePort with nodePort: 30001 (see the sketch below)
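A sketch of the edited Service section, assuming the v1.10.1 manifest's Service (named kubernetes-dashboard in the kube-system namespace, port 443 -> 8443):
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard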
kubectl apply -f kubernetes-dashboard.yaml
Access: https://10.211.55.3:30001/#!/login
Reinstall or remove the dashboard
kubectl delete -f kubernetes-dashboard.yaml
kubectl create -f kubernetes-dashboard.yaml
Delete the dashboard pods
kubectl -n kube-system delete $(kubectl -n kube-system get pod -o name | grep dashboard)
Delete the kubernetes-dashboard resources
kubectl delete deployment kubernetes-dashboard --namespace=kube-system
kubectl delete service kubernetes-dashboard --namespace=kube-system
kubectl delete role kubernetes-dashboard-minimal --namespace=kube-system
kubectl delete rolebinding kubernetes-dashboard-minimal --namespace=kube-system
kubectl delete sa kubernetes-dashboard --namespace=kube-system
kubectl delete secret kubernetes-dashboard-certs --namespace=kube-system
kubectl delete secret kubernetes-dashboard-csrf --namespace=kube-system
kubectl delete secret kubernetes-dashboard-key-holder --namespace=kube-system
Create the login credentials
# create the service account
kubectl create serviceaccount dashboard-admin -n kube-system
# grant it cluster-admin
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# get the account's token
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')
Log in with that token
Grant anonymous users access
kubectl create clusterrolebinding test:anonymous --clusterrole=cluster-admin --user=system:anonymous
A working approach (1.17, where the dashboard lives in the kubernetes-dashboard namespace)
kubectl create serviceaccount k8s-admin -n kubernetes-dashboard
kubectl create clusterrolebinding k8s-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:k8s-admin
kubectl get sa,secrets -n kubernetes-dashboard
kubectl describe secret k8s-admin-token-xxxx -n kubernetes-dashboard
# list the clusterrolebindings that were created
kubectl get clusterrolebinding
Pitfalls hit while installing the dashboard
Newer dashboard releases create their own kubernetes-dashboard namespace by default, but accessing the UI can then fail.
Fix:
Delete the auto-generated secret in that namespace
kubectl delete secret kubernetes-dashboard-certs -n kubernetes-dashboard
Create a new one (the command below is for a kubeadm install; for a binary install see the CSDN post 解决k8s Dashboard其他浏览器(火狐除外)不能访问 by 小科蜜666)
kubectl create secret generic kubernetes-dashboard-certs \
--from-file=/etc/kubernetes/pki/apiserver.key --from-file=/etc/kubernetes/pki/apiserver.crt -n kubernetes-dashboard
Modify dashboard.yaml accordingly (a sketch of one common edit follows) and re-apply it; on the next visit the browser shows the usual prompt to accept the certificate risk instead of refusing the connection.
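The original does not spell out the exact change; one common variant of this fix (an assumption here, not confirmed by the source) is to point the dashboard container at the certificate files now stored in the recreated kubernetes-dashboard-certs secret via its args:
# in the kubernetes-dashboard Deployment inside dashboard.yaml -- hypothetical edit:
# remove --auto-generate-certificates and reference the mounted cert files instead
args:
  - --namespace=kubernetes-dashboard
  - --tls-cert-file=apiserver.crt
  - --tls-key-file=apiserver.key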