Binary Deployment of a K8S Multi-Master + LB Load-Balancing Cluster, with K8S Log Troubleshooting
Server Environment
| Role | IP |
|---|---|
| master1 | 192.168.18.10 |
| master2 | 192.168.18.40 |
| node1 | 192.168.18.20 |
| node2 | 192.168.18.30 |
| LB1 | 192.168.18.50 |
| LB2 | 192.168.18.60 |
| VIP | 192.168.18.70 |
Tips: the other nodes were already deployed in the earlier single-master setup.
Lab Steps
Master2 Node Deployment
- Copy the files from Master1 to Master2
scp -r /opt/kubernetes/ root@192.168.18.40:/opt/
scp -r /usr/lib/systemd/system/{kube-apiserver,kube-scheduler,kube-controller-manager}.service root@192.168.18.40:/usr/lib/systemd/system/
scp -r /opt/etcd/ root@192.168.18.40:/opt/
- Modify the configuration file on Master2
vim /opt/kubernetes/cfg/kube-apiserver
#Only the following two addresses need to change:
--bind-address=192.168.18.40 \
--advertise-address=192.168.18.40 \
- Start the services
systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
systemctl start kube-scheduler
systemctl enable kube-scheduler
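- (Added check, not in the original steps) Confirm that all three control-plane services are actually running on Master2:
systemctl is-active kube-apiserver kube-controller-manager kube-scheduler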
- Add the environment variable
vim /etc/profile
#Append at the end of the file:
export PATH=$PATH:/opt/kubernetes/bin
source /etc/profile
- Check the nodes
kubectl get node
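- Optionally (an extra check beyond the original steps), verify on Master2 that the cluster components report healthy:
kubectl get cs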
LB1/LB2 Load Balancer Deployment
- Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
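- (Optional addition) To keep SELinux disabled after a reboot as well, you can also edit its config file:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config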
- Configure the nginx yum repository
vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
#Load the yum repository
yum list
- Install nginx
yum install nginx -y
- Add layer-4 (TCP) forwarding
vim /etc/nginx/nginx.conf
#Add between the events and http blocks:
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 192.168.18.10:6443;
        server 192.168.18.40:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
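- Before going further, it is worth validating the new configuration syntax (an added check, not part of the original walkthrough):
nginx -t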
- Web page test
vim /usr/share/nginx/html/index.html
#LB1
<h1>Welcome to master nginx!</h1>
#LB2
<h1>Welcome to backup nginx!</h1>
systemctl start nginx
- Access:
http://192.168.18.50
http://192.168.18.60
- Install Keepalived
yum install keepalived -y
- Modify the Keepalived configuration file
vim /etc/keepalived/keepalived.conf
#Modify as follows:
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER          # set to BACKUP on the backup server
    interface ens33
    virtual_router_id 51  # VRRP router ID, unique per instance
    priority 100          # priority, set to 90 on the backup server
    advert_int 1          # VRRP advertisement (heartbeat) interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.18.70/24
    }
    track_script {
        check_nginx
    }
}
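- On LB2, reuse the same configuration with the role and priority flipped, as the comments above indicate. Assuming the file was copied over from LB1, a quick sed covers both changes:
sed -i 's/state MASTER/state BACKUP/; s/priority 100/priority 90/' /etc/keepalived/keepalived.conf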
- Write the check_nginx script
vim /etc/nginx/check_nginx.sh
#!/bin/bash
# If no nginx process is running, stop keepalived so the VIP can fail over
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
chmod +x /etc/nginx/check_nginx.sh
- Test the load balancing and failover
- Test access to nginx through the VIP:
systemctl start keepalived
- At this point the VIP is on LB1
- Kill nginx on LB1:
pkill nginx
- After nginx stops on LB1, check the VIP again: it has floated over to LB2 (a quick verification is sketched below)
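- (Added verification sketch) Check which LB currently holds the VIP by listing the addresses on ens33:
ip addr show ens33 | grep 192.168.18.70
- To fail back, start nginx and then keepalived again on LB1 (keepalived was stopped by check_nginx.sh):
systemctl start nginx
systemctl start keepalived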
Node Configuration Changes
- Point the apiserver address in the node's kubeconfig files at the VIP
cd /opt/kubernetes/cfg
vim bootstrap.kubeconfig
server: https://192.168.18.70:6443 #change to the VIP address
vim kubelet.kubeconfig
server: https://192.168.18.70:6443 #change to the VIP address
vim kube-proxy.kubeconfig
server: https://192.168.18.70:6443 #change to the VIP address
- Restart the services on the node
systemctl restart kubelet
systemctl restart kube-proxy
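- (Added check) Confirm that all three kubeconfig files now point at the VIP:
grep "server:" /opt/kubernetes/cfg/*.kubeconfig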
Lab Testing
View the nginx K8S Access Log on LB1
tail -f /var/log/nginx/k8s-access.log
- The node requests are now load-balanced across both apiservers
Create a Test Pod
kubectl run nginx --image=nginx
- The pod is created successfully
Test nginx on the Node Nodes
- Check which node the pod is running on:
kubectl get pods -o wide
- Access the pod from Node1
- Access the pod from Node2
- node1 and node2 reach the container through the Flannel network, which provides pod-to-pod access and communication across nodes (a minimal curl check is sketched below)
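- A minimal sketch of that cross-node check, assuming kubectl is used on a master and curl is available on the nodes; <pod-ip> is a placeholder for the Flannel address shown by kubectl:
kubectl get pods -o wide      # on a master: note the pod's IP and the node it runs on
curl -I http://<pod-ip>       # from the other node: an HTTP 200 confirms cross-node access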
Fixing Errors When Viewing Logs
- error: You must be logged in to the server (the server has asked for the client to provide credentials)
- Fix: modify the kubelet.config file on the node
vim /opt/kubernetes/cfg/kubelet.config
#Append at the end of the file:
authentication:
  anonymous:
    enabled: true
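- Restart kubelet on the node so the change takes effect (an implied step, not spelled out in the original):
systemctl restart kubelet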
- Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)
- Fix: bind the cluster-admin role to the anonymous user
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
- After these fixes, the logs can be viewed normally
kubectl logs nginx-dbddb74b8-nn7z2