k8s Multi-Master Node Deployment (Lab)
Preface
- In the previous post we deployed a single-master Kubernetes cluster; the core of that setup was creating and issuing certificates, and the flannel network component was just as important. This article builds on that single-master deployment (https://blog.csdn.net/double_happy111/article/details/105858003) and upgrades it to multiple masters.
1. Multi-Node Deployment
- Deploy the master02 node
- Copy all files under the kubernetes directory
#copy everything under /opt/kubernetes/ to the master02 node
[root@localhost kubeconfig]# scp -r /opt/kubernetes/ root@192.168.73.62:/opt
#copy the unit files of master1's three components: kube-apiserver.service, kube-controller-manager.service, kube-scheduler.service
[root@localhost kubeconfig]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.73.62:/usr/lib/systemd/system/
#edit the copied configuration file
[root@master2 ~]# cd /opt/kubernetes/cfg/
[root@master2 cfg]# ls
kube-apiserver kube-controller-manager kube-scheduler token.csv
[root@master2 cfg]# vim kube-apiserver
#change two lines:
--bind-address=192.168.73.62
--advertise-address=192.168.73.62
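A quick way to confirm the edit (an optional check, not part of the original steps): both flags should now show master02's address.
[root@master2 cfg]# grep -E "bind-address|advertise-address" kube-apiserver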
- Copy master01's existing etcd certificates over for master2 to use
#copy master01's existing etcd certificates over for master2
[root@master1 ~]# scp -r /opt/etcd/ root@192.168.73.62:/opt/
root@192.168.73.62's password:
etcd 100% 523 326.2KB/s 00:00
etcd 100% 18MB 45.1MB/s 00:00
etcdctl 100% 15MB 33.0MB/s 00:00
ca-key.pem 100% 1679 160.2KB/s 00:00
ca.pem 100% 1265 592.6KB/s 00:00
server-key.pem 100% 1679 884.2KB/s 00:00
server.pem 100% 1338 768.5KB/s 00:00
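The copied tree should mirror master01's layout. Assuming the certificates land under /opt/etcd/ssl, as in the single-master setup, a quick sanity check:
[root@master2 ~]# ls /opt/etcd/ssl
ca-key.pem  ca.pem  server-key.pem  server.pem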
- Start the master02 components
#start master2's three components
//start the apiserver component
[root@master2 cfg]# systemctl start kube-apiserver.service
[root@master2 cfg]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
//start the controller-manager component
[root@master2 cfg]# systemctl start kube-controller-manager.service
[root@master2 cfg]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
//start the scheduler component
[root@master2 cfg]# systemctl start kube-scheduler.service
[root@master2 cfg]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
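Optionally, confirm all three services are up in one shot before moving on (a generic systemd check, not from the original steps; each unit should report active):
[root@master2 cfg]# systemctl is-active kube-apiserver kube-controller-manager kube-scheduler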
- Add an environment variable so kubectl can be invoked directly
#add an environment variable to make the kubectl command convenient
[root@master2 cfg]# vim /etc/profile
append at the end of the file:
export PATH=$PATH:/opt/kubernetes/bin/
[root@master2 cfg]# source /etc/profile //reload so it takes effect
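If the PATH change took effect, the shell should now resolve kubectl from /opt/kubernetes/bin (assuming kubectl was installed there alongside the other binaries):
[root@master2 cfg]# which kubectl
/opt/kubernetes/bin/kubectl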
- Verify that master2 has joined the k8s cluster
#verify that master2 has joined the k8s cluster
[root@localhost cfg]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.73.63 Ready <none> 50m v1.12.3
192.168.73.64 Ready <none> 22s v1.12.3
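Beyond the node list, the control-plane component health can be checked from master2 as well; on a healthy cluster every entry reports Healthy (the etcd member names depend on your etcd cluster):
[root@master2 ~]# kubectl get cs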
2. Setting Up nginx Load Balancing
- Turn off the firewall and SELinux
[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# setenforce 0
- Configure the official nginx yum repository
//create the repo file
[root@localhost ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1
- Refresh the yum metadata and install nginx
[root@localhost ~]# yum list
[root@localhost ~]# yum install nginx -y //install nginx
- Edit nginx's main configuration file
[root@localhost ~]# vim /etc/nginx/nginx.conf
Add the following below the events block (at the same level as http, not inside it): a log format, the log location, and an upstream block
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
#point at the master nodes' IP addresses
upstream k8s-apiserver {
server 192.168.73.61:6443;
server 192.168.73.62:6443;
}
server {
listen 6443;
proxy_pass k8s-apiserver;
}
}
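The stream block requires nginx's stream module. The nginx.org packages normally compile it in, which is worth verifying before reloading; the command below prints with-stream if the module is present:
[root@localhost ~]# nginx -V 2>&1 | grep -o with-stream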
- Check the configuration file
//check the config file for syntax errors
[root@localhost ~]# nginx -t
[root@localhost ~]# systemctl start nginx //start the service
[root@localhost ~]# netstat -natp | grep nginx
3. Configuring keepalived for High Availability
- Install keepalived
#install keepalived via yum (on both load balancers)
[root@localhost ~]# yum install keepalived -y
- nginx01 as the MASTER node
#the nginx1 node acts as the MASTER
[root@nginx1 ~]# vim /etc/keepalived/keepalived.conf
//wipe the file's contents and add the following:
! Configuration File for keepalived
global_defs {
# notification recipients
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
# notification sender address
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/etc/nginx/check_nginx.sh" ##检测nginx脚本的路径,稍后会创建
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 51
priority 100 ##priority
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.100.100/24 ##virtual IP (VIP)
}
track_script {
check_nginx
}
}
- nginx02 as BACKUP
#the nginx2 node acts as the BACKUP
[root@nginx2 ~]# vim /etc/keepalived/keepalived.conf
//wipe the file's contents and add the following:
! Configuration File for keepalived
global_defs {
# notification recipients
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
# notification sender address
notification_email_from Alexandre.Cassen@firewall.loc
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_SLAVE
}
vrrp_script check_nginx {
script "/etc/nginx/check_nginx.sh" ##检测脚本的路径,稍后会创建
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 51
priority 90 ##priority lower than the MASTER's
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.100.100/24 ##virtual IP (VIP)
}
track_script {
check_nginx
}
}
- Create the check script
#create the check script (the same script on both load balancers)
[root@localhost ~]# vim /etc/nginx/check_nginx.sh
#!/bin/bash
#count running nginx processes, excluding the grep itself and this shell
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
#if nginx is down, stop keepalived so the VIP fails over to the backup
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@localhost ~]# chmod +x /etc/nginx/check_nginx.sh //make it executable
#start the keepalived service
[root@localhost ~]# systemctl start keepalived.service
- Check the floating (virtual) IP address
#check the addresses: the MASTER of the HA pair holds the floating VIP, the BACKUP does not
[root@localhost ~]# ip a
#stop nginx on nginx01
[root@localhost ~]# systemctl stop nginx
[root@localhost ~]# ip a
#on nginx02, the floating address should now be present
[root@localhost ~]# ip a
- Then start the nginx service again on nginx01
[root@localhost ~]# systemctl start nginx
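Note that restarting nginx alone does not bring the VIP back: the check script stopped keepalived on nginx01, and since the MASTER has the higher priority (100 vs 90) and keepalived preempts by default, the VIP returns only once keepalived is started again:
[root@nginx1 ~]# systemctl start keepalived.service
[root@nginx1 ~]# ip a | grep 192.168.100.100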
4. Updating the Two Worker Nodes
- Edit the configuration files on both node machines
#on both nodes, point the server address in all three kubeconfig files at the unified VIP
//change: server: https://192.168.100.100:6443 (the VIP) in each file
[root@localhost cfg]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
[root@localhost cfg]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
[root@localhost cfg]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
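If you prefer a one-liner to editing each file by hand, a sed like the following should also work, assuming the files currently point at master01's address, 192.168.73.61 (adjust to your environment):
[root@localhost cfg]# cd /opt/kubernetes/cfg/
[root@localhost cfg]# sed -i "s#https://192.168.73.61:6443#https://192.168.100.100:6443#" bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig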
- Restart the services
#restart kubelet and kube-proxy
[root@localhost cfg]# systemctl restart kubelet.service
[root@localhost cfg]# systemctl restart kube-proxy.service
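After the restart, both kubelets reconnect through the VIP; from either master the node list should still show both nodes Ready:
[root@master1 ~]# kubectl get node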
- Verify the changes
#check the modified content
//be sure to run the grep from this directory
[root@localhost ~]# cd /opt/kubernetes/cfg/
[root@localhost cfg]# grep 100 *
#next, on nginx1, check nginx's k8s access log; per the log_format above, each entry shows the node's address and the upstream master the request was forwarded to:
[root@localhost ~]# tail /var/log/nginx/k8s-access.log
5. Testing
- On master01
#on master1, create a Pod to test with
[root@localhost kubeconfig]# kubectl run nginx --image=nginx
- Check the Pod status
#check the Pod status
[root@master1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-dbddb74b8-vj4wk 1/1 Running 0 16s
The Pod has been created and is now running.
- View the logs of the nginx Pod we just created
#view the logs of the nginx Pod we just created
[root@master1 ~]# kubectl logs nginx-dbddb74b8-vj4wk
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-vj4wk)
- Fixing the error
#the error is caused by insufficient permissions; grant access to fix it.
Fix (grant the anonymous user permissions):
[root@master1 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
Now viewing the logs again no longer errors, but nothing is printed because the container has not been accessed yet.
- Check the Pod's network
#check the Pod's network details
[root@localhost kubeconfig]# kubectl get pods -o wide
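To actually generate a log entry, access the nginx container once and then read the logs again. The pod IP below is a placeholder; substitute the address printed by kubectl get pods -o wide, and run the curl from one of the nodes, where the flannel network is reachable:
#172.17.32.2 is a hypothetical pod IP, replace it with your own
[root@node1 ~]# curl -s 172.17.32.2 -o /dev/null
[root@master1 ~]# kubectl logs nginx-dbddb74b8-vj4wk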