k8s: multi-master, load balancing (keepalived + nginx), and dashboard
I. From a single master to multiple masters
# Disable the firewall and SELinux on master02
systemctl stop firewalld
setenforce 0
# Deploy master02
Copy the kubernetes directory from master01 to master02
scp -r /opt/kubernetes/ root@192.168.241.5:/opt
Copy the unit files for the three components from master01 to master02
scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.241.5:/usr/lib/systemd/system/
Edit the IP addresses in master02's kube-apiserver config file
cd /opt/kubernetes/cfg
vim kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.241.20:2379,https://192.168.241.3:2379,https://192.168.241.4:2379 \
--bind-address=192.168.241.5 \        # change to master02's IP address
--secure-port=6443 \
--advertise-address=192.168.241.5 \   # change to master02's IP address
Copy the existing etcd certificates from master01 for master02 to use.
Note: master02 must have the etcd certificates (even though etcd is not installed on it, master02 still talks to etcd, so the certificates are required).
scp -r /opt/etcd/ root@192.168.241.5:/opt
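To confirm the copied certificates work, a quick health check can be run from master02 against the etcd cluster. A minimal sketch, assuming the usual /opt/etcd layout of this kind of deployment (adjust the binary path and the ssl/ file names if yours differ):
# Sketch: verify master02 can reach etcd with the copied certs (etcd v2 API flags)
/opt/etcd/bin/etcdctl \
  --ca-file=/opt/etcd/ssl/ca.pem \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.241.20:2379,https://192.168.241.3:2379,https://192.168.241.4:2379" \
  cluster-health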
# Start the three component services on master02
systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl status kube-apiserver
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
systemctl status kube-controller-manager
systemctl start kube-scheduler
systemctl enable kube-scheduler
systemctl status kube-scheduler
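Equivalently, the nine commands above can be collapsed into one loop (a minimal sketch):
for svc in kube-apiserver kube-controller-manager kube-scheduler; do
  systemctl start  "$svc"
  systemctl enable "$svc"
  systemctl is-active "$svc"   # prints "active" once the unit is running
done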
# Add an environment variable
vim /etc/profile
# Append at the end (note it is $PATH, uppercase; lowercase $path would expand to nothing)
export PATH=$PATH:/opt/kubernetes/bin
source /etc/profile
# Check whether master02 can detect the nodes
kubectl get node
At this point master02 cannot actually control the nodes; it can only read their state from etcd, because the nodes still point at master01. The load balancer in the next section fixes that.
II. Load balancing (keepalived + nginx)
Perform the following on every nginx node (lb01 and lb02).
# Load balancer deployment
systemctl stop firewalld
setenforce 0
# Deploy nginx
vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
yum list
yum -y install nginx
# Add layer-4 forwarding (the stream module)
# Insert a standalone stream block between the events block and the http block
vim /etc/nginx/nginx.conf
...(content omitted)
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;   # where the access log is written

    upstream k8s-apiserver {
        # master01's IP address and port (6443 is the apiserver port)
        server 192.168.241.20:6443;
        # master02's IP address and port
        server 192.168.241.5:6443;
    }

    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
...(content omitted)
# Start the nginx service
nginx -t                      # check the config file for syntax errors
systemctl start nginx         # start nginx
netstat -ntap | grep nginx    # confirm nginx is up and listening on 6443
# Deploy keepalived for high availability (on both nginx servers)
yum -y install keepalived
# Edit the config file
Delete the stock configuration and write a new one:
vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   # notification recipients
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # notification sender
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"   # path to the nginx watchdog script, created below
}

vrrp_instance VI_1 {
    state MASTER          # MASTER on lb01; set to BACKUP on lb02
    interface ens33
    virtual_router_id 51
    priority 100          # priority: lb01 is master with 100, lb02 is backup with 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.241.100/24    # the VIP (virtual IP)
    }
    track_script {
        check_nginx
    }
}
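On lb02 the same file is used; per the inline comments above, only two fields differ:
state BACKUP     # lb02 is the backup node
priority 90      # lower than lb01's 100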
# Create the nginx watchdog script
vim /etc/nginx/check_nginx.sh
#!/bin/bash
# If no nginx process is left, stop keepalived so the VIP fails over
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
chmod +x /etc/nginx/check_nginx.sh
# Start the services
systemctl start keepalived.service
systemctl status keepalived.service
# Check the floating address (VIP)
ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:91:1c:7f brd ff:ff:ff:ff:ff:ff
inet 192.168.241.6/24 brd 192.168.241.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.241.100/24 scope global secondary ens33   ## the VIP is up
valid_lft forever preferred_lft forever
inet6 fe80::38d2:d1fa:bd9c:3f26/64 scope link
valid_lft forever preferred_lft forever
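With the VIP up, the apiserver should already be reachable through it. A quick hedged check (-k skips certificate verification; even an Unauthorized response proves the VIP-to-apiserver path works):
curl -k https://192.168.241.100:6443/version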
# Verify VIP failover
Kill nginx on lb01 with pkill nginx, then run ip a on lb02 to check the VIP.
# Stop nginx on lb01
pkill nginx
# keepalived stops too (the watchdog script shut it down)
systemctl status keepalived
# Check lb01's addresses
[root@localhost nginx]# ip a
......
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:91:1c:7f brd ff:ff:ff:ff:ff:ff
inet 192.168.241.6/24 brd 192.168.241.255 scope global ens33
valid_lft forever preferred_lft forever
inet6 fe80::38d2:d1fa:bd9c:3f26/64 scope link
valid_lft forever preferred_lft forever
......
The VIP is now gone from lb01.
# Check lb02's addresses
[root@localhost nginx]# ip a
......
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:2a:02:b2 brd ff:ff:ff:ff:ff:ff
inet 192.168.241.7/24 brd 192.168.241.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.241.100/24 scope global secondary ens33
valid_lft forever preferred_lft forever
inet6 fe80::fda:8925:c9d0:1438/64 scope link
valid_lft forever preferred_lft forever
......
# Recovery (on lb01, start nginx first, then start keepalived)
Because of the nginx watchdog, keepalived will not stay up if it is started first.
systemctl start nginx
systemctl start keepalived
# Check lb01's addresses again with ip a
The VIP is back on lb01, because lb01 is the master node and has the higher priority.
[root@localhost nginx]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:91:1c:7f brd ff:ff:ff:ff:ff:ff
inet 192.168.241.6/24 brd 192.168.241.255 scope global ens33
valid_lft forever preferred_lft forever
inet 192.168.241.100/24 scope global secondary ens33
valid_lft forever preferred_lft forever
inet6 fe80::38d2:d1fa:bd9c:3f26/64 scope link
valid_lft forever preferred_lft forever
# Point the nodes at the VIP
Perform this on both nodes.
# In the three node config files (bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig), change the server address to the VIP (the three edits below can also be batched; see the sed sketch after them)
vim /opt/kubernetes/cfg/bootstrap.kubeconfig
server: https://192.168.241.100:6443
vim /opt/kubernetes/cfg/kubelet.kubeconfig
server: https://192.168.241.100:6443
vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
server: https://192.168.241.100:6443
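The three edits above can also be done in one pass (a sketch, assuming the files currently point at master01, 192.168.241.20):
cd /opt/kubernetes/cfg
sed -i 's#server: https://192.168.241.20:6443#server: https://192.168.241.100:6443#' \
    bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig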
# Restart the services
systemctl restart kubelet.service
systemctl restart kube-proxy.service
# Self-check after the replacement (run in /opt/kubernetes/cfg)
grep 100 *
bootstrap.kubeconfig: server: https://192.168.241.100:6443
kubelet.kubeconfig: server: https://192.168.241.100:6443
kube-proxy.kubeconfig: server: https://192.168.241.100:6443
# On lb01, inspect nginx's k8s access log
# Check that requests have come through and were load-balanced across both apiservers
cat /var/log/nginx/k8s-access.log
192.168.241.3 192.168.241.20:6443 - [14/Apr/2021:17:33:07 +0800] 200 1119
192.168.241.3 192.168.241.5:6443 - [14/Apr/2021:17:33:07 +0800] 200 1120
192.168.241.4 192.168.241.5:6443 - [14/Apr/2021:17:33:11 +0800] 200 1120
192.168.241.4 192.168.241.5:6443 - [14/Apr/2021:17:33:11 +0800] 200 1118
## Testing the k8s cluster
On master01:
# Test creating a pod
kubectl run nginx --image=nginx
# Check the pod status
kubectl get pods
# This command shows the resource details, including the pod IP and the node it runs on
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-dbddb74b8-8qrm4 1/1 Running 0 7m18s 172.17.69.3 192.168.241.3 <none>
# A permissions problem when viewing logs
# View the pod's logs
[root@localhost ~]# kubectl logs nginx-dbddb74b8-8qrm4
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-8qrm4)
Cause: the error is due to insufficient permissions. Fix: grant the anonymous user the necessary rights (add a cluster role binding):
kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
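A hedged way to confirm the binding took effect, using kubectl's auth can-i subcommand (availability of the --subresource flag depends on your kubectl version):
# Should print "yes" once the clusterrolebinding exists
kubectl auth can-i get pods --subresource=log --as=system:anonymous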
After granting the permission the logs are readable, but there are no access records yet, because nothing has hit this container:
[root@localhost ~]# kubectl logs nginx-dbddb74b8-8qrm4
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
# Access test (the pod was created on node 192.168.241.3, so access it from that node)
[root@localhost ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
nginx-dbddb74b8-8qrm4 1/1 Running 0 7m18s 172.17.69.3 192.168.241.3 <none>
On the corresponding node, curl the pod IP 172.17.69.3:
[root@hzh ~]# curl 172.17.69.3
# The access produces a log entry (back on master01, view the log again)
[root@localhost ~]# kubectl logs nginx-dbddb74b8-8qrm4
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
172.17.69.1 - - [14/Apr/2021:09:42:34 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
172.17.79.0 - - [14/Apr/2021:09:43:31 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
III. Dashboard
# Work on master01
# Create a working directory for the dashboard
[root@master01 k8s]# mkdir dashboard
[root@master01 k8s]# cd dashboard
# Copy in the official yaml files (downloaded in advance here; copy them straight into the dashboard working directory)
Official source: https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard
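If you need to fetch the files yourself, a loop like the following works; treat it as a sketch, since the per-resource file names only exist on older release branches (release-1.13 is an assumption here):
BRANCH=release-1.13   # assumption: a branch that still carries these per-resource files
for f in dashboard-rbac.yaml dashboard-secret.yaml dashboard-configmap.yaml \
         dashboard-controller.yaml dashboard-service.yaml; do
  wget https://raw.githubusercontent.com/kubernetes/kubernetes/$BRANCH/cluster/addons/dashboard/$f
done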
1. Create the RBAC resources (kind: Role)
1) Create
[root@master01 dashboard]# kubectl create -f dashboard-rbac.yaml   # -f: create the resources from a (yaml) file
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
2) Inspect the yaml file for the name and namespace
[root@master01 dashboard]# vim dashboard-rbac.yaml
The resource kind created in it is Role.
3) View the Role resources
[root@master01 dashboard]# kubectl get Role -n kube-system   # -n: specify the namespace
NAME AGE
extension-apiserver-authentication-reader 27h
kubernetes-dashboard-minimal 91s
system::leader-locking-kube-controller-manager 27h
system::leader-locking-kube-scheduler 27h
system:controller:bootstrap-signer 27h
system:controller:cloud-provider 27h
system:controller:token-cleaner 27h
2. Create the secret resources (kind: Secret)
1) Create
[root@master01 dashboard]# kubectl create -f dashboard-secret.yaml
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-key-holder created
2) Inspect the yaml file for the namespace and name
[root@master01 dashboard]# vim dashboard-secret.yaml
3) View the created resources
[root@master01 dashboard]# kubectl get Secret -n kube-system
NAME TYPE DATA AGE
default-token-4vhn6 kubernetes.io/service-account-token 3 27h
kubernetes-dashboard-certs Opaque 0 23s
kubernetes-dashboard-key-holder Opaque 0 23s
3. Create the configmap resource (kind: ConfigMap)
1) Create
[root@master01 dashboard]# kubectl create -f dashboard-configmap.yaml
configmap/kubernetes-dashboard-settings created
2) Inspect the yaml file for the namespace and name
[root@master01 dashboard]# vim dashboard-configmap.yaml
3) View the created resources
[root@master01 dashboard]# kubectl get ConfigMap -n kube-system
NAME DATA AGE
extension-apiserver-authentication 1 27h
kubernetes-dashboard-settings 0 22s
4. Create the controller resources (kind: ServiceAccount, Deployment)
1) Create
[root@master01 dashboard]# kubectl create -f dashboard-controller.yaml
serviceaccount/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
Two resources were created (a ServiceAccount for access, and a Deployment controller).
2) Inspect the yaml file for the namespace and name
[root@master01 dashboard]# vim dashboard-controller.yaml
3) View the created resources
[root@master01 dashboard]# kubectl get ServiceAccount -n kube-system
NAME SECRETS AGE
default 1 27h
kubernetes-dashboard 1 27s
[root@master01 dashboard]# kubectl get Deployment -n kube-system
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
kubernetes-dashboard 1 1 1 1 59s
5. Create the service resource (kind: Service)
1) Create
[root@master01 dashboard]# kubectl create -f dashboard-service.yaml
service/kubernetes-dashboard created
2) Inspect the yaml file for the namespace and name
Once a Service resource is in use, the application is exposed for access, so it necessarily defines ports:
[root@master01 dashboard]# vim dashboard-service.yaml
Users cannot reach the port the pod itself provides;
they access the node port instead, which is mapped onto the pod's port (see the Service sketch after this subsection).
3) View the created resources
[root@master01 dashboard]# kubectl get Service -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard NodePort 10.0.0.60 <none> 443:30001/TCP 18s
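For reference, a NodePort Service with the shape shown above might look like the following sketch, assembled from the names and ports in the output (illustrative, not the exact file shipped with the add-on):
cat > dashboard-service-sketch.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 443          # Service (cluster) port
    targetPort: 8443   # container port inside the dashboard pod
    nodePort: 30001    # port exposed on every node; what the browser hits
EOF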
6. Inspect the resources
1) After creation, view the pod resources in the kube-system namespace
Everything is now created, so the pods can be viewed as a whole:
[root@master01 dashboard]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-65f974f565-2xl7d 1/1 Running 0 2m15s
2) View multiple resource types at once (separate them with a comma ",")
[root@master01 dashboard]# kubectl get pods,service -n kube-system
NAME READY STATUS RESTARTS AGE
pod/kubernetes-dashboard-65f974f565-2xl7d 1/1 Running 0 2m39s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes-dashboard NodePort 10.0.0.60 <none> 443:30001/TCP 80s
3) Check which node the pod was created on
[root@master01 dashboard]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
kubernetes-dashboard-65f974f565-2xl7d 1/1 Running 0 3m17s 172.17.69.4 192.168.241.3 <none>
7. Access test
Browse to a node IP plus the node port: https://192.168.241.3:30001. If the page is blocked, the browser does not trust the certificate (a Chrome behavior); some older browsers will still let you in.
Next we fix the untrusted-certificate problem.
8. Fixing the browser access problem
1. Create the certificates
[root@master01 dashboard]# vim dashboard-cert.sh
cat > dashboard-csr.json <<EOF
{
"CN": "Dashboard",
"hosts": [],
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"L": "BeiJing",
"ST": "BeiJing"
}
]
}
EOF
# Generate the dashboard certificate, signed by the cluster CA (CA directory passed in as $1)
K8S_CA=$1
cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard
# Delete the original certs secret and recreate it from the newly generated certificates
kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system
2. Generate the certificates into the target directory
[root@master01 dashboard]# bash dashboard-cert.sh /root/k8s/k8s-cert/
3. Point the yaml file at the certificate locations
[root@master01 dashboard]# vim dashboard-controller.yaml
...(content omitted)
        args:
          # PLATFORM-SPECIFIC ARGS HERE
          - --auto-generate-certificates
          - --tls-key-file=dashboard-key.pem    # add this line
          - --tls-cert-file=dashboard.pem       # add this line
...(content omitted)
4. Redeploy with apply
A changed yaml file must be redeployed for the resources to pick up the change:
[root@master01 dashboard]# kubectl apply -f dashboard-controller.yaml
Note: after redeploying, the pod may land on a different node, so re-check the resource details and its node:
[root@master01 dashboard]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
kubernetes-dashboard-7dffbccd68-pt2st 1/1 Running 0 35s 172.17.79.3 192.168.241.4 <none>
5. Verify access
Browse to https://192.168.241.4:30001; the page is now reachable.
9. The dashboard web page
After entering a node IP and the port, a choice of login methods appears. In production the token method is the usual choice, so we use it here as well.
1. Generate the token: create the resources
1) Create the admin account resources, effectively an administrator
[root@master01 dashboard]# kubectl create -f k8s-admin.yaml
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
2) View the secret resources
[root@master01 dashboard]# kubectl get secret -n kube-system
NAME TYPE DATA AGE
dashboard-admin-token-8xp9n kubernetes.io/service-account-token 3 15s   # the admin-token resource was generated
default-token-4vhn6 kubernetes.io/service-account-token 3 27h
kubernetes-dashboard-certs Opaque 11 3m42s
kubernetes-dashboard-key-holder Opaque 2 12m
kubernetes-dashboard-token-mxlp9 kubernetes.io/service-account-token 3 9m50s
3) View the admin token
[root@master01 dashboard]# kubectl describe secret dashboard-admin-token-8xp9n -n kube-system
Name: dashboard-admin-token-8xp9n
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: dashboard-admin
kubernetes.io/service-account.uid: a502f1dd-9d0e-11eb-9757-000c297eb227
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1359 bytes
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tOHhwOW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYTUwMmYxZGQtOWQwZS0xMWViLTk3NTctMDAwYzI5N2ViMjI3Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.gH8HMFx1lLsXEP9Lh1wq4hhG-Kq6MyCUNsUo30hNVAduVomgHYFIKNeFxC82oBrSbZX2keM2D2qfIQJk-LSImehDuHrqje67btaQxGGb0bk3RAN4-GDF4JdeFjGYQdIgXfrajbYqICYg1EsvQVWTjQEP5cJ3VJUKXOg4_8Yee3b8h6J5EsX-r7R4I68nghQeh9hiMb5FS_iVPrc2CHHGNbavekI671NwnrFJ_IkwFguHHJ8yNx3pve3UYRPRWAyhcSP16EJfoHFgUK4m7JdzLl1oEhjxjf5hE8N3LnFFmphahGrM_cjLBctLx-hkoL-gPxv5mVi-OCyW60xzJHUjPw
The string after token: at the end is the login token; save it for logging in to the web page.
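For convenience, the token can also be pulled out non-interactively (a sketch; it looks the secret up by its dashboard-admin prefix):
# Print only the dashboard-admin token
kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') \
  | awk '/^token:/{print $2}'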
2. Copy the token into the web login page and log in.
We are now inside the k8s web UI, where the cluster can be inspected and operated on; the individual features are not demonstrated one by one here.
With that, the entire binary deployment of the k8s cluster is complete: the single-master setup, the multi-master setup, and the web dashboard.