k8s Series 10: Verifying the k8s Cluster and Accessing It Through a Graphical Interface
If you find this series useful, please give it a follow.
In the previous post we successfully deployed the latest version of k8s with kubespray. In this one we will look at exactly which services got installed, and at how to access the k8s system through a graphical interface.
Inspecting the Cluster
1. List the namespaces
[root@node1 ~]# kubectl get ns
NAME STATUS AGE
default Active 23h
ingress-nginx Active 23h
kube-node-lease Active 23h
kube-public Active 23h
kube-system Active 23h
[root@node1 ~]#
2. What is in the default namespace?
[root@node1 ~]# kubectl get all -n default
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.200.0.1 <none> 443/TCP 23h
[root@node1 ~]#
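By the way, the kubernetes service in default is the cluster-internal entry point to the apiserver. To see which real endpoints stand behind it, you can run:

[root@node1 ~]# kubectl get endpoints kubernetes -n default

On this cluster it should resolve to the two master apiservers, 192.168.112.130:6443 and 192.168.112.131:6443, which will become relevant again in the nginx-proxy section below.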
3. What is in the ingress-nginx namespace?
[root@node1 ~]# kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-controller-68hbq 1/1 Running 1 (14m ago) 23h
pod/ingress-nginx-controller-clt8p 1/1 Running 1 (14m ago) 23h
pod/ingress-nginx-controller-hmcf6 1/1 Running 1 (14m ago) 23h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/ingress-nginx-controller 3 3 3 3 3 kubernetes.io/os=linux 23h
[root@node1 ~]#
We can see three pods and one DaemonSet, which means an ingress-nginx controller is running on each of the three nodes.
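If you want to confirm the one-pod-per-node layout yourself, the -o wide flag shows which node each pod landed on; the NODE column should list node1, node2, and node3:

[root@node1 ~]# kubectl get pods -n ingress-nginx -o wide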
4. What is in kube-node-lease and kube-public?
[root@node1 ~]# kubectl get all -n kube-node-lease
No resources found in kube-node-lease namespace.
[root@node1 ~]# kubectl get all -n kube-public
No resources found in kube-public namespace.
[root@node1 ~]#
Both of these namespaces appear empty: kubectl get all finds no resources in them.
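Empty is relative, though: kubectl get all only queries a handful of common resource types. kube-node-lease actually holds one Lease object per node, which each kubelet updates as its heartbeat; you can list them with:

[root@node1 ~]# kubectl get leases -n kube-node-lease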
5. What is in kube-system?
[root@node1 ~]# kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/calico-kube-controllers-5788f6558-tv62d 1/1 Running 2 (21m ago) 24h
pod/calico-node-lv2mq 1/1 Running 1 (21m ago) 24h
pod/calico-node-nvlvd 1/1 Running 1 (21m ago) 24h
pod/calico-node-r9znq 1/1 Running 1 (21m ago) 24h
pod/coredns-8474476ff8-2zmkb 1/1 Running 1 (21m ago) 23h
pod/coredns-8474476ff8-bjssc 1/1 Running 1 (21m ago) 23h
pod/dns-autoscaler-5ffdc7f89d-pstjw 1/1 Running 1 (21m ago) 23h
pod/kube-apiserver-node1 1/1 Running 2 (21m ago) 24h
pod/kube-apiserver-node2 1/1 Running 2 (21m ago) 24h
pod/kube-controller-manager-node1 1/1 Running 2 (21m ago) 24h
pod/kube-controller-manager-node2 1/1 Running 2 (21m ago) 24h
pod/kube-proxy-8qfm5 1/1 Running 1 (21m ago) 24h
pod/kube-proxy-d8d7d 1/1 Running 1 (21m ago) 24h
pod/kube-proxy-vlfb2 1/1 Running 1 (21m ago) 24h
pod/kube-scheduler-node1 1/1 Running 3 (21m ago) 24h
pod/kube-scheduler-node2 1/1 Running 2 (21m ago) 24h
pod/kubernetes-dashboard-548847967d-5fk9w 1/1 Running 1 (21m ago) 23h
pod/kubernetes-metrics-scraper-6d49f96c97-thk7v 1/1 Running 1 (21m ago) 23h
pod/nginx-proxy-node3 1/1 Running 1 (21m ago) 24h
pod/nodelocaldns-5lfw2 0/1 Pending 0 23h
pod/nodelocaldns-9pk5m 1/1 Running 1 (21m ago) 23h
pod/nodelocaldns-d8m7h 0/1 Pending 0 23h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/coredns ClusterIP 10.200.0.3 <none> 53/UDP,53/TCP,9153/TCP 23h
service/dashboard-metrics-scraper ClusterIP 10.200.177.189 <none> 8000/TCP 23h
service/kubernetes-dashboard ClusterIP 10.200.30.147 <none> 443/TCP 23h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/calico-node 3 3 3 3 3 kubernetes.io/os=linux 24h
daemonset.apps/kube-proxy 3 3 3 3 3 kubernetes.io/os=linux 24h
daemonset.apps/nodelocaldns 3 3 1 3 1 kubernetes.io/os=linux 23h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/calico-kube-controllers 1/1 1 1 24h
deployment.apps/coredns 2/2 2 2 23h
deployment.apps/dns-autoscaler 1/1 1 1 23h
deployment.apps/kubernetes-dashboard 1/1 1 1 23h
deployment.apps/kubernetes-metrics-scraper 1/1 1 1 23h
NAME DESIRED CURRENT READY AGE
replicaset.apps/calico-kube-controllers-5788f6558 1 1 1 24h
replicaset.apps/coredns-8474476ff8 2 2 2 23h
replicaset.apps/dns-autoscaler-5ffdc7f89d 1 1 1 23h
replicaset.apps/kubernetes-dashboard-548847967d 1 1 1 23h
replicaset.apps/kubernetes-metrics-scraper-6d49f96c97 1 1 1 23h
[root@node1 ~]#
In this namespace we can see a lot of pods, services, daemonsets, deployments, and replicasets. We already covered pods earlier, but what do the rest of these objects stand for? Let's go through them.
6. Resource types
Pod: the smallest deployable unit in k8s
ReplicaSet: a controller that keeps a specified number of pod replicas running, selecting its pods by label
Deployment: a controller for managing stateless applications (see the sketch after this list)
StatefulSet: a controller for managing stateful applications
DaemonSet: runs a copy of a pod on every node (or on selected nodes)
Job: runs a one-off batch task to completion
CronJob: runs Jobs on a schedule, like cron
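To make the controller kinds concrete, here is a minimal Deployment manifest (a hypothetical example, not something running in the cluster above). The Deployment creates a ReplicaSet, and the ReplicaSet keeps three label-matched pods alive:

# deployment-demo.yaml - hypothetical demo, apply with: kubectl apply -f deployment-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
spec:
  replicas: 3                # the ReplicaSet it owns keeps 3 pods running
  selector:
    matchLabels:
      app: web-demo          # pods are matched by this label
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.4

After applying it, kubectl get deploy,rs,pod shows all three layers at once: the Deployment, the ReplicaSet it created (with a hash suffix, just like coredns-8474476ff8 above), and the pods.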
Everything above was observed on node1, one of the master nodes; feel free to run the same checks on the other nodes yourself.
nginx-proxy
From the kube-system output above we can see that nginx-proxy runs only on node3, so we need to switch over to node3 to inspect it. As for why it runs only on node3, we will understand by the end of this section.
[root@node3 ~]# crictl ps
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
570b3da43f846 a9f76bcccfb5f 35 minutes ago Running ingress-nginx-controller 1 a6c5c45508d8c
a11842fca1afe 6570786a0fd3b 36 minutes ago Running calico-node 1 641441cd33f53
c6b5f26c06708 7801cfc6d5c07 36 minutes ago Running kubernetes-metrics-scraper 1 526b653df8827
dae61986d67eb 296a6d5035e2d 36 minutes ago Running coredns 1 a04b5d5cf27a2
f3f2e6a0677c1 72f07539ffb58 36 minutes ago Running kubernetes-dashboard 1 6d28d27cd415b
ce6269e3270f8 fcd3512f2a7c5 36 minutes ago Running calico-kube-controllers 2 5f489f39e9c14
fc1f16997de4a 5bae806f8f123 36 minutes ago Running node-cache 1 331eac1c6a981
7dd92d5eeeebf 8f8fdd6672d48 36 minutes ago Running kube-proxy 1 69655ac71243d
1d5821872d6ac f6987c8d6ed59 36 minutes ago Running nginx-proxy 1 f5e8c2f953ca7
[root@node3 ~]# cat /etc/kubernetes/manifests/nginx-proxy.yml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-proxy
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-nginx
  annotations:
    nginx-cfg-checksum: "a9814dd8ff52d61bc33226a61d3159315ba1c9ad"
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  nodeSelector:
    kubernetes.io/os: linux
  priorityClassName: system-node-critical
  containers:
  - name: nginx-proxy
    image: docker.io/library/nginx:1.21.4
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 25m
        memory: 32M
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8081
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8081
    volumeMounts:
    - mountPath: /etc/nginx
      name: etc-nginx
      readOnly: true
  volumes:
  - name: etc-nginx
    hostPath:
      path: /etc/nginx
[root@node3 ~]#
The volumeMounts section above shows that the container mounts the host directory /etc/nginx, so let's see what is inside it:
[root@node3 ~]# cat /etc/nginx/nginx.conf
error_log stderr notice;
worker_processes 2;
worker_rlimit_nofile 130048;
worker_shutdown_timeout 10s;

events {
  multi_accept on;
  use epoll;
  worker_connections 16384;
}

stream {
  upstream kube_apiserver {
    least_conn;
    server 192.168.112.130:6443;
    server 192.168.112.131:6443;
  }

  server {
    listen 127.0.0.1:6443;
    proxy_pass kube_apiserver;
    proxy_timeout 10m;
    proxy_connect_timeout 1s;
  }
}

http {
  aio threads;
  aio_write on;
  tcp_nopush on;
  tcp_nodelay on;
  keepalive_timeout 5m;
  keepalive_requests 100;
  reset_timedout_connection on;
  server_tokens off;
  autoindex off;

  server {
    listen 8081;
    location /healthz {
      access_log off;
      return 200;
    }
    location /stub_status {
      stub_status on;
      access_log off;
    }
  }
}
[root@node3 ~]#
Let's focus on this part:
stream {
  upstream kube_apiserver {
    least_conn;
    server 192.168.112.130:6443;
    server 192.168.112.131:6443;
  }

  server {
    listen 127.0.0.1:6443;
    proxy_pass kube_apiserver;
    proxy_timeout 10m;
    proxy_connect_timeout 1s;
  }
}
Now it all makes sense: this nginx instance load-balances requests across the kube-apiservers on the master nodes. Since node1 and node2 are the masters and kube-apiserver already occupies port 6443 on them, kubespray does not start nginx-proxy there; it runs only on the non-master node3, where local components reach the apiserver through 127.0.0.1:6443.
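A quick way to verify this from node3 (a hedged check; the kubelet config path may vary with the kubespray version):

# the apiserver exposes /version to unauthenticated clients by default; -k skips cert verification
[root@node3 ~]# curl -k https://127.0.0.1:6443/version
# the kubelet on node3 should point at the local proxy rather than a master IP
[root@node3 ~]# grep 'server:' /etc/kubernetes/kubelet.conf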
Cleaning Up the Proxy
Remember the outbound proxy we added during installation? It never got removed, and we no longer need it, so let's clean it up now.
Note that this has to be done on every node:
[root@node1 ~]# rm -f /etc/systemd/system/containerd.service.d/http-proxy.conf
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart containerd
[root@node1 ~]# grep 8118 -r /etc/yum*
/etc/yum.conf:proxy=http://192.168.112.130:8118
[root@node1 ~]#
[root@node1 ~]# vim /etc/yum.conf
# delete the line that the grep above found
[root@node1 ~]#
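To double-check that containerd really lost the proxy setting, you can inspect its unit environment on each node; after the cleanup this should come back empty:

[root@node1 ~]# systemctl show containerd --property=Environment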
Testing the Cluster
1. Create an nginx DaemonSet:
For the rest of this article, head to the WeChat (VX) public account "运维家" and reply "118".
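As a preview, a minimal nginx DaemonSet for such a smoke test might look like the sketch below (hypothetical names; the article's actual manifest may differ). Because it is a DaemonSet, one pod should appear on each of the three nodes:

# nginx-ds.yaml - hypothetical smoke-test manifest
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-test
  namespace: default
spec:
  selector:
    matchLabels:
      app: nginx-test
  template:
    metadata:
      labels:
        app: nginx-test
    spec:
      containers:
      - name: nginx
        image: nginx:1.21.4
        ports:
        - containerPort: 80   # plain nginx welcome page, enough for a smoke test

Apply it with kubectl apply -f nginx-ds.yaml and check the placement with kubectl get pods -o wide.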