Common kubectl Commands for Kubernetes (still a work in progress, bear with me)
If you rely on searching the web like I do, it's a rough life: when the boss is pressing you, you just can't recall the command, and searching doesn't turn it up either. The fix is simple enough: write up your own reference, which can also be shared internally. This article is based on a cluster deployed from binaries, running version 1.22.
This is a long post and will keep being updated; it's only about half written so far.
Entries marked with "?" are ones I haven't fully understood yet and still need to dig into.
Getting to know the all-purpose kubectl
As the name suggests, it is a ctl (control) tool for the Kubernetes cluster.
[root@k8s-master1 ~]# kubectl --help
kubectl controls the Kubernetes cluster manager.
Find more information at: https://kubernetes.io/docs/reference/kubectl/overview/
Basic Commands (Beginner):
create Create a resource from a file or from stdin
expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service
run Run a particular image on the cluster
set Set specific features on objects
Basic Commands (Intermediate):
explain Get documentation for a resource
get Display one or many resources
edit Edit a resource on the server
delete Delete resources by file names, stdin, resources and names, or by resources and label selector
Deploy Commands:
#view rollout history and status, pause/resume a rollout, and roll back to an earlier revision
rollout Manage the rollout of a resource
#set a new replica count for a deployment, replica set, or replication controller
scale Set a new size for a deployment, replica set, or replication controller
autoscale Auto-scale a deployment, replica set, stateful set, or replication controller
Cluster Management Commands:
#approve or deny CertificateSigningRequest (CSR) resources
certificate Modify certificate resources.
cluster-info Display cluster information
top Display resource (CPU/memory) usage
cordon Mark node as unschedulable
uncordon Mark node as schedulable
#safely evict all pods from a node in preparation for maintenance
drain Drain node in preparation for maintenance
taint Update the taints on one or more nodes
Troubleshooting and Debugging Commands:
describe Show details of a specific resource or group of resources
logs Print the logs for a container in a pod
attach Attach to a running container
exec Execute a command in a container
port-forward Forward one or more local ports to a pod
proxy Run a proxy to the Kubernetes API server
cp Copy files and directories to and from containers
auth Inspect authorization
debug Create debugging sessions for troubleshooting workloads and nodes
Advanced Commands:
#compare the live object on the server against the version that an apply would produce
diff Diff the live version against a would-be applied version
apply Apply a configuration to a resource by file name or stdin
patch Update fields of a resource
replace Replace a resource by file name or stdin
wait Experimental: Wait for a specific condition on one or many resources
#render the manifests produced by a kustomization directory or URL
kustomize Build a kustomization target from a directory or URL.
Settings Commands:
label Update the labels on a resource
annotate Update the annotations on a resource
completion Output shell completion code for the specified shell (bash or zsh)
Other Commands:
api-resources Print the supported API resources on the server
api-versions Print the supported API versions on the server, in the form of "group/version"
config Modify kubeconfig files
#list installed kubectl plugins and help work with them
plugin Provides utilities for interacting with plugins
version Print the client and server version information
Usage:
kubectl [flags] [options]
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
Common operations
Basic commands (create, expose, run, set, explain, get, edit, delete)
Basic Commands (Beginner):
create Create a resource from a file or from stdin
expose Take a replication controller, service, deployment or pod and expose it as a new Kubernetes service
run Run a particular image on the cluster
set Set specific features on objects
Basic Commands (Intermediate):
explain Get documentation for a resource
get Display one or many resources
edit Edit a resource on the server
delete Delete resources by file names, stdin, resources and names, or by resources and label selector
create: create resources
Create resources from a YAML file or from stdin.
###Create a deployment named nginx; --image specifies the image, and --dry-run simulates the request without actually creating anything (the warning below notes it is deprecated in favor of --dry-run=client)
[root@k8s-master1 ~]# kubectl create deployment nginx --image=nginx --dry-run
W1220 04:15:16.730929 65402 helpers.go:555] --dry-run is deprecated and can be replaced with --dry-run=client.
deployment.apps/nginx created (dry run)
###Create resources from a YAML file
[root@k8s-master1 ~]# kubectl create -f coredns.yaml --dry-run
W1220 04:16:55.194647 66630 helpers.go:555] --dry-run is deprecated and can be replaced with --dry-run=client.
serviceaccount/coredns created (dry run)
clusterrole.rbac.authorization.k8s.io/system:coredns created (dry run)
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created (dry run)
configmap/coredns created (dry run)
deployment.apps/coredns created (dry run)
service/kube-dns created (dry run)
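The imperative `kubectl create deployment nginx --image=nginx` above is equivalent to applying a minimal manifest like the following sketch (labels and replica count match what the generator produces):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
```

You can generate this yourself with `kubectl create deployment nginx --image=nginx --dry-run=client -o yaml`.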
expose: expose a resource by creating a Service (if one with that name already exists, nothing is created)
###Expose the deployment named nginx as a NodePort service so it can be reached from outside the cluster
[root@k8s-master1 ~]# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
##--port is the Service's own port; --target-port maps to the container port inside the pod
service/nginx exposed
[root@k8s-master1 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 2d22h
nginx NodePort 10.0.0.162 <none> 80:31178/TCP 7s
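The Service created by `kubectl expose ... --type=NodePort` corresponds roughly to this manifest (31178 is the port the cluster happened to assign in the output above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80          # the Service port (--port)
    targetPort: 80    # the container port inside the pod (--target-port)
    nodePort: 31178   # auto-assigned from 30000-32767 unless set explicitly
```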
run
###Run a pod named nginx, specifying the image and the image pull policy
[root@k8s-master1 ~]# kubectl run nginx --image=10.245.4.88:8888/base-images/nginx --image-pull-policy="IfNotPresent"
pod/nginx created
[root@k8s-master1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 13s
tomcat-596db6d496-4q2wt 1/1 Running 0 27h
tomcat-596db6d496-cn5h7 1/1 Running 0 27h
tomcat-596db6d496-gf668 1/1 Running 0 27h
tomcat-596db6d496-n7gh8 1/1 Running 0 27h
###Run a pod named nginx-pull with the same image and pull policy, using a secret to pull the image from a private Harbor registry (note: for a bare pod there is no template; imagePullSecrets is a list directly under spec)
[root@k8s-master1 ~]# kubectl run nginx-pull --image=10.245.4.88:8888/base-images/nginx --image-pull-policy="IfNotPresent" --overrides='{"apiVersion":"v1","spec":{"imagePullSecrets":[{"name":"harbor-login"}]}}'
pod/nginx-pull created
[root@k8s-master1 ~]# kubectl get pod -o wide | grep nginx-pull
nginx-pull 1/1 Running 0 20s 10.244.224.15 k8s-master2 <none> <none>
On k8s-master2:
[root@k8s-master2 docker]# docker images | grep nginx
[root@k8s-master2 docker]# docker images | grep nginx ##the image has now been pulled
10.245.4.88:8888/base-images/nginx latest 605c77e624dd 11 months ago 141MB
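Written as a manifest, a pod pulling from a private registry looks like this sketch (it assumes a docker-registry secret named `harbor-login` already exists in the namespace):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pull
spec:
  imagePullSecrets:
  - name: harbor-login        # secret created beforehand with: kubectl create secret docker-registry ...
  containers:
  - name: nginx-pull
    image: 10.245.4.88:8888/base-images/nginx
    imagePullPolicy: IfNotPresent
```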
set
###Limit the nginx container in the deployment to 500m CPU and 512Mi memory
[root@k8s-master1 ~]# kubectl set resources deployment nginx --limits=cpu=500m,memory=512Mi
deployment.apps/nginx resource requirements updated
[root@k8s-master1 ~]# kubectl get deployment nginx -o yaml | grep -A 3 limits
limits:
cpu: 500m
memory: 512Mi
terminationMessagePath: /dev/termination-log
###Set both requests and limits on the nginx deployment
[root@k8s-master1 ~]# kubectl set resources deployment nginx --limits=cpu=500m,memory=512Mi --requests=cpu=100m,memory=256Mi
deployment.apps/nginx resource requirements updated
[root@k8s-master1 ~]# kubectl get deployment nginx -o yaml | grep -A 6 resources
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 100m
memory: 256Mi
###Clear requests and limits by setting them to 0
[root@k8s-master1 ~]# kubectl set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0
deployment.apps/nginx resource requirements updated
[root@k8s-master1 ~]# kubectl get deployment nginx -o yaml | grep -A 6 resources
resources:
limits:
cpu: "0"
memory: "0"
requests:
cpu: "0"
memory: "0"
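In a manifest, the same requests and limits live under each container's `resources` field, for example:

```yaml
# fragment of a Deployment pod template
containers:
- name: nginx
  image: nginx
  resources:
    requests:          # what the scheduler reserves for the container
      cpu: 100m
      memory: 256Mi
    limits:            # hard caps enforced at runtime
      cpu: 500m
      memory: 512Mi
```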
explain
###Show documentation for the deployment resource
[root@k8s-master1 ~]# kubectl explain deployment
KIND: Deployment
VERSION: apps/v1
DESCRIPTION:
Deployment enables declarative updates for Pods and ReplicaSets.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <Object>
Standard object's metadata. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec <Object>
Specification of the desired behavior of the Deployment.
status <Object>
Most recently observed status of the Deployment.
get
##List the services in the current namespace
[root@k8s-master1 ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 2d18h
nginx NodePort 10.0.0.252 <none> 80:30910/TCP 29h
tomcat NodePort 10.0.0.160 <none> 8080:30322/TCP 25h
###List the services in another namespace
[root@k8s-master1 ~]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.0.0.2 <none> 53/UDP,53/TCP,9153/TCP 2d16h
###List the services in all namespaces
[root@k8s-master1 ~]# kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 2d18h
kube-system kube-dns ClusterIP 10.0.0.2 <none> 53/UDP,53/TCP,9153/TCP 2d16h
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.0.0.225 <none> 8000/TCP 2d16h
kubernetes-dashboard kubernetes-dashboard NodePort 10.0.0.167 <none> 443:30001/TCP 2d16h
###List the nodes
[root@k8s-master1 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready <none> 2d18h v1.22.4
k8s-master2 Ready <none> 44h v1.22.4
k8s-node1 Ready <none> 2d17h v1.22.4
k8s-node2 Ready <none> 2d17h v1.22.4
###Show the labels on the nodes
[root@k8s-master1 ~]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-master1 Ready <none> 2d18h v1.22.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master1,kubernetes.io/os=linux
k8s-master2 Ready <none> 44h v1.22.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master2,kubernetes.io/os=linux
k8s-node1 Ready <none> 2d17h v1.22.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node1,kubernetes.io/os=linux
k8s-node2 Ready <none> 2d17h v1.22.4 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-node2,kubernetes.io/os=linux
###List the pods deployed in the current namespace
[root@k8s-master1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-5896cbffc6-62lvh 1/1 Running 0 29h
nginx-5896cbffc6-slt98 1/1 Running 0 25h
tomcat-596db6d496-4q2wt 1/1 Running 0 25h
tomcat-596db6d496-cn5h7 1/1 Running 0 25h
tomcat-596db6d496-gf668 1/1 Running 0 25h
tomcat-596db6d496-n7gh8 1/1 Running 0 25h
###Show which node each pod in the current namespace is running on
[root@k8s-master1 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-5896cbffc6-62lvh 1/1 Running 0 29h 10.244.224.3 k8s-master2 <none> <none>
nginx-5896cbffc6-slt98 1/1 Running 0 25h 10.244.36.70 k8s-node1 <none> <none>
tomcat-596db6d496-4q2wt 1/1 Running 0 25h 10.244.169.141 k8s-node2 <none> <none>
tomcat-596db6d496-cn5h7 1/1 Running 0 25h 10.244.169.139 k8s-node2 <none> <none>
tomcat-596db6d496-gf668 1/1 Running 0 25h 10.244.159.136 k8s-master1 <none> <none>
tomcat-596db6d496-n7gh8 1/1 Running 0 25h 10.244.169.140 k8s-node2 <none> <none>
###List the pods deployed in another namespace
[root@k8s-master1 ~]# kubectl get pod -o wide -n kube-system
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-8db96c76-w4njz 1/1 Running 0 41h 10.244.224.1 k8s-master2 <none> <none>
calico-node-lxfbc 1/1 Running 0 41h 10.245.4.4 k8s-node2 <none> <none>
calico-node-v2fbv 1/1 Running 0 41h 10.245.4.2 k8s-master2 <none> <none>
calico-node-vd6v2 1/1 Running 0 41h 10.245.4.1 k8s-master1 <none> <none>
calico-node-xvqmt 1/1 Running 0 41h 10.245.4.3 k8s-node1 <none> <none>
coredns-7b9f9f5dfd-6fhwh 1/1 Running 1 (44h ago) 2d16h 10.244.169.133 k8s-node2 <none> <none>
###List the pods in all namespaces
[root@k8s-master1 ~]# kubectl get pod -o wide -A
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx-5896cbffc6-62lvh 1/1 Running 0 29h 10.244.224.3 k8s-master2 <none> <none>
default nginx-5896cbffc6-slt98 1/1 Running 0 25h 10.244.36.70 k8s-node1 <none> <none>
default tomcat-596db6d496-4q2wt 1/1 Running 0 25h 10.244.169.141 k8s-node2 <none> <none>
default tomcat-596db6d496-cn5h7 1/1 Running 0 25h 10.244.169.139 k8s-node2 <none> <none>
default tomcat-596db6d496-gf668 1/1 Running 0 25h 10.244.159.136 k8s-master1 <none> <none>
default tomcat-596db6d496-n7gh8 1/1 Running 0 25h 10.244.169.140 k8s-node2 <none> <none>
kube-system calico-kube-controllers-8db96c76-w4njz 1/1 Running 0 41h 10.244.224.1 k8s-master2 <none> <none>
kube-system calico-node-lxfbc 1/1 Running 0 41h 10.245.4.4 k8s-node2 <none> <none>
kube-system calico-node-v2fbv 1/1 Running 0 41h 10.245.4.2 k8s-master2 <none> <none>
kube-system calico-node-vd6v2 1/1 Running 0 41h 10.245.4.1 k8s-master1 <none> <none>
kube-system calico-node-xvqmt 1/1 Running 0 41h 10.245.4.3 k8s-node1 <none> <none>
kube-system coredns-7b9f9f5dfd-6fhwh 1/1 Running 1 (44h ago) 2d16h 10.244.169.133 k8s-node2 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-5594697f48-kw9zt 1/1 Running 1 (44h ago) 2d16h 10.244.159.130 k8s-master1 <none> <none>
kubernetes-dashboard kubernetes-dashboard-686cc7c688-5tz4m 1/1 Running 1 (44h ago) 2d16h 10.244.36.67 k8s-node1 <none> <none>
###List all resources in the current namespace
[root@k8s-master1 ~]# kubectl get all
NAME READY STATUS RESTARTS AGE
pod/nginx-5896cbffc6-62lvh 1/1 Running 0 29h
pod/nginx-5896cbffc6-slt98 1/1 Running 0 25h
pod/tomcat-596db6d496-4q2wt 1/1 Running 0 25h
pod/tomcat-596db6d496-cn5h7 1/1 Running 0 25h
pod/tomcat-596db6d496-gf668 1/1 Running 0 25h
pod/tomcat-596db6d496-n7gh8 1/1 Running 0 25h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 2d18h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/nginx 2/2 2 2 29h
deployment.apps/tomcat 4/4 4 4 25h
NAME DESIRED CURRENT READY AGE
replicaset.apps/nginx-5896cbffc6 2 2 2 29h
replicaset.apps/nginx-784b7db4b9 0 0 0 29h
replicaset.apps/tomcat-596db6d496 4 4 4 25h
###List all resources in a given namespace
[root@k8s-master1 ~]# kubectl get all -n kube-system
NAME READY STATUS RESTARTS AGE
pod/calico-kube-controllers-8db96c76-w4njz 1/1 Running 0 41h
pod/calico-node-lxfbc 1/1 Running 0 41h
pod/calico-node-v2fbv 1/1 Running 0 41h
pod/calico-node-vd6v2 1/1 Running 0 41h
pod/calico-node-xvqmt 1/1 Running 0 41h
pod/coredns-7b9f9f5dfd-6fhwh 1/1 Running 1 (43h ago) 2d16h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/kube-dns ClusterIP 10.0.0.2 <none> 53/UDP,53/TCP,9153/TCP 2d16h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/calico-node 4 4 4 4 4 kubernetes.io/os=linux 2d17h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/calico-kube-controllers 1/1 1 1 2d17h
deployment.apps/coredns 1/1 1 1 2d16h
NAME DESIRED CURRENT READY AGE
replicaset.apps/calico-kube-controllers-644d967594 0 0 0 42h
replicaset.apps/calico-kube-controllers-8db96c76 1 1 1 2d17h
replicaset.apps/coredns-7b9f9f5dfd 1 1 1 2d16h
###List all resources in all namespaces
[root@k8s-master1 ~]# kubectl get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod/nginx-5896cbffc6-62lvh 1/1 Running 0 29h
default pod/nginx-5896cbffc6-slt98 1/1 Running 0 25h
default pod/tomcat-596db6d496-4q2wt 1/1 Running 0 25h
default pod/tomcat-596db6d496-cn5h7 1/1 Running 0 25h
default pod/tomcat-596db6d496-gf668 1/1 Running 0 25h
default pod/tomcat-596db6d496-n7gh8 1/1 Running 0 25h
kube-system pod/calico-kube-controllers-8db96c76-w4njz 1/1 Running 0 41h
kube-system pod/calico-node-lxfbc 1/1 Running 0 41h
kube-system pod/calico-node-v2fbv 1/1 Running 0 41h
kube-system pod/calico-node-vd6v2 1/1 Running 0 41h
kube-system pod/calico-node-xvqmt 1/1 Running 0 41h
kube-system pod/coredns-7b9f9f5dfd-6fhwh 1/1 Running 1 (43h ago) 2d16h
kubernetes-dashboard pod/dashboard-metrics-scraper-5594697f48-kw9zt 1/1 Running 1 (43h ago) 2d16h
kubernetes-dashboard pod/kubernetes-dashboard-686cc7c688-5tz4m 1/1 Running 1 (43h ago) 2d16h
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 2d18h
kube-system service/kube-dns ClusterIP 10.0.0.2 <none> 53/UDP,53/TCP,9153/TCP 2d16h
kubernetes-dashboard service/dashboard-metrics-scraper ClusterIP 10.0.0.225 <none> 8000/TCP 2d16h
kubernetes-dashboard service/kubernetes-dashboard NodePort 10.0.0.167 <none> 443:30001/TCP 2d16h
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 4 4 4 4 4 kubernetes.io/os=linux 2d17h
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default deployment.apps/nginx 2/2 2 2 29h
default deployment.apps/tomcat 4/4 4 4 25h
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 2d17h
kube-system deployment.apps/coredns 1/1 1 1 2d16h
kubernetes-dashboard deployment.apps/dashboard-metrics-scraper 1/1 1 1 2d16h
kubernetes-dashboard deployment.apps/kubernetes-dashboard 1/1 1 1 2d16h
NAMESPACE NAME DESIRED CURRENT READY AGE
default replicaset.apps/nginx-5896cbffc6 2 2 2 29h
default replicaset.apps/nginx-784b7db4b9 0 0 0 29h
default replicaset.apps/tomcat-596db6d496 4 4 4 25h
kube-system replicaset.apps/calico-kube-controllers-644d967594 0 0 0 42h
kube-system replicaset.apps/calico-kube-controllers-8db96c76 1 1 1 2d17h
kube-system replicaset.apps/coredns-7b9f9f5dfd 1 1 1 2d16h
kubernetes-dashboard replicaset.apps/dashboard-metrics-scraper-5594697f48 1 1 1 2d16h
kubernetes-dashboard replicaset.apps/kubernetes-dashboard-686cc7c688 1 1 1 2d16h
###List the replicasets in the current namespace
[root@k8s-master1 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-5896cbffc6 2 2 2 29h
nginx-784b7db4b9 0 0 0 29h
tomcat-596db6d496 4 4 4 25h
edit
###Edit the deployment named nginx in place
[root@k8s-master1 ~]# kubectl edit deployment nginx
deployment.apps/nginx edited
##Edits the live configuration of the resource; usage is the same as vi/vim
delete
###Delete the resources defined in a YAML file
[root@k8s-master1 ~]# kubectl delete -f deployment-nginx.yaml
deployment.apps "nginx" deleted
###Delete a pod named nginx
[root@k8s-master1 ~]# kubectl delete pod nginx
pod "nginx" deleted
###Delete a deployment named nginx
[root@k8s-master1 ~]# kubectl delete deployment nginx
deployment.apps "nginx" deleted
Deploy commands (rollout, scale, autoscale)
Deploy Commands:
#view rollout history and status, pause/resume a rollout, and roll back to an earlier revision
rollout Manage the rollout of a resource
#set a new replica count for a deployment, replica set, or replication controller
scale Set a new size for a deployment, replica set, or replication controller
autoscale Auto-scale a deployment, replica set, stateful set, or replication controller
rollout
###View the rollout history of the nginx deployment
[root@k8s-master1 ~]# kubectl rollout history deployment nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
1 <none>
2 <none>
###Show the details of revision 2 in the nginx deployment's history
[root@k8s-master1 ~]# kubectl rollout history deployment nginx --revision=2
deployment.apps/nginx with revision #2
Pod Template:
Labels: app=nginx
pod-template-hash=89c9b4b66
Containers:
nginx:
Image: 10.245.4.88:8888/base-images/tomcat
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
###Roll the nginx deployment back to revision 2
[root@k8s-master1 ~]# kubectl rollout history deployment nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
2 <none>
3 <none>
[root@k8s-master1 ~]# kubectl rollout undo deployment nginx --to-revision=2
deployment.apps/nginx rolled back
[root@k8s-master1 ~]# kubectl rollout history deployment nginx
deployment.apps/nginx
REVISION CHANGE-CAUSE
3 <none>
4 <none>
###Check the rollout status of the nginx deployment
[root@k8s-master1 ~]# kubectl rollout status deployment nginx
deployment "nginx" successfully rolled out
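CHANGE-CAUSE shows `<none>` in the history output above; a cause can be recorded for each revision by setting the `kubernetes.io/change-cause` annotation on the deployment when you change it, e.g.:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  annotations:
    kubernetes.io/change-cause: "update image to nginx:1.21"   # shown in rollout history
```

The same can be done imperatively with `kubectl annotate deployment nginx kubernetes.io/change-cause="update image to nginx:1.21"`.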
###Pause the nginx deployment
####While a deployment is paused, updates to it do not trigger a new rollout
[root@k8s-master1 ~]# kubectl rollout pause deployment nginx
deployment.apps/nginx paused
[root@k8s-master1 ~]# kubectl describe deployment nginx | grep -A 6 "Status"
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing Unknown DeploymentPaused
OldReplicaSets: <none>
NewReplicaSet: nginx-89c9b4b66 (4/4 replicas created)
Events:
###Resume the nginx deployment
[root@k8s-master1 ~]# kubectl describe deployment nginx | grep -A 6 "Status"
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing Unknown DeploymentPaused
OldReplicaSets: <none>
NewReplicaSet: nginx-89c9b4b66 (4/4 replicas created)
Events:
[root@k8s-master1 ~]# kubectl rollout resume deployment nginx
deployment.apps/nginx resumed
[root@k8s-master1 ~]# kubectl describe deployment nginx | grep -A 6 "Status"
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-89c9b4b66 (4/4 replicas created)
Events:
scale
###Scale the nginx deployment to 6 replicas
[root@k8s-master1 ~]# kubectl get po | grep nginx
nginx-89c9b4b66-5w4t6 1/1 Running 0 9m3s
nginx-89c9b4b66-pddbd 1/1 Running 0 9m3s
nginx-89c9b4b66-rwfnv 1/1 Running 0 9m1s
nginx-89c9b4b66-x5r9h 1/1 Running 0 9m1s
[root@k8s-master1 ~]# kubectl scale deployment/nginx --replicas=6
deployment.apps/nginx scaled
[root@k8s-master1 ~]# kubectl get po | grep nginx
nginx-89c9b4b66-5w4t6 1/1 Running 0 9m34s
nginx-89c9b4b66-pddbd 1/1 Running 0 9m34s
nginx-89c9b4b66-rw729 1/1 Running 0 4s
nginx-89c9b4b66-rwfnv 1/1 Running 0 9m32s
nginx-89c9b4b66-tnwjs 0/1 ContainerCreating 0 4s
nginx-89c9b4b66-x5r9h 1/1 Running 0 9m32s
###Scale the resources created from a YAML file to 3 replicas
[root@k8s-master1 ~]# kubectl get po | grep nginx
nginx-89c9b4b66-5w4t6 1/1 Running 0 9m34s
nginx-89c9b4b66-pddbd 1/1 Running 0 9m34s
nginx-89c9b4b66-rw729 1/1 Running 0 4s
nginx-89c9b4b66-rwfnv 1/1 Running 0 9m32s
nginx-89c9b4b66-tnwjs 1/1 Running 0 4s
nginx-89c9b4b66-x5r9h 1/1 Running 0 9m32s
[root@k8s-master1 ~]# kubectl scale --replicas=3 -f deployment-nginx.yaml
deployment.apps/nginx scaled
[root@k8s-master1 ~]# kubectl get po | grep nginx
nginx-89c9b4b66-5w4t6 1/1 Terminating 0 10m
nginx-89c9b4b66-pddbd 1/1 Terminating 0 10m
nginx-89c9b4b66-rw729 1/1 Running 0 53s
nginx-89c9b4b66-rwfnv 1/1 Terminating 0 10m
nginx-89c9b4b66-tnwjs 1/1 Running 0 53s
nginx-89c9b4b66-x5r9h 1/1 Running 0 10m
[root@k8s-master1 ~]# kubectl get po | grep nginx
nginx-89c9b4b66-rw729 1/1 Running 0 59s
nginx-89c9b4b66-tnwjs 1/1 Running 0 59s
nginx-89c9b4b66-x5r9h 1/1 Running 0 10m
###Scale multiple deployments at once
[root@k8s-master1 ~]# kubectl scale deployment/nginx deployment/tomcat --replicas=6
deployment.apps/nginx scaled
deployment.apps/tomcat scaled
[root@k8s-master1 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-89c9b4b66-rw729 1/1 Running 0 119s
nginx-89c9b4b66-thrp2 1/1 Running 0 9s
nginx-89c9b4b66-tnwjs 1/1 Running 0 119s
nginx-89c9b4b66-wkfqn 1/1 Running 0 9s
nginx-89c9b4b66-x5r9h 1/1 Running 0 11m
nginx-89c9b4b66-xr8ml 1/1 Running 0 9s
tomcat-596db6d496-4q2wt 1/1 Running 0 29h
tomcat-596db6d496-cn5h7 1/1 Running 0 29h
tomcat-596db6d496-gf668 1/1 Running 1 (86m ago) 29h
tomcat-596db6d496-n7gh8 1/1 Running 0 29h
tomcat-596db6d496-s9zb7 1/1 Running 0 9s
tomcat-596db6d496-tnjg2 1/1 Running 0 9s
autoscale
###Autoscale the nginx deployment: target 80% CPU utilization, between 2 and 10 pods
[root@k8s-master1 ~]# kubectl autoscale deployment nginx --min=2 --max=10 --cpu-percent=80
horizontalpodautoscaler.autoscaling/nginx autoscaled
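`kubectl autoscale` creates a HorizontalPodAutoscaler; the equivalent autoscaling/v1 manifest (the version this command generates) is roughly:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```

Note that the HPA needs the Metrics API (metrics-server) to read CPU usage, the same dependency as `kubectl top` below.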
Cluster management commands (certificate, cluster-info, top, cordon, uncordon, drain, taint)
Cluster Management Commands:
#approve or deny CertificateSigningRequest (CSR) resources
certificate Modify certificate resources.
cluster-info Display cluster information
top Display resource (CPU/memory) usage
cordon Mark node as unschedulable
uncordon Mark node as schedulable
#safely evict all pods from a node in preparation for maintenance
drain Drain node in preparation for maintenance
taint Update the taints on one or more nodes
certificate
###Approve a pending CSR so the node can join the cluster
[root@k8s-master1 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
node-csr-L0gO2mlT81Nvva24M1TkHCm4V-Etws2WssxUeXzqJGU 8s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap <none> Pending
[root@k8s-master1 ~]# kubectl certificate approve node-csr-L0gO2mlT81Nvva24M1TkHCm4V-Etws2WssxUeXzqJGU
certificatesigningrequest.certificates.k8s.io/node-csr-L0gO2mlT81Nvva24M1TkHCm4V-Etws2WssxUeXzqJGU approved
[root@k8s-master1 ~]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR REQUESTEDDURATION CONDITION
node-csr-L0gO2mlT81Nvva24M1TkHCm4V-Etws2WssxUeXzqJGU 76s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap <none> Approved,Issued
cluster-info
###Show information about the current cluster
[root@k8s-master1 ~]# kubectl cluster-info
Kubernetes control plane is running at https://10.245.4.1:6443
CoreDNS is running at https://10.245.4.1:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
###Print the cluster's debugging and diagnostic information
[root@k8s-master1 ~]# kubectl cluster-info dump
(the dump output is very long and omitted here)
###Write the cluster's debugging and diagnostic information to a directory
[root@k8s-master1 ~]# cd /
[root@k8s-master1 /]# ls
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
[root@k8s-master1 /]# mkdir k8s-dug
[root@k8s-master1 /]# cd
[root@k8s-master1 ~]# kubectl cluster-info dump --output-directory=/k8s-dug
Cluster info dumped to /k8s-dug
[root@k8s-master1 ~]# ll /k8s-dug/
total 40
drwxr-xr-x. 14 root root 4096 Dec 20 09:20 default
drwxr-xr-x. 8 root root 4096 Dec 20 09:20 kube-system
-rw-r--r--. 1 root root 31937 Dec 20 09:20 nodes.json
top
###Show resource usage of the nodes
[root@k8s-master1 ~]# kubectl top node
error: Metrics API not available ###requires the Metrics API (metrics-server) to be installed
###Show resource usage of the pods
[root@k8s-master1 ~]# kubectl top pod
error: Metrics API not available ###requires the Metrics API (metrics-server) to be installed
cordon
###Mark k8s-node1 as unschedulable
[root@k8s-master1 ~]# kubectl cordon k8s-node1
node/k8s-node1 already cordoned
[root@k8s-master1 ~]# kubectl describe node k8s-node1 | grep "Taints"
Taints: node.kubernetes.io/unschedulable:NoSchedule
uncordon
###Make k8s-node1 schedulable again
[root@k8s-master1 ~]# kubectl uncordon k8s-node1
node/k8s-node1 uncordoned
[root@k8s-master1 ~]# kubectl describe node k8s-node1 | grep "Taints"
Taints: <none>
drain
###Evict all pods from k8s-node1
[root@k8s-master1 ~]# kubectl get pod -o wide | grep node1
nginx-89c9b4b66-thrp2 1/1 Running 0 25m 10.244.36.86 k8s-node1 <none> <none>
nginx-89c9b4b66-tnwjs 1/1 Running 0 27m 10.244.36.85 k8s-node1 <none> <none>
nginx-89c9b4b66-wkfqn 1/1 Running 0 25m 10.244.36.87 k8s-node1 <none> <none>
nginx-89c9b4b66-xr8ml 1/1 Running 0 25m 10.244.36.88 k8s-node1 <none> <none>
tomcat-596db6d496-s9zb7 1/1 Running 0 25m 10.244.36.89 k8s-node1 <none> <none>
[root@k8s-master1 ~]# kubectl drain k8s-node1
node/k8s-node1 already cordoned
DEPRECATED WARNING: Aborting the drain command in a list of nodes will be deprecated in v1.23.
The new behavior will make the drain command go through all nodes even if one or more nodes failed during the drain.
For now, users can try such experience via: --ignore-errors
error: unable to drain node "k8s-node1", aborting command...
##The error says the node has DaemonSet-managed pods and pods with local storage; the --ignore-daemonsets and --delete-emptydir-data flags are needed
There are pending nodes to be drained:
k8s-node1
cannot delete Pods with local storage (use --delete-emptydir-data to override): kubernetes-dashboard/kubernetes-dashboard-686cc7c688-5tz4m
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-system/calico-node-xvqmt
[root@k8s-master1 ~]# kubectl drain k8s-node1 --ignore-daemonsets --delete-emptydir-data ##the drain now succeeds
node/k8s-node1 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-xvqmt
evicting pod kubernetes-dashboard/kubernetes-dashboard-686cc7c688-5tz4m
evicting pod default/nginx-89c9b4b66-thrp2
evicting pod default/nginx-89c9b4b66-tnwjs
evicting pod default/nginx-89c9b4b66-wkfqn
evicting pod default/nginx-89c9b4b66-xr8ml
evicting pod default/tomcat-596db6d496-s9zb7
pod/kubernetes-dashboard-686cc7c688-5tz4m evicted
pod/nginx-89c9b4b66-tnwjs evicted
pod/nginx-89c9b4b66-wkfqn evicted
pod/tomcat-596db6d496-s9zb7 evicted
pod/nginx-89c9b4b66-xr8ml evicted
pod/nginx-89c9b4b66-thrp2 evicted
node/k8s-node1 evicted
[root@k8s-master1 ~]# kubectl get pod -o wide | grep node1
[root@k8s-master1 ~]#
taint
- The taint effect supports three values:
- NoSchedule: Kubernetes will not schedule pods onto a node with this taint
- PreferNoSchedule: Kubernetes will try to avoid scheduling pods onto a node with this taint
- NoExecute: Kubernetes will not schedule pods onto the node, and will also evict pods already running on it
###Add a taint to node k8s-node1
[root@k8s-master1 ~]# kubectl taint node k8s-node1 key=smoke:NoSchedule
node/k8s-node1 tainted
[root@k8s-master1 ~]# kubectl describe node k8s-node1 | grep "Taint"
Taints: key=smoke:NoSchedule
###Remove the taint from node k8s-node1
[root@k8s-master1 ~]# kubectl taint node k8s-node1 key:NoSchedule-
node/k8s-node1 untainted
[root@k8s-master1 ~]# kubectl describe node k8s-node1 | grep "Taint"
Taints: <none>
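Taints only keep pods off a node; a pod can opt back in with a matching toleration in its spec. A sketch tolerating the `key=smoke:NoSchedule` taint used above:

```yaml
# fragment of a pod spec
tolerations:
- key: "key"
  operator: "Equal"
  value: "smoke"
  effect: "NoSchedule"
```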
Troubleshooting and debugging commands (describe, logs, attach, exec, port-forward, proxy, cp, auth, debug)
Troubleshooting and Debugging Commands:
describe Show details of a specific resource or group of resources
logs Print the logs for a container in a pod
attach Attach to a running container
exec Execute a command in a container
port-forward Forward one or more local ports to a pod
proxy Run a proxy to the Kubernetes API server
cp Copy files and directories to and from containers
auth Inspect authorization
debug Create debugging sessions for troubleshooting workloads and nodes
describe
###Find out why a pod fails to start
[root@k8s-master1 ~]# kubectl describe pod nginx-5896cbffc6-mgrxd
Name: nginx-5896cbffc6-mgrxd
Namespace: default
Priority: 0
Node: k8s-master1/10.245.4.1
Start Time: Tue, 20 Dec 2022 07:36:59 -0500
Labels: app=nginx
pod-template-hash=5896cbffc6
Annotations: cni.projectcalico.org/podIP: 10.244.159.138/32
cni.projectcalico.org/podIPs: 10.244.159.138/32
Status: Pending
IP: 10.244.159.138
IPs:
IP: 10.244.159.138
Controlled By: ReplicaSet/nginx-5896cbffc6
Containers:
nginx:
Container ID:
Image: 10.245.4.88:8888/base-images/nginx
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8mx4k (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-8mx4k:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 54s default-scheduler Successfully assigned default/nginx-5896cbffc6-mgrxd to k8s-master1
Normal BackOff 38s kubelet Back-off pulling image "10.245.4.88:8888/base-images/nginx"
Warning Failed 38s kubelet Error: ImagePullBackOff
Normal Pulling 26s (x2 over 53s) kubelet Pulling image "10.245.4.88:8888/base-images/nginx"
Warning Failed 11s (x2 over 38s) kubelet Failed to pull image "10.245.4.88:8888/base-images/nginx": rpc error: code = Unknown desc = Error response from daemon: Get http://10.245.4.88:8888/v2/base-images/nginx/manifests/latest: Get http://10.245.4.44:10011/service/token?account=admin&scope=repository%3Abase-images%2Fnginx%3Apull&service=harbor-registry: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Warning Failed 11s (x2 over 38s) kubelet Error: ErrImagePull
### View detailed information about a node
[root@k8s-master1 ~]# kubectl describe node k8s-master2
Name: k8s-master2
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=k8s-master2
kubernetes.io/os=linux
Annotations: node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 10.245.4.2/21
projectcalico.org/IPv4IPIPTunnelAddr: 10.244.224.0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 18 Dec 2022 08:35:20 -0500
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: k8s-master2
AcquireTime: <unset>
RenewTime: Tue, 20 Dec 2022 07:44:35 -0500
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Sun, 18 Dec 2022 18:48:54 -0500 Sun, 18 Dec 2022 18:48:54 -0500 CalicoIsUp Calico is running on this node
MemoryPressure False Tue, 20 Dec 2022 07:44:27 -0500 Sun, 18 Dec 2022 16:35:19 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Tue, 20 Dec 2022 07:44:27 -0500 Sun, 18 Dec 2022 16:35:19 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Tue, 20 Dec 2022 07:44:27 -0500 Sun, 18 Dec 2022 16:35:19 -0500 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Tue, 20 Dec 2022 07:44:27 -0500 Sun, 18 Dec 2022 18:47:59 -0500 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 10.245.4.2
Hostname: k8s-master2
Capacity:
cpu: 2
ephemeral-storage: 17394Mi
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1865308Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 16415037823
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 1762908Ki
pods: 110
System Info:
Machine ID: def3e463a2664ef09ae0446711f8f522
System UUID: 70494D56-287D-771E-490E-FB8C3B53C380
Boot ID: 0b0d8934-83f4-4f02-8da0-a4d82f4b90f5
Kernel Version: 3.10.0-862.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.9
Kubelet Version: v1.22.4
Kube-Proxy Version: v1.22.4
PodCIDR: 10.244.3.0/24
PodCIDRs: 10.244.3.0/24
Non-terminated Pods: (4 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
default nginx 0 (0%) 0 (0%) 0 (0%) 0 (0%) 48s
default nginx-5896cbffc6-j6wtc 0 (0%) 0 (0%) 0 (0%) 0 (0%) 7m43s
kube-system calico-kube-controllers-8db96c76-w4njz 0 (0%) 0 (0%) 0 (0%) 0 (0%) 44h
kube-system calico-node-v2fbv 250m (12%) 0 (0%) 0 (0%) 0 (0%) 44h
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 250m (12%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events: <none>
logs
### Print the logs of a pod that has a single container
[root@k8s-master1 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-89c9b4b66-25jpk 1/1 Running 0 20m
[root@k8s-master1 ~]# kubectl logs nginx-89c9b4b66-25jpk
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
****** omitted ****** (to demonstrate a version rollback later, this pod actually runs a Tomcat image, so the log output here is Tomcat's)
20-Dec-2022 14:36:26.257 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
### Stream logs continuously (follow)
[root@k8s-master1 ~]# kubectl logs nginx-89c9b4b66-25jpk -f
### When a pod has multiple containers, use -c to specify the container name
[root@k8s-master1 ~]# kubectl logs nginx-89c9b4b66-25jpk -c nginx
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-
****** omitted ****** (as above, this pod actually runs a Tomcat image, so the log output here is Tomcat's)
20-Dec-2022 14:36:26.257 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
20-Dec-2022 14:36:26.411 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [1627] milliseconds
### Show only the last 3 log lines
[root@k8s-master1 ~]# kubectl logs nginx-89c9b4b66-25jpk --tail=3
20-Dec-2022 14:36:26.066 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/10.0.14]
20-Dec-2022 14:36:26.257 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
20-Dec-2022 14:36:26.411 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [1627] milliseconds
### Include timestamps in the output
[root@k8s-master1 ~]# kubectl logs nginx-89c9b4b66-25jpk --tail=3 --timestamps
2022-12-20T14:36:26.067686737Z 20-Dec-2022 14:36:26.066 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/10.0.14]
2022-12-20T14:36:26.257795883Z 20-Dec-2022 14:36:26.257 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
2022-12-20T14:36:26.411997507Z 20-Dec-2022 14:36:26.411 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [1627] milliseconds
### Show logs from the last hour
[root@k8s-master1 ~]# kubectl logs nginx-89c9b4b66-25jpk --tail=3 --timestamps --since=1h
2022-12-20T14:36:26.067686737Z 20-Dec-2022 14:36:26.066 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/10.0.14]
2022-12-20T14:36:26.257795883Z 20-Dec-2022 14:36:26.257 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
2022-12-20T14:36:26.411997507Z 20-Dec-2022 14:36:26.411 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [1627] milliseconds
### Show logs from the last 2 minutes
[root@k8s-master1 ~]# kubectl logs nginx-89c9b4b66-25jpk --tail=3 --timestamps --since=2m
### Show logs from the last 5 seconds
[root@k8s-master1 ~]# kubectl logs nginx-89c9b4b66-25jpk --tail=3 --timestamps --since=5s
### Show logs emitted after a specific timestamp
[root@k8s-master1 ~]# kubectl logs nginx-89c9b4b66-25jpk --timestamps --since-time=2022-12-20T14:36:26.257795883Z
2022-12-20T14:36:26.067664760Z 20-Dec-2022 14:36:26.051 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
2022-12-20T14:36:26.067686737Z 20-Dec-2022 14:36:26.066 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/10.0.14]
2022-12-20T14:36:26.257795883Z 20-Dec-2022 14:36:26.257 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
2022-12-20T14:36:26.411997507Z 20-Dec-2022 14:36:26.411 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [1627] milliseconds
attach
The difference from exec: exec can start a new process such as /bin/bash inside the container, whereas attach connects you directly to the container's main process.
(To be added)
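Until this section is filled in, a minimal sketch (the pod and container names reuse the examples above; interactive attach also requires the container to have been started with stdin/tty enabled):

```shell
### Attach to the main process of the pod's first container (view its stdout/stderr)
kubectl attach nginx-89c9b4b66-25jpk
### Attach interactively to a specific container with -c
kubectl attach nginx-89c9b4b66-25jpk -c nginx -i -t
```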
exec
### Use -i -t to open an interactive session with a pod; defaults to the first container, use -c to pick one when the pod has multiple containers
[root@k8s-master1 ~]# kubectl exec -it nginx-89c9b4b66-25jpk /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx-89c9b4b66-25jpk:/usr/local/tomcat# date
Tue Dec 20 15:14:12 UTC 2022
root@nginx-89c9b4b66-25jpk:/usr/local/tomcat# exit
exit
[root@k8s-master1 ~]# kubectl exec -it nginx-89c9b4b66-25jpk -c nginx /bin/bash ## use -c to pick a container in a multi-container pod
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx-89c9b4b66-25jpk:/usr/local/tomcat# date
Tue Dec 20 15:15:30 UTC 2022
root@nginx-89c9b4b66-25jpk:/usr/local/tomcat# exit
exit
### Run a command non-interactively and capture its output
[root@k8s-master1 ~]# kubectl exec nginx-89c9b4b66-25jpk -c nginx date
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Tue Dec 20 15:16:12 UTC 2022
port-forward
(To be added)
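Until this section is filled in, a minimal sketch (the pod name reuses the example above; the port numbers are illustrative):

```shell
### Forward local port 8080 to port 80 of the pod; Ctrl+C stops the forwarding
kubectl port-forward nginx-89c9b4b66-25jpk 8080:80
### A service or deployment can also be targeted
kubectl port-forward svc/nginx 8080:80
### Listen on all local addresses instead of only 127.0.0.1
kubectl port-forward --address 0.0.0.0 nginx-89c9b4b66-25jpk 8080:80
```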
proxy
(To be added)
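Until this section is filled in, a minimal sketch (the port is illustrative):

```shell
### Start a local proxy to the API server on port 8001 (runs in the foreground)
kubectl proxy --port=8001
### In another terminal, the API is now reachable without client certificates
curl http://127.0.0.1:8001/api/v1/namespaces/default/pods
```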
cp
(To be added)
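Until this section is filled in, a minimal sketch (file paths are illustrative; note that kubectl cp requires the tar binary inside the container):

```shell
### Copy a local file into a pod (the path after the colon is inside the container)
kubectl cp ./index.html nginx-89c9b4b66-25jpk:/usr/share/nginx/html/index.html
### Copy a file out of a pod to the local machine
kubectl cp nginx-89c9b4b66-25jpk:/etc/nginx/nginx.conf ./nginx.conf
### Multi-container pod: pick the container with -c
kubectl cp ./index.html nginx-89c9b4b66-25jpk:/tmp/index.html -c nginx
```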
auth
(To be added)
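Until this section is filled in, a minimal sketch of the most common subcommand, auth can-i:

```shell
### Check whether the current user may create deployments in the default namespace
kubectl auth can-i create deployments --namespace default
### Check on behalf of a service account
kubectl auth can-i list pods --as system:serviceaccount:default:default
```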
debug
(To be added)
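Until this section is filled in, a minimal sketch (on 1.22 the ephemeral-container form may additionally require the EphemeralContainers feature gate to be enabled):

```shell
### Add an ephemeral busybox container to a running pod for troubleshooting
kubectl debug -it nginx-89c9b4b66-25jpk --image=busybox --target=nginx
### Debug a node: starts a pod on the node with the host filesystem mounted under /host
kubectl debug node/k8s-master2 -it --image=busybox
```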
Advanced commands (diff, apply, patch, replace, wait, kustomize)
# Advanced commands
Advanced Commands:
# Diff the live version against the version that would be applied
diff Diff the live version against a would-be applied version
# Apply a resource configuration from a file or stdin
apply Apply a configuration to a resource by file name or stdin
# Update fields of a resource
patch Update fields of a resource
# Replace a resource using a file or stdin
replace Replace a resource by file name or stdin
# Experimental: wait for a specific condition on one or many resources
wait Experimental: Wait for a specific condition on one or many resources
# Build a set of Kubernetes manifests from a kustomization directory or URL
kustomize Build a kustomization target from a directory or URL.
diff
### Compare the live deployment against the YAML file about to be applied
[root@k8s-master1 ~]# kubectl diff -f deployment-nginx.yaml
diff -u -N /tmp/LIVE-057640128/apps.v1.Deployment.default.nginx /tmp/MERGED-382072607/apps.v1.Deployment.default.nginx
--- /tmp/LIVE-057640128/apps.v1.Deployment.default.nginx 2022-12-21 11:23:16.065032455 -0500
+++ /tmp/MERGED-382072607/apps.v1.Deployment.default.nginx 2022-12-21 11:23:16.067032455 -0500
@@ -6,18 +6,44 @@
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"replicas":4,"selector":{"matchLabels":{"app":"nginx"}},"template":{"metadata":{"creationTimestamp":null,"labels":{"app":"nginx"}},"spec":{"containers":[{"image":"10.245.4.88:8888/base-images/tomcat","imagePullPolicy":"IfNotPresent","name":"nginx"}],"imagePullSecrets":[{"name":"harbor-login"}]}}}}
creationTimestamp: "2022-12-20T12:36:59Z"
- generation: 10
+ generation: 11
labels:
app: nginx
managedFields:
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
- f:spec:
+ f:metadata:
+ f:annotations:
+ f:deployment.kubernetes.io/revision: {}
+ f:status:
+ f:availableReplicas: {}
+ f:conditions:
+ .: {}
+ k:{"type":"Available"}:
+ .: {}
+ f:lastTransitionTime: {}
+ f:lastUpdateTime: {}
+ f:message: {}
+ f:reason: {}
+ f:status: {}
+ f:type: {}
+ k:{"type":"Progressing"}:
+ .: {}
+ f:lastTransitionTime: {}
+ f:lastUpdateTime: {}
+ f:message: {}
+ f:reason: {}
+ f:status: {}
+ f:type: {}
+ f:observedGeneration: {}
+ f:readyReplicas: {}
f:replicas: {}
- manager: kubectl
+ f:updatedReplicas: {}
+ manager: kube-controller-manager
operation: Update
- subresource: scale
+ subresource: status
+ time: "2022-12-21T15:22:39Z"
- apiVersion: apps/v1
fieldsType: FieldsV1
fieldsV1:
@@ -30,6 +56,7 @@
f:app: {}
f:spec:
f:progressDeadlineSeconds: {}
+ f:replicas: {}
f:revisionHistoryLimit: {}
f:selector: {}
f:strategy:
@@ -47,6 +74,7 @@
f:containers:
k:{"name":"nginx"}:
.: {}
+ f:image: {}
f:imagePullPolicy: {}
f:name: {}
f:resources: {}
@@ -62,60 +90,14 @@
f:terminationGracePeriodSeconds: {}
manager: kubectl-client-side-apply
operation: Update
- time: "2022-12-20T13:19:10Z"
- - apiVersion: apps/v1
- fieldsType: FieldsV1
- fieldsV1:
- f:spec:
- f:template:
- f:spec:
- f:containers:
- k:{"name":"nginx"}:
- f:image: {}
- manager: kubectl
- operation: Update
- time: "2022-12-20T13:54:35Z"
- - apiVersion: apps/v1
- fieldsType: FieldsV1
- fieldsV1:
- f:metadata:
- f:annotations:
- f:deployment.kubernetes.io/revision: {}
- f:status:
- f:availableReplicas: {}
- f:conditions:
- .: {}
- k:{"type":"Available"}:
- .: {}
- f:lastTransitionTime: {}
- f:lastUpdateTime: {}
- f:message: {}
- f:reason: {}
- f:status: {}
- f:type: {}
- k:{"type":"Progressing"}:
- .: {}
- f:lastTransitionTime: {}
- f:lastUpdateTime: {}
- f:message: {}
- f:reason: {}
- f:status: {}
- f:type: {}
- f:observedGeneration: {}
- f:readyReplicas: {}
- f:replicas: {}
- f:updatedReplicas: {}
- manager: kube-controller-manager
- operation: Update
- subresource: status
- time: "2022-12-21T15:22:39Z"
+ time: "2022-12-21T16:23:16Z"
name: nginx
namespace: default
resourceVersion: "146949"
uid: 4f6c9751-08eb-4a38-8377-81d47c64bd25
spec:
progressDeadlineSeconds: 600
- replicas: 6
+ replicas: 4
revisionHistoryLimit: 10
selector:
matchLabels:
@@ -132,7 +114,7 @@
app: nginx
spec:
containers:
- - image: 10.245.4.88:8888/base-images/tomcat
+ - image: 10.245.4.88:8888/base-images/nginx
imagePullPolicy: IfNotPresent
name: nginx
resources: {}
[root@k8s-master1 ~]#
Lines prefixed with + come from the YAML file, i.e. the version about to be applied.
Lines prefixed with - are the configuration currently applied and in use.
apply
(To be added)
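Until this section is filled in, a minimal sketch (deployment-nginx.yaml is the file used in the diff example above; the manifests directory is illustrative):

```shell
### Create or update resources from a YAML file (records the config for future diffs)
kubectl apply -f deployment-nginx.yaml
### Apply every YAML file in a directory
kubectl apply -f ./manifests/
### Preview what would change without touching the cluster
kubectl apply -f deployment-nginx.yaml --dry-run=server
```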
patch
(To be added)
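Until this section is filled in, a minimal sketch against the nginx deployment used above:

```shell
### Strategic-merge patch: change only the replica count
kubectl patch deployment nginx -p '{"spec":{"replicas":3}}'
### JSON patch: replace the image of the first container
kubectl patch deployment nginx --type='json' \
  -p='[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"10.245.4.88:8888/base-images/nginx"}]'
```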
replace
(To be added)
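Until this section is filled in, a minimal sketch (reusing deployment-nginx.yaml from the diff example):

```shell
### Replace the whole object with the version in the file (the object must already exist)
kubectl replace -f deployment-nginx.yaml
### Delete and recreate in one step; useful when a field cannot be updated in place
kubectl replace --force -f deployment-nginx.yaml
```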
wait
(To be added)
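Until this section is filled in, a minimal sketch (the pod name reuses the logs example above):

```shell
### Block until the pod reports the Ready condition, or fail after 60s
kubectl wait --for=condition=Ready pod/nginx-89c9b4b66-25jpk --timeout=60s
### Block until a resource is fully deleted
kubectl wait --for=delete pod/nginx-89c9b4b66-25jpk --timeout=60s
```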
kustomize
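This section has no example yet; a minimal sketch (the overlay directory is hypothetical, it must contain a kustomization.yaml):

```shell
### Render the manifests produced by a kustomization directory without applying them
kubectl kustomize ./overlays/prod
### apply understands kustomizations directly via -k
kubectl apply -k ./overlays/prod
```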
Settings commands (label, annotate, completion)
# Settings commands
Settings Commands:
# Update the labels on a resource
label Update the labels on a resource
# Update the annotations on a resource
annotate Update the annotations on a resource
# Output shell completion code for the specified shell (bash or zsh)
completion Output shell completion code for the specified shell (bash or zsh)
label
(To be added)
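Until this section is filled in, a minimal sketch (the label key/value is illustrative; k8s-master2 is the node described earlier):

```shell
### Add a label to a node
kubectl label node k8s-master2 disktype=ssd
### Changing an existing label requires --overwrite
kubectl label node k8s-master2 disktype=hdd --overwrite
### A trailing minus removes the label
kubectl label node k8s-master2 disktype-
```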
annotate
(To be added)
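Until this section is filled in, a minimal sketch (the annotation key/value is illustrative); the syntax mirrors label:

```shell
### Add an annotation to a pod
kubectl annotate pod nginx-89c9b4b66-25jpk description='owned by team-a'
### Overwrite and removal work the same way as labels
kubectl annotate pod nginx-89c9b4b66-25jpk description='owned by team-b' --overwrite
kubectl annotate pod nginx-89c9b4b66-25jpk description-
```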
completion
(To be added)
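Until this section is filled in, a minimal sketch (bash completion also needs the bash-completion package installed on the host):

```shell
### Enable kubectl completion for the current bash session
source <(kubectl completion bash)
### Make it permanent
echo 'source <(kubectl completion bash)' >> ~/.bashrc
```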
Other commands (api-resources, api-versions, config, plugin, version)
# Other commands
Other Commands:
# Print the supported API resources
api-resources Print the supported API resources on the server
# Print the supported API versions
api-versions Print the supported API versions on the server, in the form of "group/version"
# Modify kubeconfig files
config Modify kubeconfig files
# Provides utilities for interacting with plugins
plugin Provides utilities for interacting with plugins
# Print the client and server version information
version Print the client and server version information
api-resources
### Print the API resources supported by the cluster
[root@k8s-master1 ~]# kubectl api-resources
NAME SHORTNAMES APIVERSION NAMESPACED KIND
bindings v1 true Binding
componentstatuses cs v1 false ComponentStatus
configmaps cm v1 true ConfigMap
endpoints ep v1 true Endpoints
events ev v1 true Event
limitranges limits v1 true LimitRange
namespaces ns v1 false Namespace
nodes no v1 false Node
persistentvolumeclaims pvc v1 true PersistentVolumeClaim
persistentvolumes pv v1 false PersistentVolume
pods po v1 true Pod
podtemplates v1 true PodTemplate
replicationcontrollers rc v1 true ReplicationController
resourcequotas quota v1 true ResourceQuota
secrets v1 true Secret
serviceaccounts sa v1 true ServiceAccount
services svc v1 true Service
mutatingwebhookconfigurations admissionregistration.k8s.io/v1 false MutatingWebhookConfiguration
validatingwebhookconfigurations admissionregistration.k8s.io/v1 false ValidatingWebhookConfiguration
customresourcedefinitions crd,crds apiextensions.k8s.io/v1 false CustomResourceDefinition
apiservices apiregistration.k8s.io/v1 false APIService
controllerrevisions apps/v1 true ControllerRevision
daemonsets ds apps/v1 true DaemonSet
deployments deploy apps/v1 true Deployment
replicasets rs apps/v1 true ReplicaSet
statefulsets sts apps/v1 true StatefulSet
tokenreviews authentication.k8s.io/v1 false TokenReview
localsubjectaccessreviews authorization.k8s.io/v1 true LocalSubjectAccessReview
selfsubjectaccessreviews authorization.k8s.io/v1 false SelfSubjectAccessReview
selfsubjectrulesreviews authorization.k8s.io/v1 false SelfSubjectRulesReview
subjectaccessreviews authorization.k8s.io/v1 false SubjectAccessReview
horizontalpodautoscalers hpa autoscaling/v1 true HorizontalPodAutoscaler
cronjobs cj batch/v1 true CronJob
jobs batch/v1 true Job
certificatesigningrequests csr certificates.k8s.io/v1 false CertificateSigningRequest
leases coordination.k8s.io/v1 true Lease
bgpconfigurations crd.projectcalico.org/v1 false BGPConfiguration
bgppeers crd.projectcalico.org/v1 false BGPPeer
blockaffinities crd.projectcalico.org/v1 false BlockAffinity
clusterinformations crd.projectcalico.org/v1 false ClusterInformation
felixconfigurations crd.projectcalico.org/v1 false FelixConfiguration
globalnetworkpolicies crd.projectcalico.org/v1 false GlobalNetworkPolicy
globalnetworksets crd.projectcalico.org/v1 false GlobalNetworkSet
hostendpoints crd.projectcalico.org/v1 false HostEndpoint
ipamblocks crd.projectcalico.org/v1 false IPAMBlock
ipamconfigs crd.projectcalico.org/v1 false IPAMConfig
ipamhandles crd.projectcalico.org/v1 false IPAMHandle
ippools crd.projectcalico.org/v1 false IPPool
kubecontrollersconfigurations crd.projectcalico.org/v1 false KubeControllersConfiguration
networkpolicies crd.projectcalico.org/v1 true NetworkPolicy
networksets crd.projectcalico.org/v1 true NetworkSet
endpointslices discovery.k8s.io/v1 true EndpointSlice
events ev events.k8s.io/v1 true Event
flowschemas flowcontrol.apiserver.k8s.io/v1beta1 false FlowSchema
prioritylevelconfigurations flowcontrol.apiserver.k8s.io/v1beta1 false PriorityLevelConfiguration
ingressclasses networking.k8s.io/v1 false IngressClass
ingresses ing networking.k8s.io/v1 true Ingress
networkpolicies netpol networking.k8s.io/v1 true NetworkPolicy
runtimeclasses node.k8s.io/v1 false RuntimeClass
poddisruptionbudgets pdb policy/v1 true PodDisruptionBudget
podsecuritypolicies psp policy/v1beta1 false PodSecurityPolicy
clusterrolebindings rbac.authorization.k8s.io/v1 false ClusterRoleBinding
clusterroles rbac.authorization.k8s.io/v1 false ClusterRole
rolebindings rbac.authorization.k8s.io/v1 true RoleBinding
roles rbac.authorization.k8s.io/v1 true Role
priorityclasses pc scheduling.k8s.io/v1 false PriorityClass
csidrivers storage.k8s.io/v1 false CSIDriver
csinodes storage.k8s.io/v1 false CSINode
csistoragecapacities storage.k8s.io/v1beta1 true CSIStorageCapacity
storageclasses sc storage.k8s.io/v1 false StorageClass
volumeattachments storage.k8s.io/v1 false VolumeAttachment
api-versions
### Print the API versions supported by the cluster
[root@k8s-master1 ~]# kubectl api-versions
admissionregistration.k8s.io/v1
apiextensions.k8s.io/v1
apiregistration.k8s.io/v1
apps/v1
authentication.k8s.io/v1
authorization.k8s.io/v1
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2
batch/v1
batch/v1beta1
certificates.k8s.io/v1
coordination.k8s.io/v1
crd.projectcalico.org/v1
discovery.k8s.io/v1
discovery.k8s.io/v1beta1
events.k8s.io/v1
events.k8s.io/v1beta1
flowcontrol.apiserver.k8s.io/v1beta1
networking.k8s.io/v1
node.k8s.io/v1
node.k8s.io/v1beta1
policy/v1
policy/v1beta1
rbac.authorization.k8s.io/v1
scheduling.k8s.io/v1
storage.k8s.io/v1
storage.k8s.io/v1beta1
v1
config
(To be added)
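Until this section is filled in, a minimal sketch (the context name in use-context is a placeholder):

```shell
### Show the merged kubeconfig (secrets are redacted)
kubectl config view
### List contexts and show the current one
kubectl config get-contexts
kubectl config current-context
### Switch context, or pin a default namespace for the current context
kubectl config use-context my-context
kubectl config set-context --current --namespace=kube-system
```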
plugin
(To be added)
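Until this section is filled in, a minimal sketch; plugins are simply executables on PATH named kubectl-&lt;name&gt;:

```shell
### List kubectl plugins found on PATH
kubectl plugin list
```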
version
### Show the kubectl client and server versions
[root@k8s-master1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:48:33Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.4", GitCommit:"b695d79d4f967c403a96986f1750a35eb75e75f1", GitTreeState:"clean", BuildDate:"2021-11-17T15:42:41Z", GoVersion:"go1.16.10", Compiler:"gc", Platform:"linux/amd64"}