Day 1 of Learning Kubernetes
1. Environment Preparation
OS: CentOS 7.6
Kubernetes version: v1.23 (matches the current CKA exam version)
One master and one node (4 CPU cores, 4 GB RAM, 20 GB disk each)
Setting up the NAT network hit a few small snags; CSDN posts cover the fixes. The key point: when the WLAN adapter is shared to VMnet8, it is automatically assigned an IP; pick an address in that same subnet as the gateway IP, as in figures 1-4 below.
(Figures 1-4: screenshots of the VMware NAT network configuration)
2. Setting Up Kubernetes
Step 1 (run on both master and node): download fresh CentOS-Base.repo and epel.repo files into /etc/yum.repos.d/
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
Step 2 (run on both master and node): adjust system parameters
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
systemctl disable firewalld
systemctl stop firewalld
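As an aside, the `sed -i 's/.*swap.*/#&/' /etc/fstab` line above comments out every fstab entry that mentions swap (`&` re-inserts the whole matched line after the `#`). You can preview the effect on a throwaway copy first; the sample file below is made up:

```shell
# Make a sample fstab so we can watch the edit without touching the real one
cat > /tmp/fstab.sample <<'EOF'
/dev/mapper/centos-root /     xfs   defaults 0 0
/dev/mapper/centos-swap swap  swap  defaults 0 0
EOF
# '&' in the replacement stands for the whole matched line
sed -i 's/.*swap.*/#&/' /tmp/fstab.sample
grep swap /tmp/fstab.sample
```

Only the swap line ends up commented; the root filesystem entry is untouched.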
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
# Apply sysctl params without reboot
sudo sysctl --system
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
Step 3 (run on both master and node)
# Install Docker
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the Docker repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: switch the repo to the Aliyun mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Step 4: install Docker CE
sudo yum -y install docker-ce
# Create /etc/docker directory.
mkdir -p /etc/docker
# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"registry-mirrors": ["https://e6vlzg9v.mirror.aliyuncs.com"]
}
EOF
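A stray comma or quote in daemon.json will keep Docker from starting with a cryptic error, so it is worth checking that the file is well-formed JSON before restarting the service. A sketch, validating a scratch copy under /tmp (python3 is assumed to be available):

```shell
# Write the same config to a scratch path and check that it parses as JSON
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "registry-mirrors": ["https://e6vlzg9v.mirror.aliyuncs.com"]
}
EOF
python3 -m json.tool /tmp/daemon.json >/dev/null && echo "daemon.json is valid JSON"
```

The same check can be pointed at /etc/docker/daemon.json on the real host.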
mkdir -p /etc/systemd/system/docker.service.d
# Step 5: start and enable the Docker service
systemctl daemon-reload && sudo service docker start && systemctl enable docker && systemctl status docker
Step 4 (run on both master and node)
# Install kubeadm, kubelet and kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y --nogpgcheck kubelet-1.23.3 kubeadm-1.23.3 kubectl-1.23.3
systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet
Step 5 (master only)
hostnamectl set-hostname master
# Initialize the Kubernetes cluster with kubeadm
kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.23.3
Step 6 (node only)
hostnamectl set-hostname node
Paste the join command printed by the master. If you did not save it, regenerate it with:
kubeadm token create --print-join-command
kubeadm join 192.168.137.223:6443 --token e43v7a.b9mcvunc4onao9ap \
--discovery-token-ca-cert-hash sha256:259a1534425d43a51efa2b3c875b55d1872df7d6cc30acf60be88b960f5113db
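The `--discovery-token-ca-cert-hash` value is simply a sha256 digest over the DER-encoded public key of the cluster CA (`/etc/kubernetes/pki/ca.crt`). If you ever need to recompute it by hand, the recipe looks like this, sketched here against a throwaway self-signed cert so it can run anywhere:

```shell
# Stand-in for /etc/kubernetes/pki/ca.crt so the recipe runs outside a cluster
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
  -out /tmp/ca.crt -days 1 -subj "/CN=demo-ca" 2>/dev/null
# Hash the DER-encoded public key, the same value kubeadm prints as the discovery hash
hash=$(openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | sha256sum | awk '{print $1}')
echo "sha256:$hash"
```

On the master you would point `-in` at the real ca.crt to verify the hash in your join command.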
Step 7 (master only)
# Install the CNI network plugin (Calico)
curl https://docs.projectcalico.org/manifests/calico.yaml -O && kubectl apply -f calico.yaml
Step 8 (master only)
Run kubectl get node and wait until the nodes show Ready before running the commands below.
This error came up:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Fix:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
Create a Tomcat app and access it
kubectl create deployment tomcat --image=tomcat
kubectl expose deployment tomcat --port=8080 --target-port=8080 --type=NodePort
kubectl get service
Access the app in a browser:
http://xxxxx:xxx
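The URL is `http://<any-node-ip>:<nodePort>`, where the nodePort is the second number in the PORT(S) column of `kubectl get service`. A small sketch of reading it out of a captured line (the sample line is from this lab; the node IP is left for you to substitute):

```shell
# A line captured from `kubectl get service`; 8080 is the service port,
# 30792 is the node port opened on every node
line='tomcat   NodePort   10.108.189.115   <none>   8080:30792/TCP   37d'
nodeport=$(echo "$line" | sed -E 's/.* [0-9]+:([0-9]+)\/TCP.*/\1/')
echo "http://<node-ip>:$nodeport"   # substitute a real node IP
```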
Along the way I hit an issue where both master and node showed NotReady. The analysis went as follows:
kubectl get pods --all-namespaces
coredns was stuck in Pending, so:
journalctl -f -u kubelet
which showed the error:
Jun 17 11:37:58 master kubelet[10242]: I0617 11:37:58.054575 10242 cni.go:334] "CNI failed to retrieve network namespace path" err="cannot find network namespace for the terminated container \"700ad75a78e99cec04fe95e2bcfbb2530a3972bd36ce8ae3e4a312bd5320fa9a\""
Apply the network plugin:
curl https://docs.projectcalico.org/manifests/calico.yaml -O && kubectl apply -f calico.yaml
If that still does not help, check the kubelet service:
systemctl status kubelet.service
Resetting kubeadm also helps: reset it on the master only (the node side will be pulled back in when it rejoins), re-initialize the cluster, then re-join the node; this resolves most problems. The key, though, is always to track down the kubelet errors first.
After the Tomcat service is up, exec into the container:
kubectl exec -it tomcat-655b94657b-cfztw -- bash
curl tomcat:8080   # the 404 below is expected: the tomcat image ships an empty webapps directory; the response itself shows the service is reachable
<!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 – Not Found</h1><hr class="line" /><p><b>Type</b> Status Report</p><p><b>Description</b> The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.</p><hr class="line" /><h3>Apache Tomcat/10.0.14</h3></body></html>root@tomcat-655b94657b-cfztw:/usr/local/tomcat#
3. The Pod Network Model
4. Kubernetes Architecture and Core Components
1. kube-apiserver exposes the Kubernetes API. Every resource request or operation goes through the interfaces kube-apiserver provides.
2. etcd is the default storage backend for Kubernetes and holds all cluster data; in real use you need a backup plan for the etcd data.
3. kube-controller-manager runs the controllers, the background loops that handle routine cluster tasks, including:
the Node controller;
the Replication controller, which maintains the pods for each replica in the system;
the Endpoints controller, which populates Endpoints objects (i.e., joins Services and Pods);
the Service Account and Token controllers, which create default accounts and API access tokens for new namespaces.
4. kube-scheduler watches for newly created Pods with no assigned Node and picks a Node for each.
5. kube-proxy maintains network rules on each host and forwards connections to implement Services (iptables/IPVS).
6. kubelet manages the container lifecycle on each node.
5. The Pod Concept
A Pod is a group of containers; it can run several containers together.
Containers in a Pod share two kinds of resources: network and storage.
• Network: each Pod is assigned a unique IP address. All containers in the Pod share the network namespace, including the IP address and ports, so they can talk to each other via localhost. When a container in the Pod communicates with the outside world, shared network resources must be allocated (for example, a host port mapping).
• Storage: a Pod can declare multiple shared Volumes, and every container in the Pod can access them. Volumes can also persist the Pod's data so files survive container restarts. Kubernetes manages Pods directly rather than the underlying Docker containers; Docker operations are encapsulated inside the Pod and never driven directly.
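To make the shared-storage point concrete, here is a minimal sketch of a Pod whose two containers mount the same emptyDir volume. The names are made up, and the file is only written to /tmp here, not applied:

```shell
# Two containers mounting the same emptyDir: files written by one are visible to the other
cat > /tmp/shared-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: shared-demo
spec:
  volumes:
  - name: shared-data
    emptyDir: {}
  containers:
  - name: writer
    image: nginx
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: tomcat
    volumeMounts:
    - name: shared-data
      mountPath: /data
EOF
# Apply on the cluster with: kubectl apply -f /tmp/shared-pod.yaml
grep -c 'mountPath: /data' /tmp/shared-pod.yaml   # prints 2: both containers mount it
```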
Example:
Create a pod containing just one container
[root@master 0718]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod0718
spec:
  containers:
  - name: pod
    image: tomcat
#  - name: pod2
#    image: tomcat
Apply it:
[root@master 0718]# kubectl apply -f pod.yaml
pod/pod0718 created
[root@master 0718]# kubectl get pods
NAME READY STATUS RESTARTS AGE
pod0718 1/1 Running 0 37s
tomcat-655b94657b-cfztw 1/1 Running 0 31d
Now delete the pod:
[root@master 0718]# kubectl delete -f pod.yaml
pod "pod0718" deleted
Create two containers:
[root@master 0718]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod0718
spec:
  containers:
  - name: pod
    image: tomcat
  - name: pod2
    image: nginx
[root@master 0718]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod0718 2/2 Running 0 46s 10.244.167.132 node <none> <none>
tomcat-655b94657b-cfztw 1/1 Running 0 31d 10.244.167.129 node <none> <none>
Create three containers, two of them tomcat:
[root@master 0718]# kubectl apply -f pod.yaml
pod/pod0718 created
[root@master 0718]# kubectl get pods
NAME READY STATUS RESTARTS AGE
pod0718 2/3 NotReady 0 43s
tomcat-655b94657b-cfztw 1/1 Running 0 31d
Check the logs. We already know the tomcat in container pod3 must have a port conflict with the tomcat in container pod, so use -c to pick which container's logs to print:
kubectl logs pod0718 -c pod3
19-Jul-2022 02:51:13.731 SEVERE [main] org.apache.catalina.core.StandardServer.await Failed to create server shutdown socket on address [localhost] and port [8005] (base port [8005] and offset [0])
java.net.BindException: Address already in use (Bind failed)
at java.base/java.net.PlainSocketImpl.socketBind(Native Method)
at java.base/java.net.AbstractPlainSocketImpl.bind(AbstractPlainSocketImpl.java:436)
at java.base/java.net.ServerSocket.bind(ServerSocket.java:395)
at java.base/java.net.ServerSocket.<init>(ServerSocket.java:257)
at org.apache.catalina.core.StandardServer.await(StandardServer.java:577)
at org.apache.catalina.startup.Catalina.await(Catalina.java:887)
at org.apache.catalina.startup.Catalina.start(Catalina.java:833)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:345)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:476)
6. Pod Creation Flow Diagram
The flow shows up in the events:
kubectl describe pods tomcat-655b94657b-cfztw
7. Common Commands
Create a namespace
kubectl create namespace dev
List namespaces
kubectl get namespaces
NAME STATUS AGE
default Active 32d
dev Active 2m10s
kube-node-lease Active 32d
kube-public Active 32d
kube-system Active 32d
kubectl asks the apiserver to create the namespace; once created, it is written to etcd.
List pods
kubectl get pods -n default   # same as kubectl get pods
NAME READY STATUS RESTARTS AGE
pod0718 2/3 CrashLoopBackOff 20 (5m ago) 89m
tomcat-655b94657b-cfztw 1/1 Running 0 32d
kubectl get pods -n dev
No resources found in dev namespace.
This demonstrates resource isolation between namespaces.
Inspect a pod in detail
kubectl describe pod -n default pod0718
View container logs
kubectl get pods
NAME READY STATUS RESTARTS AGE
pod0718 2/3 CrashLoopBackOff 25 (36s ago) 111m
tomcat-655b94657b-cfztw 1/1 Running 0 32d
---------------------------------------------------------------------
kubectl logs -n default pod0718 -c pod3
kubectl logs -n default tomcat-655b94657b-cfztw -c tomcat   # fully qualified form
kubectl logs -n default tomcat-655b94657b-cfztw
Certificate validity: the cluster root CA is published as a ConfigMap
kubectl get cm
NAME DATA AGE
kube-root-ca.crt 1 32d
Check cluster status
kubectl cluster-info
Kubernetes control plane is running at https://192.168.137.223:6443
CoreDNS is running at https://192.168.137.223:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
View pods with more detail
kubectl get pod -o wide
List nodes
kubectl get node
-------------------------------------------------------
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 32d v1.23.3
node Ready <none> 32d v1.23.3
View details of the node
kubectl describe node node
Delete the pod we created
kubectl delete pod pod0718
Delete all deployments
kubectl delete deployment --all --wait=false
Create a pod via kubectl run
kubectl run 0719-run-pod --image=tomcat
pod/0719-run-pod created
[root@master 0718]# kubectl get pods
NAME READY STATUS RESTARTS AGE
0719-run-pod 1/1 Running 0 19s
View the Pod documentation to see apiVersion and the other YAML fields; with it you can create a pod by writing a YAML file.
kubectl explain pod
---------------------------------------------------------------------------
KIND: Pod
VERSION: v1
DESCRIPTION:
Pod is a collection of containers that can run on a host. This resource is
created by clients and scheduled onto hosts.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <Object>
Standard object's metadata. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec <Object>
Specification of the desired behavior of the pod. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
status <Object>
Most recently observed status of the pod. This data may not be up to date.
Populated by the system. Read-only. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
[root@master 0718]# cat pod0719.yaml
apiVersion: v1
kind: Pod
metadata:
  name: 0719-yamlpod                 # pod name
  labels:
    0719: yamlpod
spec:
  containers:
  - name: 0719-yamlpod-contariners   # container name
    image: tomcat
Example:
[root@master 0718]# kubectl get pods
NAME READY STATUS RESTARTS AGE
0719-run-pod 1/1 Running 0 18m
0719-yamlpod 1/1 Running 0 2m1s
[root@master 0718]# kubectl logs -n default 0719-yamlpod -c 0719-yamlpod-contariners
NOTE: Picked up JDK_JAVA_OPTIONS: --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
19-Jul-2022 06:30:37.907 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version name: Apache Tomcat/10.0.14
19-Jul-2022 06:30:37.921 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Dec 2 2021 22:01:36 UTC
19-Jul-2022 06:30:37.922 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version number: 10.0.14.0
19-Jul-2022 06:30:37.922 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Linux
19-Jul-2022 06:30:37.922 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 3.10.0-957.el7.x86_64
19-Jul-2022 06:30:37.922 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: amd64
19-Jul-2022 06:30:37.922 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home: /usr/local/openjdk-11
19-Jul-2022 06:30:37.922 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version: 11.0.13+8
19-Jul-2022 06:30:37.923 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor: Oracle Corporation
19-Jul-2022 06:30:37.923 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE: /usr/local/tomcat
19-Jul-2022 06:30:37.923 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME: /usr/local/tomcat
19-Jul-2022 06:30:37.931 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.lang=ALL-UNNAMED
19-Jul-2022 06:30:37.931 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.io=ALL-UNNAMED
19-Jul-2022 06:30:37.931 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.util=ALL-UNNAMED
19-Jul-2022 06:30:37.931 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.base/java.util.concurrent=ALL-UNNAMED
19-Jul-2022 06:30:37.931 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED
19-Jul-2022 06:30:37.931 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties
19-Jul-2022 06:30:37.932 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
19-Jul-2022 06:30:37.932 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
19-Jul-2022 06:30:37.932 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
19-Jul-2022 06:30:37.932 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dorg.apache.catalina.security.SecurityListener.UMASK=0027
19-Jul-2022 06:30:37.932 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dignore.endorsed.dirs=
19-Jul-2022 06:30:37.932 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/usr/local/tomcat
19-Jul-2022 06:30:37.932 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/usr/local/tomcat
19-Jul-2022 06:30:37.932 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/usr/local/tomcat/temp
19-Jul-2022 06:30:37.947 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent Loaded Apache Tomcat Native library [1.2.31] using APR version [1.7.0].
19-Jul-2022 06:30:37.947 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true], UDS [true].
19-Jul-2022 06:30:37.950 INFO [main] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized [OpenSSL 1.1.1k 25 Mar 2021]
19-Jul-2022 06:30:38.319 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
19-Jul-2022 06:30:38.344 INFO [main] org.apache.catalina.startup.Catalina.load Server initialization in [651] milliseconds
19-Jul-2022 06:30:38.423 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service [Catalina]
19-Jul-2022 06:30:38.423 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet engine: [Apache Tomcat/10.0.14]
19-Jul-2022 06:30:38.432 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler ["http-nio-8080"]
19-Jul-2022 06:30:38.447 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [102] milliseconds
Problem 1:
If kubectl get pods errors out with connection refused on port 6443, the likely cause is that the apiserver is not running; restarting kubelet usually fixes it,
because the apiserver is a static pod whose YAML lives on the master here:
[root@master manifests]#
[root@master manifests]# pwd
/etc/kubernetes/manifests
[root@master manifests]# ls
etcd.yaml kube-apiserver.yaml kube-controller-manager.yaml kube-scheduler.yaml
[root@master manifests]# cat kube-apiserver.yaml |grep 6443
kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.137.223:6443
- --secure-port=6443
port: 6443
port: 6443
port: 6443
Problem 2:
If you would rather not hand-write YAML, kubectl run can print YAML you can paste and adapt. Using that output, define and run a pod named 0720-1411 with a container named 0720-1412-image running the nginx image.
[root@master manifests]# kubectl run 0719-1411 --image=tomcat --dry-run -o yaml
W0720 13:46:50.690065 99004 helpers.go:598] --dry-run is deprecated and can be replaced with --dry-run=client.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: 0719-1411
  name: 0719-1411
spec:
  containers:
  - image: tomcat
    name: 0719-1411
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
------------------------------------------------------------------------------------
[root@master 0718]# cat pod0720.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: 0720-1411
  name: 0720-1411
spec:
  containers:
  - image: nginx
    name: 0720-1412-image
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
------------------------------------------------------------------------------------
kubectl apply -f pod0720.yaml
[root@master 0718]# kubectl get pods
NAME READY STATUS RESTARTS AGE
0719-run-pod 1/1 Running 0 23h
0719-yamlpod 1/1 Running 0 23h
0720-1411 1/1 Running 0 10m
Select pods by label
[root@master 0718]# kubectl get pods -n default --show-labels
NAME READY STATUS RESTARTS AGE LABELS
0719-run-pod 1/1 Running 0 23h run=0719-run-pod
0719-yamlpod 1/1 Running 0 23h 719=yamlpod
0720-1411 1/1 Running 0 8m8s run=0720-1411
[root@master 0718]# kubectl get pods -n default --show-labels -l run=0720-1411
NAME READY STATUS RESTARTS AGE LABELS
0720-1411 1/1 Running 0 12m run=0720-1411
8. Deployment
[root@master 0718]# kubectl explain deployment
KIND: Deployment
VERSION: apps/v1
DESCRIPTION:
Deployment enables declarative updates for Pods and ReplicaSets.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds
metadata <Object>
Standard object's metadata. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec <Object>
Specification of the desired behavior of the Deployment.
status <Object>
Most recently observed status of the Deployment.
1. Output pods in YAML format
kubectl get pods -o yaml
2. The difference between a pod and a deployment:
a bare pod has no controller: delete it and it is simply gone.
3. Write a YAML file following the sample
[root@master 0718]# cat deploymeng.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: 0722-deployment
  labels:
    labels: 0722-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      labels: 0722-pod
  template:          # the template is essentially a pod spec
    metadata:
      labels:
        labels: 0722-pod
    spec:
      containers:
      - name: 0722-deployment-image
        image: tomcat
[root@master 0718]# kubectl get deployment --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
0722-deployment 2/2 2 2 3m3s labels=0722-deployment
READY 2/2 shows the replica count of 2: two Pods were generated.
NAME matches the name in metadata at the top.
LABELS matches the labels in metadata at the top.
[root@master 0718]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
0719-run-pod 1/1 Running 0 2d21h run=0719-run-pod
0719-yamlpod 1/1 Running 0 2d21h 719=yamlpod
0720-1411 1/1 Running 0 46h run=0720-1411
0722-deployment-7d65c6488d-dd2th 1/1 Running 0 4m51s labels=0722-pod,pod-template-hash=7d65c6488d
0722-deployment-7d65c6488d-wzwzn 1/1 Running 0 4m51s labels=0722-pod,pod-template-hash=7d65c6488d
4. Delete a Pod and watch whether it gets recreated
[root@master 0718]# kubectl delete pod 0722-deployment-7d65c6488d-dd2th
pod "0722-deployment-7d65c6488d-dd2th" deleted
[root@master 0718]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
0719-run-pod 1/1 Running 0 2d22h run=0719-run-pod
0719-yamlpod 1/1 Running 0 2d21h 719=yamlpod
0720-1411 1/1 Running 0 46h run=0720-1411
0722-deployment-7d65c6488d-ql9wd 0/1 ContainerCreating 0 4s labels=0722-pod,pod-template-hash=7d65c6488d
0722-deployment-7d65c6488d-wzwzn 1/1 Running 0 9m17s labels=0722-pod,pod-template-hash=7d65c6488d
A replacement Pod is created immediately.
5. Create another deployment as specified
[root@master 0718]# cat deploy072201.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: 0722-deployment-xs
  labels:
    labels: 0722-deployment-xs-label
spec:
  replicas: 3
  selector:
    matchLabels:
      labels: 0722-deployment-xs-pod-label
  template:
    metadata:
      labels:
        labels: 0722-deployment-xs-pod-label
    spec:
      containers:
      - name: 0722-deployment-imagename
        image: nginx
[root@master 0718]# kubectl get deploy --show-labels
NAME READY UP-TO-DATE AVAILABLE AGE LABELS
0722-deployment 2/2 2 2 161m labels=0722-deployment
0722-deployment-xs 3/3 3 3 101m labels=0722-deployment-xs-label
9. Service
kubectl get endpoints
shows where each Service's traffic lands.
Endpoints is the list of access addresses of all Pod replicas backing a Service.
[root@master 0718]# kubectl get endpoints
NAME ENDPOINTS AGE
kubernetes 192.168.137.223:6443 38d
service-0722 10.244.167.137:8080,10.244.167.139:8080 19m
tomcat <none> 37d
kubectl get service
[root@master 0718]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 38d
service-0722 ClusterIP 10.111.130.36 <none> 8080/TCP 19m
tomcat NodePort 10.108.189.115 <none> 8080:30792/TCP 37d
[root@master 0718]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-0722
  labels:
    labels: 0722-service
spec:
  selector:
    labels: 0722-pod   # must match the pod labels from the deployment; if unrelated pods carry the same label the service routes to them too (fine when they belong to the same service, a problem otherwise)
  ports:
  - name: deployment-service
    port: 8080
    targetPort: 8080
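For comparison, exposing the same pods outside the cluster only needs `type: NodePort`. A sketch (the service name and nodePort value are made up, and the file is written to /tmp rather than applied):

```shell
cat > /tmp/service-nodeport.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: service-0722-np     # hypothetical name
spec:
  type: NodePort
  selector:
    labels: 0722-pod
  ports:
  - name: deployment-service
    port: 8080              # ClusterIP port inside the cluster
    targetPort: 8080        # container port on the selected pods
    nodePort: 30080         # optional; must fall in 30000-32767 by default
EOF
grep -q 'type: NodePort' /tmp/service-nodeport.yaml && echo ok
```

Omitting nodePort lets the apiserver pick a free port in the range, which is how the tomcat service above got 30792.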
Example:
[root@master 0718]# cat service0725.yaml
apiVersion: v1
kind: Service
metadata:
  name: service-1543
  labels:
    labels: service-1543
spec:
  selector:
    labels: 0722-pod
  ports:
  - name: deployment-service
    port: 9999
    targetPort: 8080
Verify:
[root@master 0718]# kubectl exec -it 0722-deployment-7d65c6488d-ql9wd -- bash
root@0722-deployment-7d65c6488d-ql9wd:/usr/local/tomcat# curl service-1543:9999
<!doctype html><html lang="en"><head><title>HTTP Status 404 – Not Found</title><style type="text/css">body {font-family:Tahoma,Arial,sans-serif;} h1, h2, h3, b {color:white;background-color:#525D76;} h1 {font-size:22px;} h2 {font-size:16px;} h3 {font-size:14px;} p {font-size:12px;} a {color:black;} .line {height:1px;background-color:#525D76;border:none;}</style></head><body><h1>HTTP Status 404 – Not Found</h1><hr class="line" /><p><b>Type</b> Status Report</p><p><b>Description</b> The origin server did not find a current representation for the target resource or is not willing to disclose that one exists.</p><hr class="line" /><h3>Apache Tomcat/10.0.14</h3></body></html>root@0722-deployment-7d65c6488d-ql9wd:/usr/local/tomcat# curl service-1543:8080
To find the Pods behind a service:
kubectl get service -o yaml
kubectl get pod -l labels=0722-pod
1. How to work out which resource a Pod belongs to
kubectl get pods -A lists all Pods:
[root@master ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default 0719-run-pod 1/1 Running 0 6d1h
default 0719-yamlpod 1/1 Running 0 6d1h
default 0720-1411 1/1 Running 0 5d2h
default 0722-deployment-7d65c6488d-ql9wd 1/1 Running 0 3d3h
default 0722-deployment-7d65c6488d-wzwzn 1/1 Running 0 3d3h
default 0722-deployment-xs-84f945bdd6-kpmtc 1/1 Running 0 3d2h
default 0722-deployment-xs-84f945bdd6-s4mv7 1/1 Running 0 3d2h
default 0722-deployment-xs-84f945bdd6-shp7s 1/1 Running 0 3d2h
kube-system calico-kube-controllers-6b77fff45-hpwl8 1/1 Running 0 38d
kube-system calico-node-dp5gj 1/1 Running 0 38d
kube-system calico-node-xblsj 1/1 Running 0 38d
kube-system coredns-6d8c4cb4d-crrl9 1/1 Running 0 38d
kube-system coredns-6d8c4cb4d-l6msv 1/1 Running 0 38d
kube-system etcd-master 1/1 Running 1 38d
kube-system kube-apiserver-master 1/1 Running 1 38d
kube-system kube-controller-manager-master 1/1 Running 1 38d
kube-system kube-proxy-sk9kq 1/1 Running 0 38d
kube-system kube-proxy-x8b6p 1/1 Running 0 38d
kube-system kube-scheduler-master 1/1 Running 1 38d
Take one as an example: 0722-deployment-7d65c6488d-ql9wd
kubectl describe pod 0722-deployment-7d65c6488d-ql9wd -n default|grep By
Controlled By: ReplicaSet/0722-deployment-7d65c6488d
The Controlled By field is present and names a ReplicaSet, so this pod is managed by a Deployment.
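The same ownership is recorded in structured form under `metadata.ownerReferences`; `grep By` merely scrapes the describe output. A sketch against a sample of that JSON (on a live cluster you would fetch it with `kubectl get pod <name> -o json`):

```shell
# Minimal sample of the ownerReferences block a ReplicaSet-managed pod carries
cat > /tmp/pod-meta.json <<'EOF'
{"metadata":{"ownerReferences":[{"kind":"ReplicaSet","name":"0722-deployment-7d65c6488d"}]}}
EOF
grep -o '"kind":"[A-Za-z]*"' /tmp/pod-meta.json   # prints "kind":"ReplicaSet"
```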
[root@master ~]# kubectl get deployment 0722-deployment
NAME READY UP-TO-DATE AVAILABLE AGE
0722-deployment 2/2 2 2 3d4
Example: find which resource the kube-proxy pod comes from
[root@master ~]# kubectl describe pod kube-proxy-sk9kq -n kube-system|grep By
Controlled By: DaemonSet/kube-proxy
------------------------------------------------
So this one is a DaemonSet resource.
------------------------------------------------
kubectl get daemonsets -A
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system calico-node 2 2 2 2 2 kubernetes.io/os=linux 38d
kube-system kube-proxy 2 2 2 2 2 kubernetes.io/os=linux 38d
-------------------------------------------------
[root@master ~]# kubectl get daemonset kube-proxy -n kube-system
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-proxy 2 2 2 2 2 kubernetes.io/os=linux 38d
10. DaemonSet
A DaemonSet runs exactly one Pod on each node.
Example:
Create a daemonset named daemonset-1711 in namespace test, with daemonset label daemonset=1711, pod label daemonset-pod=1711, container name daemonset-image, and image nginx.
[root@master 0718]# cat daemonset-1711.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: test
  name: daemonset-1711
  labels:
    daemonset: "1711"            # the daemonset's own labels
spec:
  selector:
    matchLabels:
      daemonset-1711: "1711"     # pod labels; matchLabels must equal the template labels below
  template:
    metadata:
      labels:
        daemonset-1711: "1711"
    spec:
      containers:
      - name: daemonset-image
        image: nginx
It worked!
-----------------------------------------------------------------------------------------
[root@master 0718]# kubectl get daemonset -A
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
dev dev-test 1 1 1 1 1 <none> 19h
kube-system calico-node 2 2 2 2 2 kubernetes.io/os=linux 39d
kube-system kube-proxy 2 2 2 2 2 kubernetes.io/os=linux 39d
test daemonset-1711 1 1 1 1 1 <none> 15s
[root@master 0718]# kubectl get daemonset -o yaml -n test
apiVersion: v1
items:
- apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    annotations:
      deprecated.daemonset.template.generation: "1"
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"apps/v1","kind":"DaemonSet","metadata":{"annotations":{},"name":"daemonset-1711","namespace":"test"},"spec":{"selector":{"matchLabels":{"daemonset-1711":"1711"}},"template":{"metadata":{"labels":{"daemonset-1711":"1711"}},"spec":{"containers":[{"image":"nginx","name":"daemonset-image"}]}}}}
    creationTimestamp: "2022-07-26T04:22:14Z"
    generation: 1
    name: daemonset-1711
    namespace: test
    resourceVersion: "110641"
    uid: d9175cb2-9d81-440a-b772-79353a4c5a4d
  spec:
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        daemonset-1711: "1711"
    template:
      metadata:
        creationTimestamp: null
        labels:
          daemonset-1711: "1711"
      spec:
        containers:
        - image: nginx
          imagePullPolicy: Always
          name: daemonset-image
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        terminationGracePeriodSeconds: 30
    updateStrategy:
      rollingUpdate:
        maxSurge: 0
        maxUnavailable: 1
      type: RollingUpdate
  status:
    currentNumberScheduled: 1
    desiredNumberScheduled: 1
    numberAvailable: 1
    numberMisscheduled: 0
    numberReady: 1
    observedGeneration: 1
    updatedNumberScheduled: 1
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Question:
Why was the pod only created on the node, and not on the master?
[root@master ~]# kubectl describe node master|grep T
projectcalico.org/IPv4IPIPTunnelAddr: 10.244.219.64
CreationTimestamp: Fri, 17 Jun 2022 10:41:03 +0800
Taints: node-role.kubernetes.io/master:NoSchedule
AcquireTime: <unset>
RenewTime: Wed, 27 Jul 2022 11:42:09 +0800
Type Status LastHeartbeatTime LastTransitionTime Reason Message
Ready True Wed, 27 Jul 2022 11:38:56 +0800 Fri, 17 Jun 2022 11:37:26 +0800 KubeletReady kubelet is posting ready status
(Total limits may be over 100 percent, i.e., overcommitted.)
The Taints field says master:NoSchedule. To remove that taint and allow scheduling on the master, run:
kubectl taint node master node-role.kubernetes.io/master:NoSchedule-
-----------------------------------------------------------------------
[root@master ~]# kubectl get daemonset -n test
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset-1711 2 2 1 2 1 <none> 23h