Check how many stateless applications are deployed in the current default namespace:


kubectl get deployment
Syntax for creating a Deployment, tested with a dry run:
kubectl create deployment demoapp --image=ikubernetes/demoapp:v1.0 --replicas=3 --dry-run=client -o yaml
The generated YAML looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: demoapp
  name: demoapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demoapp
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: demoapp
    spec:
      containers:
      - image: ikubernetes/demoapp:v1.0
        name: demoapp
        resources: {}
status: {}

We can take a look at the details:

root@master:~/K8S_learing# kubectl  get  pods
NAME                       READY   STATUS    RESTARTS   AGE
demoapp-5f7d8f9847-kt4tw   1/1     Running   0          4s
demoapp-5f7d8f9847-n5p8j   1/1     Running   0          4s
demoapp-5f7d8f9847-wqvwc   1/1     Running   0          4s

Check the number of stateless applications running in the default namespace:

root@master:~# kubectl  get  deployment
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
demoapp   3/3     3            3           10d

Check which node each Pod was scheduled to:

root@master:~# kubectl   get  pods   -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
demoapp-5f7d8f9847-kt4tw   1/1     Running   0          10d   10.244.1.78   node1   <none>           <none>
demoapp-5f7d8f9847-n5p8j   1/1     Running   0          10d   10.244.2.84   node2   <none>           <none>
demoapp-5f7d8f9847-wqvwc   1/1     Running   0          10d   10.244.2.83   node2   <none>           <none>

Next we can curl each Pod's IP directly:

root@master:~# curl   10.244.1.78
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-5f7d8f9847-kt4tw, ServerIP: 10.244.1.78!
root@master:~# curl   10.244.2.84
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-5f7d8f9847-n5p8j, ServerIP: 10.244.2.84!
root@master:~# curl   10.244.2.83
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-5f7d8f9847-wqvwc, ServerIP: 10.244.2.83!

Inspect a specific Pod with JSON output:

root@master:~# kubectl   get pod  demoapp-5f7d8f9847-kt4tw  -o json

Inspect a specific Pod with YAML output:

root@master:~# kubectl   get pod  demoapp-5f7d8f9847-kt4tw  -o yaml

List the Pod resources in a specific namespace:

root@master:~# kubectl  get  pods -n  kube-system

Delete a Pod:

root@master:~# kubectl  delete  pod  demoapp-5f7d8f9847-kt4tw

Because this Pod is stateless and managed by a Deployment, a replacement Pod is created automatically after the deletion:

root@master:~# kubectl  get  pods
NAME                       READY   STATUS    RESTARTS   AGE
demoapp-5f7d8f9847-n5p8j   1/1     Running   0          10d
demoapp-5f7d8f9847-qm4xk   1/1     Running   0          64s
demoapp-5f7d8f9847-wqvwc   1/1     Running   0          10d

Show the labels attached to the Pods:

root@master:~# kubectl  get pod   --show-labels
NAME                       READY   STATUS    RESTARTS   AGE     LABELS
demoapp-5f7d8f9847-n5p8j   1/1     Running   0          10d     app=demoapp,pod-template-hash=5f7d8f9847
demoapp-5f7d8f9847-qm4xk   1/1     Running   0          5m37s   app=demoapp,pod-template-hash=5f7d8f9847
demoapp-5f7d8f9847-wqvwc   1/1     Running   0          10d     app=demoapp,pod-template-hash=5f7d8f9847

Open an interactive shell inside a Pod:

root@master:~# kubectl exec  demoapp-5f7d8f9847-n5p8j  -it  -- /bin/sh

From inside the Pod we can curl other Pods as well:

[root@demoapp-5f7d8f9847-n5p8j /]# curl  10.244.2.84
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.84, ServerName: demoapp-5f7d8f9847-n5p8j, ServerIP: 10.244.2.84!
[root@demoapp-5f7d8f9847-n5p8j /]# curl  10.244.1.79
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.84, ServerName: demoapp-5f7d8f9847-qm4xk, ServerIP: 10.244.1.79!

Creating a Service

root@master:~# kubectl  create service  --help
Create a service using a specified subcommand.

Aliases:
service, svc

Available Commands:
  clusterip    Create a ClusterIP service
  externalname Create an ExternalName service
  loadbalancer Create a LoadBalancer service
  nodeport     Create a NodePort service

First, the two most commonly used types:
ClusterIP: the cluster allocates the IP automatically (or we can specify one ourselves); a Service of this type is reachable only from inside the cluster.
NodePort: reachable from outside the cluster through a port mapped on every node.
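As a sketch of the second type, a NodePort Service for the same demoapp could be declared like this; the name demoapp-nodeport and the nodePort value 30080 are illustrative choices, not from the original example:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demoapp-nodeport   # hypothetical name for illustration
spec:
  type: NodePort
  selector:
    app: demoapp           # matches the Pods created by the demoapp Deployment
  ports:
  - port: 80               # the Service's cluster-internal port
    targetPort: 80         # the container port
    nodePort: 30080        # opened on every node; default range is 30000-32767
```

After applying this manifest, the app would be reachable from outside the cluster at http://<node-ip>:30080.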

Now let's create a Service as a test:

root@master:~# kubectl   create svc clusterip  demoapp --tcp=80:80  --dry-run=client

The name given to the ClusterIP Service here cannot be written arbitrarily: it should match the Deployment's name. Same name -> same app label selector -> the Service and the Deployment's Pods become associated.

root@master:~# kubectl   create svc clusterip  demoapp  --tcp=80:80  --dry-run=client  -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: demoapp
  name: demoapp
spec:
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: demoapp
  type: ClusterIP
status:
  loadBalancer: {}

View the details:

root@master:~# kubectl   get  svc  demoapp
NAME      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
demoapp   ClusterIP   10.109.201.174   <none>        80/TCP    20s


root@master:~# kubectl  describe services demoapp
Name:              demoapp
Namespace:         default
Labels:            app=demoapp
Annotations:       <none>
Selector:          app=demoapp
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.109.201.174
IPs:               10.109.201.174
Port:              80-80  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.79:80,10.244.2.83:80,10.244.2.84:80   # existing Pods are discovered and associated automatically
Session Affinity:  None
Events:            <none>
 

Next we can curl the Service address to verify that requests are load-balanced:

root@master:~# curl  10.109.201.174
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-5f7d8f9847-qm4xk, ServerIP: 10.244.1.79!
root@master:~# curl  10.109.201.174
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-5f7d8f9847-n5p8j, ServerIP: 10.244.2.84!
root@master:~# curl  10.109.201.174
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-5f7d8f9847-wqvwc, ServerIP: 10.244.2.83!

The same works from inside a Pod:

[root@demoapp-5f7d8f9847-n5p8j /]# curl  10.109.201.174
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.1, ServerName: demoapp-5f7d8f9847-n5p8j, ServerIP: 10.244.2.84!
[root@demoapp-5f7d8f9847-n5p8j /]# curl  10.109.201.174
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.84, ServerName: demoapp-5f7d8f9847-wqvwc, ServerIP: 10.244.2.83!
[root@demoapp-5f7d8f9847-n5p8j /]# curl  10.109.201.174
iKubernetes demoapp v1.0 !! ClientIP: 10.244.2.84, ServerName: demoapp-5f7d8f9847-qm4xk, ServerIP: 10.244.1.79!

Next, delete two Pods to verify that replacements are created automatically and re-associated with the Service:

root@master:~# kubectl  delete pods demoapp-5f7d8f9847-n5p8j demoapp-5f7d8f9847-wqvwc

Curl the demoapp Service again:

root@master:~# curl   10.109.201.174
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-5f7d8f9847-5cp7c, ServerIP: 10.244.2.85!
root@master:~# curl   10.109.201.174
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-5f7d8f9847-lzsjr, ServerIP: 10.244.1.80!
root@master:~# curl   10.109.201.174
iKubernetes demoapp v1.0 !! ClientIP: 10.244.0.0, ServerName: demoapp-5f7d8f9847-qm4xk, ServerIP: 10.244.1.79!

Scaling up and down

Format:
  # Scale stateful set named 'web' to 3
  kubectl scale --replicas=3 statefulset/web

Scale the demoapp Deployment up to 4 Pods:

root@master:~# kubectl  scale deployment  demoapp --replicas=4

Check the Pod count for this Deployment:

root@master:~# kubectl   get  deployment demoapp
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
demoapp   4/4     4            4           11d

Scaling down works the same way; just lower the replicas value:

root@master:~# kubectl  scale deployment  demoapp --replicas=2 
deployment.apps/demoapp scaled
root@master:~# kubectl   get  deployment demoapp
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
demoapp   2/2     2            2           11d

Of course, here we only scaled manually. The Kubernetes HPA (Horizontal Pod Autoscaler) can scale automatically, and there is also a VPA (Vertical Pod Autoscaler).

VPA, vertical scaling: upgrading the hardware resources of the instance itself.
HPA, horizontal scaling: increasing the number of replicas to balance the load.

The HPA automatically scales in or out based on the observed resource utilization of the Pods.
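As a sketch (assuming a metrics server is running in the cluster), an HPA targeting the demoapp Deployment could be declared like this; the 50% CPU target and the 2-10 replica range are illustrative values, not from the original example:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demoapp
spec:
  scaleTargetRef:            # the workload this HPA scales
    apiVersion: apps/v1
    kind: Deployment
    name: demoapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # scale out when average CPU use exceeds 50%
```

The same can be created imperatively with: kubectl autoscale deployment demoapp --min=2 --max=10 --cpu-percent=50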

Creating a namespace

root@master:~# kubectl create namespace  test   --dry-run=client  -o yaml
apiVersion: v1
kind: Namespace
metadata:
  creationTimestamp: null
  name: test
spec: {}
status: {}

Create a namespace from a YAML config file:

root@master:~/K8S_learing# cat namespace_dev.yml 
apiVersion: v1
kind: Namespace
metadata:
  name: test
root@master:~/K8S_learing# kubectl  apply  -f  namespace_dev.yml 
namespace/test created
root@master:~/K8S_learing# kubectl   get ns   test
NAME   STATUS   AGE
test   Active   8s

Within one namespace you cannot create two Deployments with the same name, but the same name can be reused in a different namespace:

root@master:~/K8S_learing# kubectl create deployment demoapp  --image=ikubernetes/demoapp:v1.0 --replicas=2 
error: failed to create deployment: deployments.apps "demoapp" already exists
root@master:~/K8S_learing# kubectl create deployment demoapp  --image=ikubernetes/demoapp:v1.0 --replicas=2 -n test
deployment.apps/demoapp created
root@master:~/K8S_learing# kubectl  get deployment -n test
NAME      READY   UP-TO-DATE   AVAILABLE   AGE
demoapp   2/2     2            2           24s

Deleting a Deployment in a namespace

Deleting a Deployment in a namespace also deletes all the Pods belonging to that Deployment:

root@master:~/K8S_learing#  kubectl delete deployment demoapp -n test
deployment.apps "demoapp" deleted

Create a Deployment in a specific namespace from a YAML file:

root@master:~/K8S_learing# kubectl  create -f  demoapp_test.yml 

root@master:~/K8S_learing# cat demoapp_test.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: demoapp02
  name: demoapp02
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demoapp02
  strategy: {}
  template:
    metadata:
      labels:
        app: demoapp02
    spec:
      containers:
      - image: ikubernetes/demoapp:v1.0
        name: demoapp02
        resources: {}
status: {}

Now we can see the resources in that namespace:

root@master:~/K8S_learing# kubectl   get pods  -n  test
NAME                         READY   STATUS    RESTARTS   AGE
demoapp02-674df64cb7-g7ft4   1/1     Running   0          2m11s
demoapp02-674df64cb7-jfccz   1/1     Running   0          2m11s

Delete the Deployment in that namespace using the same YAML file:

root@master:~/K8S_learing# kubectl  delete  -f  demoapp_test.yml 
deployment.apps "demoapp02" deleted

We can also merge the two files into one:

root@master:~/K8S_learing# cat demoapp_total.yml 
apiVersion: v1
kind: Namespace
metadata:
  name: test
---   # "---" separates multiple resource documents in one file
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: demoapp02
  name: demoapp02
  namespace: test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demoapp02
  strategy: {}
  template:
    metadata:
      labels:
        app: demoapp02
    spec:
      containers:
      - image: ikubernetes/demoapp:v1.0
        name: demoapp02
        resources: {}
status: {}
