CNI Network Plugins
Kubernetes defines a network model, but leaves the concrete implementation of pod-to-pod communication to CNI network plugins. Common CNI plugins include Flannel, Calico, Canal, and Contiv; together Flannel and Calico account for close to 80% of deployments, with Flannel slightly ahead of Calico. This deployment uses Flannel as the network plugin.

1. The k8s CNI network plugin: Flannel (deployed on nodes 7-21 and 7-22)

GitHub releases: https://github.com/coreos/flannel/releases
Note: the steps below use k8s7-21.host.com as the example; install and configure the other compute node the same way.

# wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
mkdir /opt/flannel-v0.11.0  # the flannel tarball has no top-level directory, so create one
tar xf flannel-v0.11.0-linux-amd64.tar.gz -C /opt/flannel-v0.11.0/
ln -s /opt/flannel-v0.11.0 /opt/flannel
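A quick sanity check that the symlink points at the unpacked files (a sketch; the release tarball ships the flanneld binary plus a docker helper script and a README):
[root@k8s7-21 src]# ls /opt/flannel/
flanneld  mk-docker-opts.sh  README.md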
1.1 Copy the certificates
# flannel accesses etcd as a client, so it needs the client certificates
[root@k8s7-21 src]# mkdir /opt/flannel/certs
[root@k8s7-200 certs]# scp ca.pem client-key.pem client.pem k8s7-21:/opt/flannel/certs/
1.2 Create the startup script
[root@k8s7-21 flannel]# cat subnet.env # this node's subnet info; on 7-22 change FLANNEL_SUBNET (see the example after this block)
FLANNEL_NETWORK=172.7.0.0/16
FLANNEL_SUBNET=172.7.21.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false
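For reference, on k8s7-22 the same file differs only in the subnet line (a sketch following the same addressing plan, where each node owns 172.7.<node>.0/24):
[root@k8s7-22 flannel]# cat subnet.env
FLANNEL_NETWORK=172.7.0.0/16
FLANNEL_SUBNET=172.7.22.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=false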

Configure etcd: add the network config with the host-gw backend
[root@k8s7-21 src]# /opt/etcd/etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}'

VxLAN mode:
#etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "VxLAN"}}'

VxLAN with direct routing:
#etcdctl set /coreos.com/network/config '{"Network": "172.7.0.0/16", "Backend": {"Type": "VxLAN","Directrouting": true}}'

[root@k8s7-21 src]# /opt/etcd/etcdctl get /coreos.com/network/config # only needs to be set on one of the etcd nodes
{"Network": "172.7.0.0/16", "Backend": {"Type": "host-gw"}}
# public-ip is this host's own IP; iface is the host's outward-facing NIC
[root@k8s7-21 flannel]# cat flannel-startup.sh 
#!/bin/sh

WORK_DIR=$(dirname $(readlink -f $0))
[ $? -eq 0 ] && cd $WORK_DIR || exit

/opt/flannel/flanneld \
    --public-ip=10.4.7.21 \
    --etcd-endpoints=https://10.4.7.12:2379,https://10.4.7.21:2379,https://10.4.7.22:2379 \
    --etcd-keyfile=./certs/client-key.pem \
    --etcd-certfile=./certs/client.pem \
    --etcd-cafile=./certs/ca.pem \
    --iface=eth0 \
    --subnet-file=./subnet.env \
    --healthz-port=2401
    
[root@k8s7-21 flannel]# chmod +x flannel-startup.sh
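On k8s7-22 the same script is reused; under the addressing plan above the only flag that needs to change is the node's own IP:
/opt/flannel/flanneld \
    --public-ip=10.4.7.22 \
    ...                        # all other flags stay the same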
[root@k8s7-21 flannel]# cat /etc/supervisord.d/flannel.ini 
[program:flanneld-7-21]
command=/opt/flannel/flannel-startup.sh                      ; the program (relative uses PATH, can take args)
numprocs=1                                                   ; number of processes copies to start (def 1)
directory=/opt/flannel                                       ; directory to cwd to before exec (def no cwd)
autostart=true                                               ; start at supervisord start (default: true)
autorestart=true                                             ; restart at unexpected quit (default: true)
startsecs=30                                                 ; number of secs prog must stay running (def. 1)
startretries=3                                               ; max # of serial start failures (default 3)
exitcodes=0,2                                                ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT                                              ; signal used to kill process (default TERM)
stopwaitsecs=10                                              ; max num secs to wait b4 SIGKILL (default 10)
user=root                                                    ; setuid to this UNIX account to run the program
redirect_stderr=true                                         ; redirect proc stderr to stdout (default false)
stdout_logfile=/data/logs/flanneld/flanneld.stdout.log       ; stdout log path, NONE for none; default AUTO
stdout_logfile_maxbytes=64MB                                 ; max # logfile bytes b4 rotation (default 50MB)
stdout_logfile_backups=5                                     ; # of stdout logfile backups (default 10)
stdout_capture_maxbytes=1MB                                  ; number of bytes in 'capturemode' (default 0)
stdout_events_enabled=false                                  ; emit events on stdout writes (default false)
killasgroup=true
stopasgroup=true

[root@k8s7-21 src]# mkdir -p /data/logs/flanneld/

[root@k8s7-21 src]# supervisorctl update
flanneld-7-21: added process group
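To verify (a sketch): confirm the process stays RUNNING and, once flanneld is also running on the other node, that host-gw has installed a static route to the peer's pod subnet via the peer's host IP:
[root@k8s7-21 ~]# supervisorctl status flanneld-7-21      # expect state RUNNING after startsecs (30s)
[root@k8s7-21 ~]# route -n | grep 172.7.22                # expect 172.7.22.0/24 via gateway 10.4.7.22 on eth0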


1.3 Verify cross-host pod access
[root@k8s7-21 src]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP           NODE                NOMINATED NODE   READINESS GATES
nginx-ds-7db29   1/1     Running   1          2d    172.7.22.2   k8s7-22.host.com   <none>           <none>
nginx-ds-vvsz7   1/1     Running   1          2d    172.7.21.2   k8s7-21.host.com   <none>           <none>

[root@k8s7-21 src]# curl -I 172.7.22.2
HTTP/1.1 200 OK
Server: nginx/1.17.6
Date: Thu, 09 Jan 2020 14:55:21 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 19 Nov 2019 12:50:08 GMT
Connection: keep-alive
ETag: "5dd3e500-264"
Accept-Ranges: bytes
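The same check in the opposite direction (a sketch, run from the other node) confirms the network works both ways:
[root@k8s7-22 ~]# curl -I 172.7.21.2      # expect HTTP/1.1 200 OK from the pod running on k8s7-21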


Command history:
[root@k8s7-21 ~]# ls
curl-7.20.0.tar.gz  nginx-ds.yaml
[root@k8s7-21 ~]# kubectl apply -f nginx-ds.yaml 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
daemonset.extensions/nginx-ds configured
[root@k8s7-21 ~]# kubectl get pods -n defautl
No resources found.
[root@k8s7-21 ~]# kubectl get pods -n default
NAME             READY   STATUS    RESTARTS   AGE
nginx-ds-dk9k2   1/1     Running   5          27d
nginx-ds-gl5nq   1/1     Running   7          27d
[root@k8s7-21 ~]# kubectl delete pod nginx-ds-dk9k2
pod "nginx-ds-dk9k2" deleted
[root@k8s7-21 ~]# kubectl delete pod nginx-ds-gl5nq
pod "nginx-ds-gl5nq" deleted
[root@k8s7-21 ~]# kubectl get pods -o wide
NAME             READY   STATUS    RESTARTS   AGE     IP           NODE                NOMINATED NODE   READINESS GATES
nginx-ds-4nd7m   1/1     Running   0          2m43s   172.7.21.2   k8s7-21.host.com   <none>           <none>
nginx-ds-l98h4   1/1     Running   0          5m13s   172.7.22.2   k8s7-22.host.com   <none>           <none>

1.5 Fix pod-to-pod source IP pass-through

Do this on all nodes, i.e. optimize the NAT rules.

# When pod A accesses pod B across hosts, pod B sees the IP of pod A's host rather than pod A's own IP
[root@nginx-ds-jdp7q /]# tail -f /usr/local/nginx/logs/access.log
10.4.7.22 - - [13/Jan/2020:13:13:39 +0000] "GET / HTTP/1.1" 200 12 "-" "curl/7.29.0"
10.4.7.22 - - [13/Jan/2020:13:14:27 +0000] "GET / HTTP/1.1" 200 12 "-" "curl/7.29.0"
10.4.7.22 - - [13/Jan/2020:13:54:20 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0"
10.4.7.22 - - [13/Jan/2020:13:54:25 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0"

[root@k8s7-21 ~]# iptables-save |grep POSTROUTING|grep docker # the rule that causes the problem
-A POSTROUTING -s 172.7.21.0/24 ! -o docker0 -j MASQUERADE

Do the following on both nodes.

1. Install the iptables service
[root@k8s7-21 ~]# yum install -y iptables-services
[root@k8s7-21 ~]# systemctl start iptables.service ; systemctl enable iptables.service

# rules that need to be changed:
[root@k8s7-21 ~]# iptables-save |grep POSTROUTING|grep docker
-A POSTROUTING -s 172.7.21.0/24 ! -o docker0 -j MASQUERADE


[root@k8s7-21 ~]# iptables-save | grep -i reject
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited

# delete the old rule first, then insert a more specific one
[root@k8s7-21 ~]# iptables -t nat -D POSTROUTING -s 172.7.21.0/24 ! -o docker0 -j MASQUERADE
[root@k8s7-21 ~]# iptables -t nat -I POSTROUTING -s 172.7.21.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE
SNAT is applied only to traffic whose source is 172.7.21.0/24, whose destination is NOT in 172.7.0.0/16, and which does not leave via docker0.
In other words, on 10.4.7.21, packets from the local docker/pod subnet (172.7.21.0/24) that are heading for another pod subnet in 172.7.0.0/16 keep their original pod IP, while traffic leaving the cluster is still masqueraded.
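On k8s7-22 the equivalent change uses that node's docker subnet (172.7.22.0/24 under the plan above):
[root@k8s7-22 ~]# iptables -t nat -D POSTROUTING -s 172.7.22.0/24 ! -o docker0 -j MASQUERADE
[root@k8s7-22 ~]# iptables -t nat -I POSTROUTING -s 172.7.22.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE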


The result after optimization:
[root@k8s7-21 ~]# iptables-save |grep -i postrouting
:POSTROUTING ACCEPT [1:60]
:KUBE-POSTROUTING - [0:0]
-A POSTROUTING -s 172.7.21.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE


# also remove the default REJECT rules, otherwise forwarded pod traffic between hosts is dropped
[root@k8s7-21 ~]# iptables -t filter -D INPUT -j REJECT --reject-with icmp-host-prohibited
[root@k8s7-21 ~]# iptables -t filter -D FORWARD -j REJECT --reject-with icmp-host-prohibited

Summary of the SNAT rule change:
iptables -t nat -D POSTROUTING -s 172.7.21.0/24 ! -o docker0 -j MASQUERADE
iptables -t nat -I POSTROUTING -s 172.7.21.0/24 ! -d 172.7.0.0/16 ! -o docker0 -j MASQUERADE
Persist the rules: service iptables save

# Now cross-host access to a pod shows the real pod IP
[root@nginx-ds-jdp7q /]# tail -f /usr/local/nginx/logs/access.log  
172.7.22.2 - - [13/Jan/2020:14:15:39 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0"
172.7.22.2 - - [13/Jan/2020:14:15:47 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0"
172.7.22.2 - - [13/Jan/2020:14:15:48 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0"
172.7.22.2 - - [13/Jan/2020:14:15:48 +0000] "HEAD / HTTP/1.1" 200 0 "-" "curl/7.29.0"

2. The k8s service-discovery add-on: CoreDNS

Service discovery in k8s

Simply put, service discovery is the process by which services (applications) locate each other.
Service discovery is not unique to the cloud era; it was used in traditional monolithic architectures too. It matters most when:
·services (applications) are highly dynamic
·services (applications) are updated and released frequently
·services (applications) scale automatically
In a K8S cluster, pod IPs change constantly. How do we stay stable in the face of that change?
·The Service resource abstracts a group of pods, associated via a label selector
·The cluster network abstraction gives each service a relatively fixed "cluster IP", so the access point stays stable
So how do we automatically associate a Service's name with its cluster IP, so the cluster can discover the service on its own?
·Think of the traditional DNS model: k8s7-21.host.com → 10.4.7.21
·Can we build the same kind of model inside K8S: nginx-ds → 192.168.0.5

How K8S implements service discovery: DNS
Plugins (software) that implement DNS inside K8S:
·kube-dns: kubernetes v1.2 through v1.10
·CoreDNS: kubernetes v1.11 onward
Note:
DNS in K8S is not a cure-all! It should only be responsible for automatically maintaining the mapping from "service name" → "cluster IP".

1. Serve the YAML manifests over HTTP so they can later be applied by URL.
• Configure an nginx virtual host (on k8s7-200)
/etc/nginx/conf.d/k8s-yaml.od.com.conf
server {
    listen       80;
    server_name  k8s-yaml.od.com;

    location / {
        autoindex on;
        default_type text/plain;
        root /data/k8s-yaml;
    }
}
[root@k8s7-200 ~]# mkdir /data/k8s-yaml;
[root@k8s7-200 ~]# nginx -t && nginx -s reload


2. Configure DNS resolution (on k8s7-11)
[root@k8s7-11 ~]# cat /var/named/od.com.zone
$ORIGIN od.com.
$TTL 600	; 10 minutes
@   		IN SOA	dns.od.com. dnsadmin.od.com. (
				2020011301 ; serial
				10800      ; refresh (3 hours)
				900        ; retry (15 minutes)
				604800     ; expire (1 week)
				86400      ; minimum (1 day)
				)
				NS   dns.od.com.
$TTL 60	; 1 minute
dns                A    10.4.7.11
harbor             A    10.4.7.200
k8s-yaml           A    10.4.7.200

[root@k8s7-11 ~]# systemctl restart named
[root@k8s7-11 ~]# dig -t A k8s-yaml.od.com @10.4.7.11 +short
10.4.7.200

3. Deploy CoreDNS to K8S
# prepare the image
[root@k8s7-200 ~]# docker pull coredns/coredns:1.6.1
[root@k8s7-200 ~]# docker tag coredns/coredns:1.6.1 harbor.od.com/public/coredns:v1.6.1
[root@k8s7-200 ~]# docker push harbor.od.com/public/coredns:v1.6.1

Prepare the resource manifests under /data/k8s-yaml/coredns/ (on k8s7-200)
3.1 rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system

3.2 configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        log
        health
        ready
        kubernetes cluster.local 192.168.0.0/16
        forward . 10.4.7.11
        cache 30
        loop
        reload
        loadbalance
    }

3.3 deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: coredns
  template:
    metadata:
      labels:
        k8s-app: coredns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      containers:
      - name: coredns
        image: harbor.od.com/public/coredns:v1.6.1
        args:
        - -conf
        - /etc/coredns/Corefile
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile


3.4 service.yaml

apiVersion: v1
kind: Service
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: coredns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: coredns
  clusterIP: 192.168.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
  - name: metrics
    port: 9153
    protocol: TCP


Create the resources in order.
Open http://k8s-yaml.od.com/coredns in a browser to check that the manifests are served correctly.
Apply the resource manifests from any compute node.
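For example (a sketch, assuming the manifest file names used above):
[root@k8s7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/rbac.yaml
[root@k8s7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/configmap.yaml
[root@k8s7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/deployment.yaml
[root@k8s7-21 ~]# kubectl apply -f http://k8s-yaml.od.com/coredns/service.yaml
[root@k8s7-21 ~]# kubectl get pods,svc -n kube-system     # expect the coredns pod Running and its service at 192.168.0.2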


2.1 Test DNS
# create a deployment and expose it as a service
[root@k8s7-21 ~]# kubectl create deployment nginx-web --image=harbor.od.com/public/nginx:src_1.14.2
[root@k8s7-21 ~]# kubectl expose deployment nginx-web --port=80 --target-port=80 
[root@k8s7-21 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP        EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   192.168.0.1       <none>        443/TCP   8d
nginx-web    ClusterIP   192.168.164.230   <none>        80/TCP    8s
# Test DNS. From outside the cluster (on the host) you must use the FQDN (Fully Qualified Domain Name)
[root@k8s7-21 ~]# dig -t A nginx-web.default.svc.cluster.local @192.168.0.2 +short # in-cluster name resolves OK
192.168.164.230
[root@k8s7-21 ~]# dig -t A www.baidu.com @192.168.0.2 +short # external name resolves OK
www.a.shifen.com.
180.101.49.11
180.101.49.12
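Inside the cluster, pods can resolve the short service name, because the kubelet injects the cluster DNS server and search domains into each pod's /etc/resolv.conf (a sketch, assuming the kubelet is configured with cluster-dns 192.168.0.2 and cluster-domain cluster.local; the pod name is only an example):
[root@k8s7-21 ~]# kubectl exec -it nginx-ds-4nd7m -- cat /etc/resolv.conf
# expect: nameserver 192.168.0.2 plus search default.svc.cluster.local svc.cluster.local cluster.local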