Unified L4/L7 Application Publishing with F5 in a Kubernetes Environment (By Jeremy文轩)
This document explains how CIS should be deployed when Kubernetes uses Flannel in VXLAN mode. It also covers how to build a K8S cluster with Flannel VXLAN and records a few simple troubleshooting steps; hopefully it is helpful to you.
I. Lab Topology
II. Setting Up the K8S Environment
Role | IP |
---|---|
master | 192.168.137.2 |
node | 192.168.137.3 |
1. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
2. Disable SELinux:
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary
3. Disable swap:
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab  # permanent
4. Set the hostname:
hostnamectl set-hostname <hostname>
5. Add hosts entries on all nodes:
cat >> /etc/hosts << EOF
192.168.137.2 master
192.168.137.3 node
192.168.137.4 bigip1
EOF
6. Pass bridged IPv4 traffic to the iptables chains:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply the settings
cat > /proc/sys/net/ipv4/ip_forward << EOF
1
EOF
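The write to /proc above only changes the running value. Optionally (a small addition beyond the original steps), IP forwarding can also be made persistent by putting it into the same sysctl file:
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.d/k8s.conf
sysctl --system  # reload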
7. Synchronize time:
yum install ntpdate -y
ntpdate time.windows.com
8. Install Docker
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum -y install docker-ce
systemctl enable docker && systemctl start docker
docker --version
Docker version 18.06.1-ce, build e68fc7a
9. Add a Docker registry mirror
mkdir /etc/docker
cat > /etc/docker/daemon.json << EOF
{
"registry-mirrors": ["https://b9pmyelo.mirror.aliyuncd.com"]
}
EOF
systemctl restart docker
10. Add the Alibaba Cloud YUM repository for Kubernetes
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
11. Install kubeadm, kubectl, and kubelet
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet
12. Deploy the Kubernetes master
# --image-repository: pull the control-plane images from a domestic mirror; if omitted, images are pulled from the default registries abroad
# --service-cidr / --pod-network-cidr: choose ranges that do not conflict with your existing networks
kubeadm init \
  --apiserver-advertise-address=192.168.137.2 \
  --image-repository registry.aliyuncs.com/google_containers \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16
Press Enter; this deploys the control-plane components on the master.
If the deployment runs into problems, you can add --v=5 to watch the image pulls in real time.
If it still fails to install, you can use the following commands:
docker pull gotok8s/coredns:v1.8.0
docker tag 296a6d5035e2 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0
# In newer versions the coredns image was renamed to google_containers/coredns/coredns, so pulling it directly fails with an image-not-found error
13. Configure kubectl access (and command completion)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
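If you also want the command completion mentioned in the step title, a minimal sketch (assuming a bash shell) is:
yum install -y bash-completion
source <(kubectl completion bash)
echo 'source <(kubectl completion bash)' >> ~/.bashrc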
14. Join the node to the master
kubeadm join 192.168.137.2:6443 --token f0vp6k.cm22l1qj7naui0k3 \
--discovery-token-ca-cert-hash sha256:f614575511d70e98c9cb5864e393000a062de43e7375e5429994d6ebaeed9c8c
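If the original token has expired, a new join command can be generated on the master (standard kubeadm usage, not specific to this lab):
kubeadm token create --print-join-command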
15. Deploy Flannel
kubectl create -f kube-flannel.yaml
Check the interface changes:
You can see that a flannel.1 interface comes up on the master, while cni0, flannel.1, veth and other interfaces appear on the node; this is down to how Flannel VXLAN works.
Check the routes:
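For reference, a couple of standard commands to confirm what the screenshots showed (interface and subnet names assume the defaults used in this lab):
ip -d link show flannel.1   # should show a vxlan interface with id 1 and dstport 8472
ip route | grep 10.244      # each remote node's Pod subnet should be routed via flannel.1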
16. Deploy nginx for testing
kubectl create -f nginx-test.yaml
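To confirm the test Pods came up and got addresses from the Pod CIDR, you can run (assuming the objects in nginx-test.yaml live in the default namespace):
kubectl get pods -o wide   # Pod IPs should fall inside 10.244.0.0/16
kubectl get svc            # the nginx Service and its ClusterIP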
III. Preparation Before You Start
1. Install AS3 on the F5
touch /var/config/rest/iapps/enable  # this command applies to versions earlier than v14.0
Import the RPM package into the F5.
Before starting, verify that AS3 was installed successfully:
https://clouddocs.f5.com/products/extensions/f5-appsvcs-extension/latest/userguide/quick-start.html#quick-start-example-declaration
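A quick way to verify AS3 is up is to query its info endpoint over iControl REST; replace the address and credentials with your own BIG-IP values:
curl -sku admin:admin https://192.168.137.4/mgmt/shared/appsvcs/info   # use :8443 if your management GUI listens on 8443, as in the CIS deployment below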
2. Remove the Alibaba Cloud repository configuration
mv /etc/yum.repos.d/kubernetes.repo kubernetes.repo.bak
mv /etc/docker/daemon.json daemon.json.bak
mv /etc/yum.repos.d/docker-ce.repo docker-ce.repo.bak
IV. Creating the VXLAN
1. Create the partition and the VXLAN tunnel profile
tmsh create auth partition kubernetes
tmsh create net tunnels vxlan fl-vxlan port 8472 flooding-type none  # Flannel VXLAN uses UDP port 8472
2. Create the VXLAN tunnel
tmsh create net tunnels tunnel fl-vxlan key 1 local-address 192.168.137.4 profile fl-vxlan  # the local-address here is the F5's management address
3. Create the self-IP facing the Pods
tmsh create net self pod_ipaddr address 10.244.137.4/16 vlan fl-vxlan allow-service all  # this is the address facing the Pods; it must not conflict with any address in the K8S cluster, and the subnet it uses will be declared later in the node yaml, so pick a reasonably large range here
4. Save the configuration
tmsh save sys config
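As an optional sanity check, the objects created above can be listed back:
tmsh list net tunnels tunnel fl-vxlan
tmsh list net self pod_ipaddr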
5. Before creating the pseudo node, check the tunnel's MAC address and update it in bigip-node.yaml
tmsh show net tunnels tunnel fl-vxlan all-properties
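The MAC address needed for bigip-node.yaml can be picked out of that output, for example:
tmsh show net tunnels tunnel fl-vxlan all-properties | grep -i mac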
6. Create the BIG-IP node object
kubectl create -f bigip-node.yaml
bigip-node.yaml
apiVersion: v1
kind: Node
metadata:
  name: bigip1
  annotations:
    # Replace IP with Self-IP for your deployment
    flannel.alpha.coreos.com/public-ip: "192.168.137.4"
    # Replace MAC with your BIG-IP Flannel VXLAN Tunnel MAC
    flannel.alpha.coreos.com/backend-data: '{"VtepMAC":"00:0c:29:03:2e:22"}'
    flannel.alpha.coreos.com/backend-type: "vxlan"
    flannel.alpha.coreos.com/kube-subnet-manager: "true"
spec:
  # Replace Subnet with your BIG-IP Flannel Subnet
  podCIDR: "10.244.137.0/24"  # announces to the cluster which subnet this node allocates, so Pod IPs will not conflict with the fl-vxlan self-IP created above
7. Add a K8S Secret containing the F5 username and password
kubectl create secret generic bigip-login -n kube-system --from-literal=username=admin --from-literal=password=admin
8. Create the service account used to deploy CIS
kubectl create serviceaccount bigip-ctlr -n kube-system
9. Create the RBAC objects, which reference the secret and service account created above
kubectl apply -f k8s_rbac.yaml
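The content of k8s_rbac.yaml is not shown here. As a rough lab-only alternative (an assumption, not the author's actual manifest), the service account can simply be bound to cluster-admin:
kubectl create clusterrolebinding bigip-ctlr-clusteradmin --clusterrole=cluster-admin --serviceaccount=kube-system:bigip-ctlr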
10. Deploy the bigip-ctlr (CIS)
kubectl apply -f cis_deploy.yaml
cis_deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr-deployment
  namespace: kube-system
spec:
  # DO NOT INCREASE REPLICA COUNT
  replicas: 1
  selector:
    matchLabels:
      app: k8s-bigip-ctlr-deployment
  template:
    metadata:
      labels:
        app: k8s-bigip-ctlr-deployment
    spec:
      # Name of the Service Account bound to a Cluster Role with the required
      # permissions
      containers:
        - name: k8s-bigip-ctlr
          image: "f5networks/k8s-bigip-ctlr:2.3.0"
          env:
            - name: BIGIP_USERNAME
              valueFrom:
                secretKeyRef:
                  # Replace with the name of the Secret containing your login
                  # credentials
                  name: bigip-login
                  key: username
            - name: BIGIP_PASSWORD
              valueFrom:
                secretKeyRef:
                  # Replace with the name of the Secret containing your login
                  # credentials
                  name: bigip-login
                  key: password
          command: ["/app/bin/k8s-bigip-ctlr"]
          args: [
            # See the k8s-bigip-ctlr documentation for information about
            # all config options
            # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest
            "--bigip-username=$(BIGIP_USERNAME)",
            "--bigip-password=$(BIGIP_PASSWORD)",
            "--bigip-url=https://192.168.137.4:8443",
            "--bigip-partition=kubernetes",
            "--pool-member-type=cluster",
            "--insecure=true",
            # log the AS3 responses from the BIG-IP
            "--log-as3-response=true",
            # log level of the controller; the default INFO shows very little, which makes troubleshooting hard
            "--log-level=DEBUG",
            # required when Flannel VXLAN is used; /Common/fl-vxlan is the full path of the VXLAN tunnel configured on the F5.
            # Without this parameter the FDB entries on the F5 are not learned automatically.
            "--flannel-name=/Common/fl-vxlan",
            # for secure communication provide the internal ca certificates using config-map with below option and remove insecure parameter
            #"--trusted-certs-cfgmap=<namespace/configmap>",
          ]
      serviceAccount: bigip-ctlr
      serviceAccountName: bigip-ctlr
      #imagePullSecrets:
        # Secret that gives access to a private docker registry
        #- name: f5-docker-images
        # Secret containing the BIG-IP system login credentials
        #- name: bigip-login
If the controller does not come up for a long time, you can try pulling the image manually:
docker pull f5networks/k8s-bigip-ctlr:2.3.0
Check the controller deployment status:
Check the controller logs:
kubectl logs -n kube-system k8s-bigip-ctlr-deployment-75d66b8bb5-mwjmx -f
Now check the FDB entries on the F5.
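On the BIG-IP this can be done with, for example:
tmsh show net fdb tunnel fl-vxlan   # should list one VTEP record per K8S node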
V. Deploying a Test Application
kubectl create -f l7_test.yaml
l7_test.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: f5-app
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: f5-app
  template:
    metadata:
      labels:
        app: f5-app
    spec:
      containers:
        - image: nginx:latest
          imagePullPolicy: Always
          name: nginx
          ports:
            - containerPort: 80
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: f5-app
  namespace: default
  labels:
    app: f5-app
    cis.f5.com/as3-tenant: l7_vs_demo
    cis.f5.com/as3-app: App1
    cis.f5.com/as3-pool: web_pool
spec:
  ports:
    - name: f5-app
      port: 80
      protocol: TCP
      targetPort: 80
  type: ClusterIP
  selector:
    app: f5-app
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: my-config-map
  namespace: default
  labels:
    f5type: virtual-server
    as3: "true"
data:
  template: |
    {
      "class": "AS3",
      "action": "deploy",
      "persist": true,
      "declaration": {
        "class": "ADC",
        "schemaVersion": "3.10.0",
        "id": "l7_vs_demo",
        "label": "l7_vs_demo",
        "remark": "HTTP application",
        "l7_vs_demo": {
          "class": "Tenant",
          "App1": {
            "class": "Application",
            "template": "http",
            "serviceMain": {
              "class": "Service_HTTP",
              "virtualAddresses": [
                "192.168.137.139"
              ],
              "virtualPort": 80,
              "pool": "web_pool"
            },
            "web_pool": {
              "class": "Pool",
              "monitors": [
                "http"
              ],
              "members": [
                {
                  "servicePort": 80,
                  "serverAddresses": []
                }
              ]
            }
          }
        }
      }
    }
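Once CIS has processed the ConfigMap, the declaration actually deployed on the BIG-IP can be read back over the AS3 REST API (credentials and port as used in the CIS deployment above):
curl -sku admin:admin https://192.168.137.4:8443/mgmt/shared/appsvcs/declare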
At this point, the health checks against the backend Pods failed, and the Pods could not be pinged from the F5 either.
Pinging from the master did not work either.
However, it did work from the node, because that Pod was running on the node itself.
Check the Pod status:
The coredns Pods were in ImagePullBackOff and had not started; coredns is the component in K8S responsible for cross-node Pod communication, which is why the ping from the master did not go through.
As shown, coredns was deployed with the registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0 image, but that image did not exist on the node.
So, pull the image manually on the node and re-tag it, as shown below.
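This is essentially the same workaround as on the master, pulling the image under its old name and re-tagging it (image name and tag taken from the earlier step):
docker pull gotok8s/coredns:v1.8.0
docker tag gotok8s/coredns:v1.8.0 registry.aliyuncs.com/google_containers/coredns/coredns:v1.8.0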
Checking the Pod status again, everything was back to normal.
After rebooting the master and the node, pinging the Pods from the master also worked.
At this point, the health checks on the F5 passed as well.
Access the VS (virtual server) to test.
Closing Remarks
This article showed how to use CIS in a K8S cluster to achieve unified L4/L7 application publishing. F5 CIS can greatly increase the level of CI/CD automation and reduce the burden of application publishing and operations. If you are interested in F5 or F5 CIS, feel free to get in touch.
Jeremy文轩, a lazy yet curious boy. An engineer, but more than an engineer.