calico.yml Explained
Preface
For creating the Calico network in Kubernetes, see the document 《calico网络安装和删除》.
1. Overview
1.1 URL
The official download address is:
#wget https://docs.projectcalico.org/manifests/calico.yaml
1.2 Overview of the Required Objects
The manifest mainly deploys two components, calico-node and calico-kube-controllers, and defines the following resources (a short verification sketch follows this list):
- ConfigMap calico-config
Purpose: the configuration required by Calico plus the CNI network configuration.
- DaemonSet calico-node
Purpose: initializes networking on each node and ensures Pod-to-Pod connectivity across nodes.
- Deployment calico-kube-controllers
Purpose: manages network policies in the Kubernetes cluster.
- RBAC rules
- CRDs (Custom Resource Definitions)
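To apply the manifest and confirm that these objects were actually created, a minimal check could look like the sketch below (only standard kubectl commands; the object names are the ones defined in this manifest):

# Install Calico from the downloaded manifest.
kubectl apply -f calico.yaml
# The main workloads and their configuration.
kubectl -n kube-system get configmap calico-config
kubectl -n kube-system get daemonset calico-node
kubectl -n kube-system get deployment calico-kube-controllers
# ServiceAccounts (RBAC) and Calico CRDs created by the same file.
kubectl -n kube-system get serviceaccount | grep calico
kubectl get crd | grep projectcalico.org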
2. ConfigMap
Purpose: holds the configuration required by Calico and the CNI network configuration.
kind: ConfigMap
apiVersion: v1
metadata:
  name: calico-config
  namespace: kube-system
data:
  # Typha is enabled here (the upstream default is "none").
  typha_service_name: "calico-typha"
  # Configure the backend to use.
  calico_backend: "bird"
  # Configure the MTU to use for workload interfaces and tunnels.
  # By default, MTU is auto-detected, and explicitly setting this field should not be required.
  # You can override auto-detection by providing a non-zero value.
  veth_mtu: "0"
  # The CNI network configuration to install on each node. The special
  # values in this config will be automatically populated.
  cni_network_config: |-
    {
      "name": "k8s-pod-network",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "calico",
          "log_level": "info",
          "log_file_path": "/var/log/calico/cni/cni.log",
          "datastore_type": "kubernetes",
          "nodename": "__KUBERNETES_NODE_NAME__",
          "mtu": __CNI_MTU__,
          "ipam": {
              "type": "calico-ipam"
          },
          "policy": {
              "type": "k8s"
          },
          "kubernetes": {
              "kubeconfig": "__KUBECONFIG_FILEPATH__"
          }
        },
        {
          "type": "portmap",
          "snat": true,
          "capabilities": {"portMappings": true}
        },
        {
          "type": "bandwidth",
          "capabilities": {"bandwidth": true}
        }
      ]
    }
typha_service_name
Typha mode. When the Kubernetes API is used as the datastore and the cluster has more than 50 nodes, enabling Typha is recommended. The Typha component helps Calico scale to a large number of nodes without putting excessive load on the Kubernetes API server. Here typha_service_name has been changed from "none" to "calico-typha".
veth_mtu
The MTU (Maximum Transmission Unit) of the workload network interfaces, i.e. the largest packet size that can be transmitted, in bytes.
calico_backend
The Calico backend; the default is "bird".
cni_network_config
The CNI-compliant network configuration. The CNI network configuration file is generated under /etc/cni/net.d (a sketch for comparing the template with the rendered file follows at the end of this section):
[root@calico-master ~]# ll /etc/cni/net.d/
总用量 8
-rw-r--r-- 1 root root 664 7月 14 15:32 10-calico.conflist
-rw------- 1 root root 2778 7月 14 15:32 calico-kubeconfig
"type": "calico"
表示kubelet将在/opt/cni/bin
目录下搜索名为calico的可执行文件并调用它完成容器网络设置。
[root@calico-master ~]# ll /opt/cni/bin/
总用量 181900
-rwxr-xr-x 1 root root 4159518 7月 14 15:32 bandwidth
-rwxr-xr-x 1 root root 3581192 9月 10 2020 bridge
-rwxr-xr-x 1 root root 41472000 7月 14 15:32 calico
-rwxr-xr-x 1 root root 41472000 7月 14 15:32 calico-ipam
-rwxr-xr-x 1 root root 9837552 9月 10 2020 dhcp
-rwxr-xr-x 1 root root 4699824 9月 10 2020 firewall
-rwxr-xr-x 1 root root 3069556 7月 14 15:32 flannel
-rwxr-xr-x 1 root root 3274160 9月 10 2020 host-device
-rwxr-xr-x 1 root root 3614480 7月 14 15:32 host-local
-rwxr-xr-x 1 root root 41472000 7月 14 15:32 install
-rwxr-xr-x 1 root root 3377272 9月 10 2020 ipvlan
-rwxr-xr-x 1 root root 3209463 7月 14 15:32 loopback
-rwxr-xr-x 1 root root 3440168 9月 10 2020 macvlan
-rwxr-xr-x 1 root root 3939867 7月 14 15:32 portmap
-rwxr-xr-x 1 root root 3528800 9月 10 2020 ptp
-rwxr-xr-x 1 root root 2849328 9月 10 2020 sbr
-rwxr-xr-x 1 root root 2503512 9月 10 2020 static
-rwxr-xr-x 1 root root 3356587 7月 14 15:32 tuning
-rwxr-xr-x 1 root root 3377120 9月 10 2020 vlan
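As mentioned above, install-cni renders cni_network_config into 10-calico.conflist, replacing the placeholders (__CNI_MTU__, __KUBERNETES_NODE_NAME__, __KUBECONFIG_FILEPATH__). A rough way to compare the template with the rendered result, assuming calico-node is already running on the node:

# Template as stored in the cluster, placeholders still unexpanded.
kubectl -n kube-system get configmap calico-config -o yaml
# Rendered CNI configuration written by the install-cni init container.
cat /etc/cni/net.d/10-calico.conflist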
3. calico-node
3.1 Pod and Containers
calico-node runs as a DaemonSet, so one calico-node Pod runs on every node.
- It contains three init containers: upgrade-ipam, install-cni, and flexvol-driver.
Describing the pod gives the following output (a command sketch to reproduce this follows the excerpts):
Controlled By: DaemonSet/calico-node
upgrade-ipam:
Container ID: docker://5896e83786f452577a7cd4b8a9e9a10c2f7f4950a5072af61f9f3ea7922232cc
Image: docker.io/calico/cni:v3.19.1
……
install-cni:
Container ID: docker://86fd9456c263746a61c43d519c01abe4e75f42cbd9976d985bd06d22d1733ebc
Image: docker.io/calico/cni:v3.19.1
……
flexvol-driver:
Container ID: docker://3ab9f73561d95e90ef00c9b19f3276c030da447c09501b013f558189ea8e8ff5
Image: docker.io/calico/pod2daemon-flexvol:v3.19.1
……
- Main container: calico-node
Purpose: manages the Pod's network configuration and ensures connectivity between the Pod network and every node.
Containers:
calico-node:
Container ID: docker://222e79bf84bb3c58da0fe6ff0e71b14d537c68c39e49d1cd44843bafcfb90e46
Image: docker.io/calico/node:v3.19.1
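The excerpts above were taken with kubectl describe. To reproduce them, something along these lines should work (calico-node-xxxxx is a placeholder; use a real pod name from the first command):

# One calico-node pod per node, selected via the DaemonSet's label.
kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
# Shows the three init containers and the main calico-node container.
kubectl -n kube-system describe pod calico-node-xxxxx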
3.2 yml File
# Source: calico/templates/calico-node.yaml
# This manifest installs the calico-node container, as well
# as the CNI plugins and network config on
# each master and worker node in a Kubernetes cluster.
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: calico-node
  namespace: kube-system
  labels:
    k8s-app: calico-node
spec:
  selector:
    matchLabels:
      k8s-app: calico-node
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        k8s-app: calico-node
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      hostNetwork: true
      tolerations:
        # Make sure calico-node gets scheduled on all nodes.
        - effect: NoSchedule
          operator: Exists
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - effect: NoExecute
          operator: Exists
      serviceAccountName: calico-node
      # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force
      # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods.
      terminationGracePeriodSeconds: 0
      priorityClassName: system-node-critical
      initContainers:
        # This container performs upgrade from host-local IPAM to calico-ipam.
        # It can be deleted if this is a fresh installation, or if you have already
        # upgraded to use calico-ipam.
        - name: upgrade-ipam
          image: docker.io/calico/cni:v3.19.1
          command: ["/opt/cni/bin/calico-ipam", "-upgrade"]
          envFrom:
            - configMapRef:
                # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
                name: kubernetes-services-endpoint
                optional: true
          env:
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
          volumeMounts:
            - mountPath: /var/lib/cni/networks
              name: host-local-net-dir
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
          securityContext:
            privileged: true
        # This container installs the CNI binaries
        # and CNI network config file on each node.
        - name: install-cni
          image: docker.io/calico/cni:v3.19.1
          command: ["/opt/cni/bin/install"]
          envFrom:
            - configMapRef:
                # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
                name: kubernetes-services-endpoint
                optional: true
          env:
            # Name of the CNI config file to create.
            - name: CNI_CONF_NAME
              value: "10-calico.conflist"
            # The CNI network config to install on each node.
            - name: CNI_NETWORK_CONFIG
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: cni_network_config
            # Set the hostname based on the k8s node name.
            - name: KUBERNETES_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # CNI MTU Config variable
            - name: CNI_MTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Prevents the container from sleeping forever.
            - name: SLEEP
              value: "false"
          volumeMounts:
            - mountPath: /host/opt/cni/bin
              name: cni-bin-dir
            - mountPath: /host/etc/cni/net.d
              name: cni-net-dir
          securityContext:
            privileged: true
        # Adds a Flex Volume Driver that creates a per-pod Unix Domain Socket to allow Dikastes
        # to communicate with Felix over the Policy Sync API.
        - name: flexvol-driver
          image: docker.io/calico/pod2daemon-flexvol:v3.19.1
          volumeMounts:
            - name: flexvol-driver-host
              mountPath: /host/driver
          securityContext:
            privileged: true
      containers:
        # Runs calico-node container on each Kubernetes node. This
        # container programs network policy and routes on each
        # host.
        - name: calico-node
          image: docker.io/calico/node:v3.19.1
          envFrom:
            - configMapRef:
                # Allow KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT to be overridden for eBPF mode.
                name: kubernetes-services-endpoint
                optional: true
          env:
            # Use Kubernetes API as the backing datastore.
            - name: DATASTORE_TYPE
              value: "kubernetes"
            # Wait for the datastore.
            - name: WAIT_FOR_DATASTORE
              value: "true"
            # Set based on the k8s node name.
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Choose the backend to use.
            - name: CALICO_NETWORKING_BACKEND
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: calico_backend
            # Cluster type to identify the deployment type
            - name: CLUSTER_TYPE
              value: "k8s,bgp"
            # Auto-detect the BGP IP address.
            - name: IP
              value: "autodetect"
            # Enable IPIP
            - name: CALICO_IPV4POOL_IPIP
              #value: "Always"
              value: "off"
            # Enable or Disable VXLAN on the default IP pool.
            - name: CALICO_IPV4POOL_VXLAN
              value: "Never"
            # Set MTU for tunnel device used if ipip is enabled
            - name: FELIX_IPINIPMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the VXLAN tunnel device.
            - name: FELIX_VXLANMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # Set MTU for the Wireguard tunnel device.
            - name: FELIX_WIREGUARDMTU
              valueFrom:
                configMapKeyRef:
                  name: calico-config
                  key: veth_mtu
            # The default IPv4 pool to create on startup if none exists. Pod IPs will be
            # chosen from this range. Changing this value after installation will have
            # no effect. This should fall within `--cluster-cidr`.
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"
            # Disable file logging so `kubectl logs` works.
            - name: CALICO_DISABLE_FILE_LOGGING
              value: "true"
            # Set Felix endpoint to host default action to ACCEPT.
            - name: FELIX_DEFAULTENDPOINTTOHOSTACTION
              value: "ACCEPT"
            # Disable IPv6 on Kubernetes.
            - name: FELIX_IPV6SUPPORT
              value: "false"
            - name: FELIX_HEALTHENABLED
              value: "true"
          securityContext:
            privileged: true
          resources:
            requests:
              cpu: 250m
          livenessProbe:
            exec:
              command:
                - /bin/calico-node
                - -felix-live
                - -bird-live
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - /bin/calico-node
                - -felix-ready
                - -bird-ready
            periodSeconds: 10
          volumeMounts:
            - mountPath: /lib/modules
              name: lib-modules
              readOnly: true
            - mountPath: /run/xtables.lock
              name: xtables-lock
              readOnly: false
            - mountPath: /var/run/calico
              name: var-run-calico
              readOnly: false
            - mountPath: /var/lib/calico
              name: var-lib-calico
              readOnly: false
            - name: policysync
              mountPath: /var/run/nodeagent
            # For eBPF mode, we need to be able to mount the BPF filesystem at /sys/fs/bpf so we mount in the
            # parent directory.
            - name: sysfs
              mountPath: /sys/fs/
              # Bidirectional means that, if we mount the BPF filesystem at /sys/fs/bpf it will propagate to the host.
              # If the host is known to mount that filesystem already then Bidirectional can be omitted.
              mountPropagation: Bidirectional
            - name: cni-log-dir
              mountPath: /var/log/calico/cni
              readOnly: true
      volumes:
        # Used by calico-node.
        - name: lib-modules
          hostPath:
            path: /lib/modules
        - name: var-run-calico
          hostPath:
            path: /var/run/calico
        - name: var-lib-calico
          hostPath:
            path: /var/lib/calico
        - name: xtables-lock
          hostPath:
            path: /run/xtables.lock
            type: FileOrCreate
        - name: sysfs
          hostPath:
            path: /sys/fs/
            type: DirectoryOrCreate
        # Used to install CNI.
        - name: cni-bin-dir
          hostPath:
            path: /opt/cni/bin
        - name: cni-net-dir
          hostPath:
            path: /etc/cni/net.d
        # Used to access CNI logs.
        - name: cni-log-dir
          hostPath:
            path: /var/log/calico/cni
        # Mount in the directory for host-local IPAM allocations. This is
        # used when upgrading from host-local to calico-ipam, and can be removed
        # if not using the upgrade-ipam init container.
        - name: host-local-net-dir
          hostPath:
            path: /var/lib/cni/networks
        # Used to create per-pod Unix Domain Sockets
        - name: policysync
          hostPath:
            type: DirectoryOrCreate
            path: /var/run/nodeagent
        # Used to install Flex Volume Driver
        - name: flexvol-driver-host
          hostPath:
            type: DirectoryOrCreate
            path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
3.3 Notes on Common Settings
DATASTORE_TYPE
The backing datastore. The default is "kubernetes", which ultimately still stores the data in the cluster's etcd via the Kubernetes API. It can also be set to "etcd" to use a separately managed etcd cluster.
CALICO_IPV4POOL_CIDR
The address pool used by Calico IPAM (IP Address Management). Make sure it matches the Pod network CIDR of the Kubernetes cluster.
CALICO_IPV4POOL_IPIP
Whether to enable IPIP mode. If enabled, a tunl0 virtual tunnel device is created on each node; if disabled, the physical host itself acts as the virtual router (vRouter) and no extra tunnel is created (a quick verification sketch follows below).
See section 4.4, "IP Pool 的两种模式" (the two IP Pool modes), in 《k8s网络基础》.
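To verify which mode is actually in effect after deployment, one option is to inspect the default IP pool and the node's interfaces. This is only a sketch and assumes calicoctl is installed (it is not part of calico.yaml); default-ipv4-ippool is the pool name Calico creates by default:

# Shows the pool CIDR and its ipipMode / vxlanMode.
calicoctl get ippool default-ipv4-ippool -o yaml
# With IPIP enabled a tunl0 tunnel device is created on each node;
# with IPIP off (as configured above) it is normally absent.
ip -d link show tunl0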
4. calico-kube-controllers
This Deployment manages network policies in the Kubernetes cluster. Its yml definition is as follows (a quick health-check sketch follows the manifest):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: calico-kube-controllers
  namespace: kube-system
  labels:
    k8s-app: calico-kube-controllers
spec:
  # The controllers can only have a single active instance.
  replicas: 1
  selector:
    matchLabels:
      k8s-app: calico-kube-controllers
  strategy:
    type: Recreate
  template:
    metadata:
      name: calico-kube-controllers
      namespace: kube-system
      labels:
        k8s-app: calico-kube-controllers
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      tolerations:
        # Mark the pod as a critical add-on for rescheduling.
        - key: CriticalAddonsOnly
          operator: Exists
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      serviceAccountName: calico-kube-controllers
      priorityClassName: system-cluster-critical
      containers:
        - name: calico-kube-controllers
          image: docker.io/calico/kube-controllers:v3.19.1
          env:
            # Choose which controllers to run.
            - name: ENABLED_CONTROLLERS
              value: node
            - name: DATASTORE_TYPE
              value: kubernetes
          livenessProbe:
            exec:
              command:
                - /usr/bin/check-status
                - -l
            periodSeconds: 10
            initialDelaySeconds: 10
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
                - /usr/bin/check-status
                - -r
            periodSeconds: 10
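A quick health check for the controller (its liveness and readiness probes call /usr/bin/check-status inside the container):

# The Deployment should report 1/1 ready once the datastore is reachable.
kubectl -n kube-system get deployment calico-kube-controllers
# Controller logs are the first place to look if the readiness probe fails.
kubectl -n kube-system logs deployment/calico-kube-controllers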
5. Others
The RBAC rules and the custom resource definitions (CRDs) are not covered here; refer to the official documentation.