  • 1. After deploying Calico, NodePort services are unreachable from other nodes
    Problem: the iptables FORWARD chain policy drops forwarded traffic
    Fix: run on every node: iptables -P FORWARD ACCEPT (a persistence sketch follows below)
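The FORWARD policy resets on reboot (Docker 1.13+ sets it to DROP when the daemon starts), so the fix should be persisted. A minimal sketch for a systemd host; the unit name and path are assumptions:

cat > /etc/systemd/system/iptables-forward-accept.service << EOF
[Unit]
Description=Keep the iptables FORWARD policy at ACCEPT
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/iptables -P FORWARD ACCEPT

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable --now iptables-forward-accept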
  • 2. kubectl get works, but viewing pod logs or exec-ing into a pod fails
    Problem: the apiserver has no permission to reach the kubelet API for the pods on the nodes
    Fix (a verification sketch follows the manifest):
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF

kubectl apply -f apiserver-to-kubelet-rbac.yaml
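
Once applied, both log access and exec should work again; a quick check with a throwaway pod (name and image are arbitrary):

kubectl run conn-test --image=busybox --restart=Never -- sleep 3600
kubectl logs conn-test            # should print (empty) logs instead of a 403
kubectl exec -it conn-test -- sh  # should open a shell in the pod
kubectl delete pod conn-test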
  • 3. After installing Calico, pods never become Ready
    Problem: a stale docker bridge interface (name starting with br-) is down (NO-CARRIER)
    Fix: delete the unused docker bridge
    sudo ifconfig br-59ec53121ef6 down
    sudo brctl delbr br-59ec53121ef6
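    Which bridge is at fault can be confirmed before deleting it; interface names differ per host:

    ip -br link show type bridge        # list all bridges with their state
    ip -br link | grep -i no-carrier    # show only interfaces without carrier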
    
  • 4. Create a secret for logging in to a private Docker registry
kubectl create secret docker-registry registry-harbor --namespace=default \
    --docker-server=registry.com --docker-username=cluster \
    --docker-password=1234567 --docker-email=cluster@qq.com
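
The secret only takes effect when a workload references it via imagePullSecrets; a minimal pod sketch (the pod name and image are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: private-app                # placeholder name
spec:
  imagePullSecrets:
  - name: registry-harbor          # the secret created above
  containers:
  - name: app
    image: registry.com/cluster/app:latest   # placeholder image in the private registry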
  • 5. Using a StorageClass
    Deployment: create the PVC first, then reference it as a volume
    StatefulSet: use volumeClaimTemplates directly (both patterns are sketched below)
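
A minimal sketch of both patterns; the StorageClass name nfs-client, the sizes, and the object names are assumptions:

# Deployment pattern: create the PVC first...
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  storageClassName: nfs-client     # assumption: an existing StorageClass
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
# ...then reference it in the Deployment's pod template:
#   volumes:
#   - name: data
#     persistentVolumeClaim:
#       claimName: data-pvc
---
# StatefulSet pattern: volumeClaimTemplates creates one PVC per replica
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels: {app: web}
  template:
    metadata:
      labels: {app: web}
    spec:
      containers:
      - name: web
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      storageClassName: nfs-client
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi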
  • 6. The CIDR-related configuration flags explained
    kube-controller-manager:
      --cluster-cidr=10.244.0.0/16            # pod network CIDR
      --service-cluster-ip-range=10.0.0.0/24  # service CIDR
    kube-apiserver:
      --service-cluster-ip-range: service CIDR
      (the pod CIDR is not an apiserver flag; --pod-network-cidr belongs to kubeadm init)
    kube-proxy:
      clusterCIDR: pod network CIDR
    calico:
      CALICO_IPV4POOL_CIDR: pod network CIDR
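    Assuming a binary/systemd deployment like the one above, consistency across the four places can be checked quickly; the unit-file and config paths are assumptions that depend on how the cluster was installed:

grep -E 'cluster-cidr|service-cluster-ip-range' /usr/lib/systemd/system/kube-controller-manager.service
grep 'service-cluster-ip-range' /usr/lib/systemd/system/kube-apiserver.service
grep -i clustercidr /opt/kubernetes/cfg/kube-proxy-config.yml
kubectl -n kube-system get ds calico-node -o yaml | grep -A1 CALICO_IPV4POOL_CIDR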
  • 7. How to change the Calico IP pool after it was misconfigured
    Editing the yaml file and re-applying it does not take effect;
    the pool must be changed through the calicoctl management tool (see the sketch below).
    Reference: "calico更换ip地址池-k8s" (changing the Calico IP pool in k8s)
    Note on the Calico address pool configuration:
    In Kubernetes, all three of the following parameters must equal or contain the Calico IP pool CIDR:
    kubeadm init: --pod-network-cidr
    kube-proxy: --cluster-cidr
    kube-controller-manager: --cluster-cidr
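    A sketch of the calicoctl procedure (the pool names and CIDR are assumptions; running pods keep their old addresses until they are recreated):

calicoctl get ippool -o wide                   # list the current pools
cat > new-pool.yaml << EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: new-ipv4-pool
spec:
  cidr: 10.244.0.0/16      # assumption: the corrected pod CIDR
  ipipMode: Always
  natOutgoing: true
EOF
calicoctl apply -f new-pool.yaml               # add the corrected pool
calicoctl get ippool default-ipv4-ippool -o yaml > old-pool.yaml
# edit old-pool.yaml, set spec.disabled: true, then:
calicoctl apply -f old-pool.yaml               # stop allocating from the old pool
calicoctl delete ippool default-ipv4-ippool    # once all pods have been recreated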
  • 8. After rebooting a node, kube-proxy reports an error
 Failed to delete stale service IP 11.0.0.2 connections, error: error deleting connection tracking state for UDP service IP: 11.0.0.2, error: error looking for path of conntrack: exec: "conntrack": executable file not found in $PATH

Fix: yum -y install conntrack (apt-get install -y conntrack on Debian/Ubuntu), then restart kube-proxy
Reference: https://blog.csdn.net/mayifan0/article/details/80731507

Parameter details

    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule    # NoSchedule: K8s will not schedule the Pod onto a Node carrying this taint
      - key: "node.kubernetes.io/unreachable"
        operator: "Exists"  # with operator Exists, value is ignored; only key and effect are needed
        effect: "NoExecute"   # NoExecute: K8s will not schedule the Pod onto the Node and also evicts Pods already running there
        # PreferNoSchedule: K8s tries to avoid scheduling the Pod onto a Node carrying this taint
        tolerationSeconds: 2 # tolerationSeconds: how long the pod may keep running on the node after a NoExecute taint it tolerates appears
      - key: "node.kubernetes.io/not-ready"
        operator: "Exists"
        effect: "NoExecute"
        tolerationSeconds: 2
      containers:
      - name: busybox   # a container spec requires a name
        image: busybox
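
For reference, the taints these tolerations match are set and removed like this; node1 and key1=value1 are placeholders:

kubectl taint nodes node1 key1=value1:NoSchedule     # add a taint
kubectl taint nodes node1 key1=value1:NoSchedule-    # the trailing "-" removes it
kubectl describe node node1 | grep -i taint          # inspect a node's taints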

  • k8s pod scheduling:
    https://lvjianzhao.gitee.io/lvjianzhao/2020/07/24/k8s%E4%B9%8Bpod%E8%B0%83%E5%BA%A6/
    pod affinity: https://www.jianshu.com/p/61725f179223
  • While deploying Calico, a node stayed NotReady; an abandoned br- interface turned out to be interfering, and deleting it fixed the node
    When deploying calico or flannel on a server with multiple NICs, the NIC must be specified explicitly
    Reference
    calico:
spec:
  containers:
  - env:
    - name: DATASTORE_TYPE
      value: kubernetes
    - name: IP_AUTODETECTION_METHOD  # add this env var to the calico-node DaemonSet
      value: interface=eth0          # pin IP autodetection to the internal NIC
    - name: WAIT_FOR_DATASTORE
      value: "true"

flannel:

      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.10.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth0    # specify the internal NIC
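
After changing the interface setting, the CNI pods must be recreated; whether the right NIC was picked can be checked roughly like this (the label selector and log wording are assumptions based on the upstream calico manifests):

kubectl -n kube-system delete pod -l k8s-app=calico-node     # recreate the calico DaemonSet pods
kubectl -n kube-system logs -l k8s-app=calico-node -c calico-node | grep -i autodetect
ip route | grep -E 'tunl0|flannel'                           # overlay routes should sit on the chosen NIC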