Add the Flink HA configuration

  flink-conf.yaml: |+
    jobmanager.rpc.address: flink-jobmanager
    taskmanager.numberOfTaskSlots: 50
    blob.server.port: 6124
    jobmanager.rpc.port: 6123
    taskmanager.rpc.port: 6122
    jobmanager.heap.size: 1524m
    taskmanager.memory.process.size: 4096m
    execution.target: kubernetes-session
    state.backend: filesystem
    state.checkpoints.dir: hdfs://192.168.5.131:25305/flink/cp
    state.savepoints.dir: hdfs://192.168.5.131:25305/flink/sp
    state.backend.incremental: true
    classloader.resolve-order: parent-first
    kubernetes.cluster-id: fat-bigdata-cluster-k8s-id
    high-availability: org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory
    high-availability.storageDir: hdfs://192.168.5.131:25305/flink/recovery
    #restart-strategy: fixed-delay
    #restart-strategy.fixed-delay.attempts: 10
    kubernetes.namespace: fat-bigdata-cluster
    kubernetes.service-account: flink-bigdata-cluster
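
The flink-conf.yaml block above is a data key inside a Kubernetes ConfigMap. A minimal sketch of the enclosing manifest, assuming a ConfigMap name of flink-config (the name is illustrative and must match the configMap volume referenced by the JobManager/TaskManager deployments):

apiVersion: v1
kind: ConfigMap
metadata:
  name: flink-config                 # hypothetical name; match the deployments' configMap volume
  namespace: fat-bigdata-cluster
data:
  flink-conf.yaml: |+
    jobmanager.rpc.address: flink-jobmanager
    # ... remaining keys exactly as listed above ...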

Add the ServiceAccount

apiVersion: v1
kind: ServiceAccount
metadata:
  name: fat-bigdata-cluster
  namespace: fat-bigdata-cluster
automountServiceAccountToken: false
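
Note that automountServiceAccountToken: false stops the API token from being mounted into pods automatically, yet Flink's Kubernetes HA services need that token to read and write ConfigMaps. The pod spec can opt back in; a minimal sketch of the relevant fields in the JobManager Deployment's pod template (the override is standard Kubernetes behavior, the surrounding Deployment is assumed):

spec:
  template:
    spec:
      serviceAccountName: fat-bigdata-cluster
      automountServiceAccountToken: true    # pod-level setting overrides the ServiceAccount-level false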

Add the ClusterRole

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: fat-bigdata-cluster
  name: configmaps-reader
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["update", "create", "get", "watch", "list"]
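
To apply and inspect the role (assuming the manifest is saved as configmaps-reader.yaml; the file name is illustrative):

kubectl apply -f configmaps-reader.yaml
kubectl describe clusterrole configmaps-reader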

Bind the ClusterRole to the ServiceAccount

kubectl delete clusterrolebinding flink-reader-binding
kubectl create clusterrolebinding flink-reader-binding --clusterrole=configmaps-reader --serviceaccount=fat-bigdata-cluster:default
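
To confirm the binding grants the permissions Flink needs, kubectl auth can-i can impersonate the bound ServiceAccount (the namespace's default account, as in the command above):

kubectl auth can-i create configmaps --as=system:serviceaccount:fat-bigdata-cluster:default -n fat-bigdata-cluster
kubectl auth can-i watch configmaps --as=system:serviceaccount:fat-bigdata-cluster:default -n fat-bigdata-cluster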

SQL job submission failure

Could not get the rest endpoint of flink-cluster-4764f7e28a290cbfd9bff24ce67b05f
  1. Changed kubernetes.cluster-id => no effect
  2. Removed the invalid gateway port configuration from the rest Service/Deployment => resolved
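
A quick way to narrow this error down is to check whether the rest Service for the configured cluster-id exists and exposes the REST port; Flink's native Kubernetes integration names it <cluster-id>-rest (the cluster-id below is the one from the configuration above):

kubectl get svc -n fat-bigdata-cluster | grep rest
kubectl describe svc fat-bigdata-cluster-k8s-id-rest -n fat-bigdata-cluster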