I. Architecture Diagram

II. Modify the Cluster Configuration

1. Add startup parameters (applies only to kubeadm-installed k8s clusters)

(Modify on every master node.)

1.1 kube-apiserver manifest path:

/etc/kubernetes/manifests

1.2 Add the kube-apiserver parameter (the static pod restarts automatically once the manifest is saved)

In kube-apiserver.yaml, add --cloud-provider=external under spec.containers[0].command:

spec:
  containers:
  - command:
    - kube-apiserver
    - --cloud-provider=external   # add this line
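The kubelet watches /etc/kubernetes/manifests and re-creates the static pod on its own, so no manual restart is needed. A quick sanity check, assuming standard kubeadm labels and that the node name matches the hostname:

kubectl -n kube-system get pods -l component=kube-apiserver
kubectl -n kube-system get pod kube-apiserver-$(hostname) -o yaml | grep cloud-provider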

2. Add the kube-controller-manager parameter (the static pod restarts automatically)

(Modify on every master node.)

kube-controller-manager manifest path:

/etc/kubernetes/manifests

2.1 Add the kube-controller-manager parameter

Same placement as for the apiserver: in kube-controller-manager.yaml, add --cloud-provider=external under spec.containers[0].command:

spec:
  containers:
  - command:
    - kube-controller-manager
    - --cloud-provider=external   # add this line

3. kubelet drop-in path:

(Modify on all nodes, then restart the kubelet on every master and node.)

/etc/systemd/system/kubelet.service.d/10-kubeadm.conf

Get the value for --provider-id from the instance metadata (then fill it into the Environment line below):

echo `curl -s http://100.100.100.200/latest/meta-data/region-id`.`curl -s http://100.100.100.200/latest/meta-data/instance-id`
Environment="KUBELET_CLOUD_PROVIDER_ARGS=--cloud-provider=external --provider-id=cn-hangzhou.i-bp1auxi4brx5qtsvf019"

Append the following variable to the end of the kubelet ExecStart line (see the drop-in sketch below):

$KUBELET_CLOUD_PROVIDER_ARGS
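A minimal sketch of the edited drop-in, assuming the stock kubeadm 10-kubeadm.conf layout (your Environment lines may differ; the provider-id is the example value from above):

[Service]
Environment="KUBELET_CLOUD_PROVIDER_ARGS=--cloud-provider=external --provider-id=cn-hangzhou.i-bp1auxi4brx5qtsvf019"
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS $KUBELET_CLOUD_PROVIDER_ARGS

Reload systemd and restart the kubelet afterwards:

systemctl daemon-reload && systemctl restart kubelet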

III. Create the ConfigMap

1. Create a RAM sub-user on Alibaba Cloud (to obtain an AccessKey ID and Secret)

2. Create a dedicated RAM permission policy for the sub-user

{
  "Version": "1",
  "Statement": [
    {
      "Action": [
        "ecs:Describe*",
        "ecs:AttachDisk",
        "ecs:CreateDisk",
        "ecs:CreateSnapshot",
        "ecs:CreateRouteEntry",
        "ecs:DeleteDisk",
        "ecs:DeleteSnapshot",
        "ecs:DeleteRouteEntry",
        "ecs:DetachDisk",
        "ecs:ModifyAutoSnapshotPolicyEx",
        "ecs:ModifyDiskAttribute",
        "ecs:CreateNetworkInterface",
        "ecs:DescribeNetworkInterfaces",
        "ecs:AttachNetworkInterface",
        "ecs:DetachNetworkInterface",
        "ecs:DeleteNetworkInterface",
        "ecs:DescribeInstanceAttribute"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "cr:Get*",
        "cr:List*",
        "cr:PullRepository"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "slb:*"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "cms:*"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "vpc:*"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "log:*"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    },
    {
      "Action": [
        "nas:*"
      ],
      "Resource": [
        "*"
      ],
      "Effect": "Allow"
    }
  ]
}

3. Attach the policy to the sub-user (see the CLI sketch below)
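Granting can be done in the RAM console, or with the aliyun CLI; a sketch assuming the custom policy is named k8s-ccm and the sub-user ccm-user (both hypothetical names):

aliyun ram AttachPolicyToUser --PolicyType Custom --PolicyName k8s-ccm --UserName ccm-user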

4. Create the ConfigMap

Note: fill in the AccessKey ID and Secret created above as Base64 values (see the one-liners below).
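The encoding can be done locally instead of with an online converter:

echo -n '<AccessKeyID>' | base64
echo -n '<AccessKeySecret>' | base64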

apiVersion: v1
kind: ConfigMap
metadata:
  name: cloud-config
  namespace: kube-system
data:
  cloud-config.conf: |-
    {
        "Global": {
            "accessKeyID": "TFRBSTV0SGV0S1k1MXFoRWd0bTluWExM",
            "accessKeySecret": "d0JjYk5ZTXNZeXlPN280T3ZYcTJvcWdsWmNrQU1q"
        }
    }

Save the manifest as cloud-config.yaml and apply it:

kubectl apply -f cloud-config.yaml
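Verify it landed:

kubectl -n kube-system get configmap cloud-config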

5. Create the cloud-controller-manager kubeconfig

vim /etc/kubernetes/cloud-controller-manager.conf

Get the master's internal IP address and set it in the server field:

apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: $CA_DATA
    server: https://192.168.1.76:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:cloud-controller-manager
  name: system:cloud-controller-manager@kubernetes
current-context: system:cloud-controller-manager@kubernetes
users:
- name: system:cloud-controller-manager
  user:
    tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token

Generate the value for CA_DATA and substitute it into the file:

cat /etc/kubernetes/pki/ca.crt | base64 -w 0
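Or do the substitution in one step; a sketch assuming GNU sed and that the $CA_DATA placeholder appears verbatim in the file:

CA_DATA=$(base64 -w 0 /etc/kubernetes/pki/ca.crt)
sed -i "s|\$CA_DATA|${CA_DATA}|" /etc/kubernetes/cloud-controller-manager.conf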

IV. Deploy the Alibaba Cloud Load Balancer Plugin

1. Query the cluster's current CIDR and update it in the YAML file below

kubectl cluster-info dump |grep cidr

2. Get the node's Alibaba Cloud internal hostname (region-id.instance-id)

echo `curl -s http://100.100.100.200/latest/meta-data/region-id`.`curl -s http://100.100.100.200/latest/meta-data/instance-id`

3. Change the hostname on every node in the cluster (mandatory on all nodes)

hostnamectl set-hostname <Alibaba Cloud internal hostname>
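Or set it straight from the instance metadata (same endpoints as in step 2):

hostnamectl set-hostname "$(curl -s http://100.100.100.200/latest/meta-data/region-id).$(curl -s http://100.100.100.200/latest/meta-data/instance-id)"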

4. Deploy the Alibaba Cloud plugin (update the CIDR in the manifest)

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:cloud-controller-manager
rules:
  - apiGroups:
      - ""
    resources:
      - persistentvolumes
      - services
      - secrets
      - endpoints
      - serviceaccounts
    verbs:
      - get
      - list
      - watch
      - create
      - update
      - patch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
      - list
      - watch
      - delete
      - patch
      - update
  - apiGroups:
      - ""
    resources:
      - services/status
    verbs:
      - update
      - patch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
      - update
  - apiGroups:
      - ""
    resources:
      - events
      - endpoints
    verbs:
      - create
      - patch
      - update
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cloud-controller-manager
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:cloud-controller-manager
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
  - kind: ServiceAccount
    name: cloud-controller-manager
    namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:shared-informers
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
  - kind: ServiceAccount
    name: shared-informers
    namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:cloud-node-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
  - kind: ServiceAccount
    name: cloud-node-controller
    namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:pvl-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
  - kind: ServiceAccount
    name: pvl-controller
    namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: system:route-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:cloud-controller-manager
subjects:
  - kind: ServiceAccount
    name: route-controller
    namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: cloud-controller-manager
    tier: control-plane
  name: cloud-controller-manager
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: cloud-controller-manager
      tier: control-plane
  template:
    metadata:
      labels:
        app: cloud-controller-manager
        tier: control-plane
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
    spec:
      serviceAccountName: cloud-controller-manager
      tolerations:
        - effect: NoSchedule
          operator: Exists
          key: node-role.kubernetes.io/master
        - effect: NoSchedule
          operator: Exists
          key: node.cloudprovider.kubernetes.io/uninitialized
      nodeSelector:
        node-role.kubernetes.io/master: ""
      containers:
        - command:
          - /cloud-controller-manager
          - --kubeconfig=/etc/kubernetes/cloud-controller-manager.conf
          - --address=127.0.0.1
          - --allow-untagged-cloud=true
          - --leader-elect=true
          - --cloud-provider=alicloud
          - --use-service-account-credentials=true
          - --cloud-config=/etc/kubernetes/config/cloud-config.conf
          - --configure-cloud-routes=true
          - --allocate-node-cidrs=true
          - --route-reconciliation-period=3m
          # replace ${cluster-cidr} with your own cluster cidr
          - --cluster-cidr=172.20.0.0/16
          image: registry.cn-hangzhou.aliyuncs.com/acs/cloud-controller-manager-amd64:v1.9.3.339-g9830b58-aliyun
          livenessProbe:
            failureThreshold: 8
            httpGet:
              host: 127.0.0.1
              path: /healthz
              port: 10258
              scheme: HTTP
            initialDelaySeconds: 15
            timeoutSeconds: 15
          name: cloud-controller-manager
          resources:
            requests:
              cpu: 200m
          volumeMounts:
            - mountPath: /etc/kubernetes/
              name: k8s
            - mountPath: /etc/ssl/certs
              name: certs
            - mountPath: /etc/pki
              name: pki
            - mountPath: /etc/kubernetes/config
              name: cloud-config
      hostNetwork: true
      volumes:
        - hostPath:
            path: /etc/kubernetes
          name: k8s
        - hostPath:
            path: /etc/ssl/certs
          name: certs
        - hostPath:
            path: /etc/pki
          name: pki
        - configMap:
            defaultMode: 420
            items:
              - key: cloud-config.conf
                path: cloud-config.conf
            name: cloud-config
          name: cloud-config

Save the manifest as cloud-controller-manager.yml and apply it:

kubectl apply -f cloud-controller-manager.yml
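To confirm the controller is running (label selector taken from the DaemonSet above):

kubectl -n kube-system get pods -l app=cloud-controller-manager
kubectl -n kube-system logs -l app=cloud-controller-manager --tail=20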

V. Create the Related Services on the KubeSphere Platform

1. Create the workload

1.1 Go to cluster management

1.2 Create the workload

kind: Deployment
apiVersion: apps/v1
metadata:
  name: kubesphere-router-rukou
  namespace: kubesphere-controls-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubesphere
      component: ks-router
      project: kubesphere-controls-system
      tier: backend
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: kubesphere
        component: ks-router
        project: kubesphere-controls-system
        tier: backend
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
        sidecar.istio.io/inject: 'false'
    spec:
      containers:
        - name: nginx-ingress-controller
          image: >-
            registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v0.35.0
          args:
            - /nginx-ingress-controller
            - '--default-backend-service=$(POD_NAMESPACE)/default-http-backend'
            - '--annotations-prefix=nginx.ingress.kubernetes.io'
            - '--update-status'
            - '--update-status-on-shutdown'
            - '--configmap=$(POD_NAMESPACE)/kubesphere-router-rukou-nginx'
            - '--watch-namespace=kubesphere-controls-system'
            - '--election-id=kubesphere-router-rukou'
            - '--publish-service=kubesphere-controls-system/rukou'
            - '--report-node-internal-ip-address'
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          resources: {}
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            timeoutSeconds: 1
            periodSeconds: 10
            successThreshold: 1
            failureThreshold: 3
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          securityContext:
            runAsNonRoot: false
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      serviceAccountName: kubesphere-router-serviceaccount
      serviceAccount: kubesphere-router-serviceaccount
      securityContext: {}
      affinity: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
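If you prefer the CLI to the KubeSphere console, the same manifest can be applied directly (the filename is arbitrary):

kubectl apply -f kubesphere-router-rukou.yaml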

2. Create the Service

kind: Service
apiVersion: v1
metadata:
  name: rukou
  namespace: kubesphere-controls-system
  annotations:
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners: 'true'
    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: <your SLB instance ID>
spec:
  ports:
    - name: http-80
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https-443
      protocol: TCP
      port: 443
      targetPort: 443
  selector:
    app: kubesphere
    component: ks-router
    project: kubesphere-controls-system
    tier: backend
  type: LoadBalancer
  sessionAffinity: None
  externalTrafficPolicy: Cluster

Note: the --publish-service entry under args in the workload YAML must match the namespace and name of the Service created here exactly, otherwise the setup fails!

Note: replace the SLB instance ID in the Service manifest with your own.
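Once both objects are in place, the Service should be bound to the SLB; a quick check using the names defined above:

kubectl -n kubesphere-controls-system get svc rukou

The EXTERNAL-IP column should show the SLB instance's address.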

3. Create a gateway for each project that needs external exposure

3.1 Create an application route (Ingress)

To use a wildcard domain, switch to edit mode and change the host rule to a wildcard entry such as *.example.com.

VI. Final Access Test
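With a DNS record (or wildcard record) pointing at the SLB address, requests should flow SLB → ingress controller → workload. A minimal check, with demo.example.com as a stand-in for a real ingress host:

curl -H 'Host: demo.example.com' http://<SLB-address>/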

