Installing an Elasticsearch cluster on k8s

Refer to the official documentation:

https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html

Install Elasticsearch

  • Node planning

Deploy one Elasticsearch node on each of master, node1 and node2; these three serve as the ES cluster's master-eligible nodes.

Because the master node carries a default taint, ES pods cannot be scheduled onto it, so the taint has to be removed:

kubectl describe node master | grep Taints

kubectl taint nodes master node-role.kubernetes.io/master-
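
Note that on newer Kubernetes versions the control-plane node is tainted with node-role.kubernetes.io/control-plane rather than (or in addition to) node-role.kubernetes.io/master; if the grep above shows that taint, remove it the same way (adjust the node name to your cluster):

kubectl taint nodes master node-role.kubernetes.io/control-plane-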

  • Storage planning

Local hostPath storage on each node, using the /es/data path (the path referenced in the PV manifests below).

  • Create the PVs
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume-1
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Recycle
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/es/data"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume-2
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Recycle
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/es/data"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume-3
  labels:
    type: local
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Recycle
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/es/data"

  • Install the ECK operator (CRDs first, then the operator itself)
kubectl create -f https://download.elastic.co/downloads/eck/2.6.1/crds.yaml

kubectl apply -f https://download.elastic.co/downloads/eck/2.6.1/operator.yaml

kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
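
Once the operator log shows it has started, the operator pod in the elastic-system namespace should be Running:

kubectl get pods -n elastic-system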

  • Compose es-cluster.yaml (adapted from the official docs; modify as needed)
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: local-es
  namespace: es-cluster
spec:
  version: 8.6.1
  nodeSets:
  - name: default
    count: 3
    config:
      node.roles: ["master", "data"]
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        volumeMode: Filesystem
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20G
    podTemplate:
      spec:
        initContainers:
        - name: sysctl
          securityContext:
            privileged: true
            runAsUser: 0
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  elasticsearch.k8s.elastic.co/cluster-name: local-es
              topologyKey: kubernetes.io/hostname

The podAntiAffinity rule guarantees that each replica is scheduled onto a different node (necessary because all the PVs use the same hostPath).
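
The es-cluster namespace referenced in the manifest has to exist before the resource is applied. A minimal sketch, assuming the manifest above is saved as es-cluster.yaml:

kubectl create namespace es-cluster
kubectl apply -f es-cluster.yaml

# watch the cluster come up; HEALTH should eventually turn green
kubectl get elasticsearch -n es-cluster
kubectl get pods -n es-cluster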

  • Verify Elasticsearch
# get the password for the elastic user
kubectl get secret local-es-es-elastic-user -n es-cluster -o go-template='{{.data.elastic | base64decode}}'
# start a port-forward to the HTTP service
kubectl port-forward --address 0.0.0.0 -n es-cluster service/local-es-es-http 9200
# verify over HTTPS
curl -u "elastic:<password>" -k "https://<your-ip>:9200"
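
For a quick summary of node count and shard status, the _cluster/health API can be queried with the same credentials:

curl -u "elastic:<password>" -k "https://<your-ip>:9200/_cluster/health?pretty"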

Install Kibana
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: local-kibana
  namespace: es-cluster
spec:
  version: 8.6.1
  count: 1
  elasticsearchRef:
    name: local-es
    namespace: es-cluster
  podTemplate:
    spec:
      containers:
      - name: kibana
        env:
          - name: NODE_OPTIONS
            value: "--max-old-space-size=2048"
        resources:
          requests:
            memory: 1Gi
            cpu: 1
          limits:
            memory: 1.5Gi
            cpu: 2
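
Apply the manifest and wait for Kibana to report a green health status; kibana.yaml is an assumed filename for the manifest above:

kubectl apply -f kibana.yaml
kubectl get kibana -n es-cluster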
  • Verify Kibana
kubectl port-forward --address 0.0.0.0 -n es-cluster service/local-kibana-kb-http 5601
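
With the port-forward active, Kibana answers on port 5601 over self-signed TLS; log in from a browser as the elastic user with the password retrieved earlier. A quick reachability check from the forwarding host:

curl -k -I https://localhost:5601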
Integrate with Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  namespace: es-cluster
  name: kibana-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
  - host: kibana.test.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: local-kibana-kb-http
            port:
              number: 5601
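
Apply the Ingress and test it through the controller. kibana-ingress.yaml is an assumed filename, and the /etc/hosts entry below stands in for real DNS, assuming the nginx ingress controller is reachable on port 80 at <ingress-controller-ip>:

kubectl apply -f kibana-ingress.yaml

# map the hostname to the ingress controller, then open it in a browser
echo "<ingress-controller-ip> kibana.test.local" >> /etc/hosts
curl -I http://kibana.test.local/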