Kubernetes Local PV deployment of Elasticsearch 7.6.2 (without ECK)
I have recently been experimenting with installing ES on K8s. ECK is convenient, but it also hides quite a few details, so I wanted to try other approaches as well. This is a record of installing ES using a Local PV.
Environment
- Kubernetes 1.18.2 cluster, 2 nodes: 1 master + 1 node (Kuboard's installation guide is recommended: https://kuboard.cn/install/install-k8s.html)
- Elasticsearch 7.6.2
About Local PV
- You only need to set storageClassName in the volumeClaimTemplates. With the kubernetes.io/no-provisioner provisioner there is no dynamic provisioning: each PVC is automatically bound to an available pre-created PV whose storage class name matches.
Preparation
- The worker node's hostname is zqw02
- Log in to zqw02 and create the local volume directories
mkdir -p /mnt/localpv/es7-0 /mnt/localpv/es7-1 /mnt/localpv/es7-2
- Grant permissions on the directories
chmod -R 777 /mnt/localpv/
Create a namespace for the ES cluster
- ns.yaml
[root@zqw01 es1]# cat ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: logging
- Apply the manifest
kubectl create -f ns.yaml
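Optionally verify the namespace before moving on (assuming kubectl is configured against this cluster):

```shell
# The namespace should appear with STATUS Active
kubectl get namespace logging
```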
Create the StorageClass
- stc.yaml
[root@zqw01 es1]# cat stc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
- Apply the manifest
kubectl create -f stc.yaml
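It is worth confirming the binding mode, since WaitForFirstConsumer is what delays PV binding until a pod is actually scheduled, which matters for local volumes that are pinned to one node:

```shell
# VOLUMEBINDINGMODE should read WaitForFirstConsumer
kubectl get storageclass local-storage
```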
Create a pool of PVs (as many as you need)
- PV manifests
[root@zqw01 es1]# cat lp0.yaml lp1.yaml lp2.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-storage-pv-0
  labels:
    name: local-storage-pv-0
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/localpv/es7-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - zqw02
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-storage-pv-1
  labels:
    name: local-storage-pv-1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/localpv/es7-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - zqw02
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-storage-pv-2
  labels:
    name: local-storage-pv-2
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/localpv/es7-2
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - zqw02
- Apply the PV manifests
kubectl create -f lp0.yaml
kubectl create -f lp1.yaml
kubectl create -f lp2.yaml
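Before any consumer exists, the three PVs should all sit in the Available state. A quick check (assuming the manifests above were applied):

```shell
# All three PVs should show STATUS Available and STORAGECLASS local-storage
kubectl get pv -o wide
```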
Create the ES cluster Service
[root@zqw01 es1]# cat svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch7
  namespace: logging
  labels:
    app: elasticsearch7
spec:
  selector:
    app: elasticsearch7
  type: NodePort
  ports:
    - port: 9200
      name: rest
      nodePort: 32000
    - port: 9300
      name: inter-node
- Apply the manifest
kubectl create -f svc.yaml
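The Service has no endpoints yet, since the StatefulSet does not exist at this point. Once the ES pods are Ready, the pod IPs backing it can be listed with:

```shell
# Lists the pod IPs behind the Service (empty until the ES pods are Ready)
kubectl get endpoints elasticsearch7 -n logging
```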
Create the ES StatefulSet
[root@zqw01 es1]# cat sts.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es7-cluster
  namespace: logging
spec:
  serviceName: elasticsearch7
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch7
  template:
    metadata:
      labels:
        app: elasticsearch7
    spec:
      containers:
        - name: elasticsearch7
          image: elasticsearch:7.6.2
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 9200
              name: rest
              protocol: TCP
            - containerPort: 9300
              name: inter-node
              protocol: TCP
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: discovery.seed_hosts
              value: "es7-cluster-0.elasticsearch7,es7-cluster-1.elasticsearch7,es7-cluster-2.elasticsearch7"
            - name: cluster.initial_master_nodes
              value: "es7-cluster-0,es7-cluster-1,es7-cluster-2"
            - name: ES_JAVA_OPTS
              value: "-Xms1g -Xmx1g"
      initContainers:
        - name: fix-permissions
          image: busybox
          command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
          securityContext:
            privileged: true
          volumeMounts:
            - name: data
              mountPath: /usr/share/elasticsearch/data
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "local-storage"
        resources:
          requests:
            storage: 20Gi
- Apply the manifest
kubectl create -f sts.yaml
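The pods come up one at a time, starting with es7-cluster-0, because a StatefulSet waits for each replica to be Running and Ready before starting the next. This can be watched with:

```shell
# Watch the pods start in ordinal order: es7-cluster-0, then -1, then -2
kubectl get pods -n logging -w

# If a pod stays Pending, its events show scheduling/PV-binding problems
kubectl describe pod es7-cluster-0 -n logging
```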
Verification
- Check the Service
[root@zqw01 es1]# kubectl get svc -n logging
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch7 NodePort 10.96.55.170 <none> 9200:32000/TCP,9300:31147/TCP 41m
- Cluster status
[root@zqw01 es1]# curl 10.96.55.170:9200
{
"name" : "es7-cluster-1",
"cluster_name" : "k8s-logs",
"cluster_uuid" : "R-OGPazVTBaemL0M6jRJ1g",
"version" : {
"number" : "7.6.2",
"build_flavor" : "default",
"build_type" : "docker",
"build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
"build_date" : "2020-03-26T06:34:37.794943Z",
"build_snapshot" : false,
"lucene_version" : "8.4.0",
"minimum_wire_compatibility_version" : "6.8.0",
"minimum_index_compatibility_version" : "6.0.0-beta1"
},
"tagline" : "You Know, for Search"
}
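Beyond the root endpoint, the cluster health API confirms that all three nodes actually joined (using the same ClusterIP as above; a green status assumes no unassigned shards):

```shell
# Expect "status" : "green" and "number_of_nodes" : 3 once the cluster has formed
curl 10.96.55.170:9200/_cluster/health?pretty
```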
- Check the PVC/PV bindings
[root@zqw01 es1]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-storage-pv-0 20Gi RWO Retain Bound logging/data-es7-cluster-1 local-storage 47m
local-storage-pv-1 20Gi RWO Retain Bound logging/data-es7-cluster-0 local-storage 47m
local-storage-pv-2 20Gi RWO Retain Bound logging/data-es7-cluster-2 local-storage 47m
[root@zqw01 es1]#
[root@zqw01 es1]# kubectl get pvc -n logging
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
data-es7-cluster-0 Bound local-storage-pv-1 20Gi RWO local-storage 42m
data-es7-cluster-1 Bound local-storage-pv-0 20Gi RWO local-storage 41m
data-es7-cluster-2 Bound local-storage-pv-2 20Gi RWO local-storage 40m
As you can see, which PV a given PVC binds to has a degree of randomness, although the node is of course fixed.
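If you want to see which local directory each replica ended up on, the PVC-to-PV mapping can be printed directly (a sketch using kubectl's custom-columns output):

```shell
# Print each PVC alongside the PV it bound to
kubectl get pvc -n logging -o custom-columns=PVC:.metadata.name,PV:.spec.volumeName
```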