Preface

This article uses Stolon to build a highly available PostgreSQL setup, mainly to provide an HA database for Harbor. For building Harbor itself, see the earlier posts on deploying Harbor on Kubernetes without pitfalls and on Harbor registry replication; later posts will cover Redis high availability and a highly available Harbor setup.

Comparison of solutions

A brief comparison of several PostgreSQL high-availability solutions:

First, repmgr's algorithm has obvious shortcomings and is not a mainstream distributed consensus algorithm, so it is ruled out right away.

Compared with Crunchy, Stolon and Patroni are more cloud native; Crunchy is built on pgPool.

Compared with Stolon, Crunchy and Patroni have more users and provide an Operator for later management and scaling.

Based on the simple comparison above, I ended up choosing Stolon; the author of the article I referenced chose Patroni, and in practice the difference does not seem significant.

1. Stolon overview

keeper: manages a PostgreSQL instance and converges it to the clusterview computed by the sentinel(s).

sentinel: discovers and monitors the keepers and computes the optimal clusterview.

proxy: the client access point. It enforces connections to the correct PostgreSQL master and forcibly closes connections to old, unelected masters.

Stolon uses etcd or Consul as its main cluster state store; in the deployment below, the Kubernetes API (via ConfigMaps) is used as the store backend instead.
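For orientation, once the manifests below are deployed, applications never talk to a keeper directly; they always connect through the proxy Service, which follows the elected master. A minimal sketch, assuming the stolon-proxy-service Service and the postgres superuser defined later in this article:

# From any pod in the same namespace: the proxy routes this connection to the current master
psql --host stolon-proxy-service --port 5432 postgres -U postgres -W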

2. Installation

git clone https://github.com/sorintlab/stolon.git

cd XXX/stolon/examples/kubernetes

As shown in the figure below:

(figure: stolon)

If you are interested, you can also follow the official guide: https://github.com/sorintlab/stolon/blob/master/examples/kubernetes/README.md

The following are the parts of the YAML files that need attention and modification.

Set the PostgreSQL superuser name in stolon-keeper.yaml:

- name: STKEEPER_PG_SU_USERNAME
  value: "postgres"

Set the Stolon data volume (volumeClaimTemplates) in stolon-keeper.yaml:

volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes:
    - "ReadWriteOnce"
    resources:
      requests:
        storage: "512Mi"
    storageClassName: nfs
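Note that storageClassName: nfs assumes a StorageClass named nfs, backed by a working NFS provisioner, already exists in the cluster; Stolon does not create it. A minimal sketch of such a StorageClass, assuming the nfs-subdir-external-provisioner is installed (the provisioner name is a placeholder and must match whatever NFS provisioner you actually run):

# Hypothetical StorageClass named "nfs"; adjust the provisioner to your environment
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner   # placeholder
reclaimPolicy: Retain
EOF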

Set the database user password in secret.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: stolon
type: Opaque
data:
  password: cGFzc3dvcmQx
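The password field holds a base64-encoded (not encrypted) value; cGFzc3dvcmQx is simply password1 encoded. To use your own password, encode it the same way, or let kubectl build the Secret for you:

# Produce the value for data.password (prints cGFzc3dvcmQx for password1)
echo -n "password1" | base64
# Or create the Secret directly from a literal instead of editing secret.yaml
kubectl create secret generic stolon --from-literal=password=password1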

Below is the complete Stolon manifest I have assembled; you can modify it and use it directly.

# This is an example and generic rbac role definition for stolon. It could be
# fine tuned and split per component.
# The required permission per component should be:
# keeper/proxy/sentinel: update their own pod annotations
# sentinel/stolonctl: get, create, update configmaps
# sentinel/stolonctl: list components pods
# sentinel/stolonctl: get components pods annotations
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: stolon
  namespace: default
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - configmaps
  - events
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: stolon
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: stolon
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stolon-sentinel
spec:
  replicas: 2
  template:
    metadata:
      labels:
        component: stolon-sentinel
        stolon-cluster: kube-stolon
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      containers:
      - name: stolon-sentinel
        image: sorintlab/stolon:master-pg10
        command:
        - "/bin/bash"
        - "-ec"
        - |
          exec gosu stolon stolon-sentinel
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: STSENTINEL_CLUSTER_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['stolon-cluster']
        - name: STSENTINEL_STORE_BACKEND
          value: "kubernetes"
        - name: STSENTINEL_KUBE_RESOURCE_KIND
          value: "configmap"
        - name: STSENTINEL_METRICS_LISTEN_ADDRESS
          value: "0.0.0.0:8080"
        ## Uncomment this to enable debug logs
        #- name: STSENTINEL_DEBUG
        #  value: "true"
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Secret
metadata:
  name: stolon
type: Opaque
data:
  password: cGFzc3dvcmQx
---
# PetSet was renamed to StatefulSet in k8s 1.5
# apiVersion: apps/v1alpha1
# kind: PetSet
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: stolon-keeper
spec:
  serviceName: "stolon-keeper"
  replicas: 2
  template:
    metadata:
      labels:
        component: stolon-keeper
        stolon-cluster: kube-stolon
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: stolon-keeper
        image: sorintlab/stolon:master-pg10
        command:
        - "/bin/bash"
        - "-ec"
        - |
          # Generate our keeper uid using the pod index
          IFS='-' read -ra ADDR <<< "$(hostname)"
          export STKEEPER_UID="keeper${ADDR[-1]}"
          export POD_IP=$(hostname -i)
          export STKEEPER_PG_LISTEN_ADDRESS=$POD_IP
          export STOLON_DATA=/stolon-data
          chown stolon:stolon $STOLON_DATA
          exec gosu stolon stolon-keeper --data-dir $STOLON_DATA
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: STKEEPER_CLUSTER_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['stolon-cluster']
        - name: STKEEPER_STORE_BACKEND
          value: "kubernetes"
        - name: STKEEPER_KUBE_RESOURCE_KIND
          value: "configmap"
        - name: STKEEPER_PG_REPL_USERNAME
          value: "repluser"
          # Or use a password file like in the below supersuser password
        - name: STKEEPER_PG_REPL_PASSWORD
          value: "replpassword"
        - name: STKEEPER_PG_SU_USERNAME
          value: "postgres"
        - name: STKEEPER_PG_SU_PASSWORDFILE
          value: "/etc/secrets/stolon/password"
        - name: STKEEPER_METRICS_LISTEN_ADDRESS
          value: "0.0.0.0:8080"
        # Uncomment this to enable debug logs
        #- name: STKEEPER_DEBUG
        #  value: "true"
        ports:
        - containerPort: 5432
        - containerPort: 8080
        volumeMounts:
        - mountPath: /stolon-data
          name: data
        - mountPath: /etc/secrets/stolon
          name: stolon
      volumes:
      - name: stolon
        secret:
          secretName: stolon
  # Define your own volumeClaimTemplate. This example uses dynamic PV provisioning with a storage class named "standard" (so it will works by default with minikube)
  # In production you should use your own defined storage-class and configure your persistent volumes (statically or dynamically using a provisioner, see related k8s doc).
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - "ReadWriteOnce"
      resources:
        requests:
          storage: "512Mi"
      storageClassName: nfs
---
apiVersion: v1
kind: Service
metadata:
  name: stolon-proxy-service
spec:
  ports:
  - port: 5432
    targetPort: 5432
  selector:
    component: stolon-proxy
    stolon-cluster: kube-stolon
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: stolon-proxy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        component: stolon-proxy
        stolon-cluster: kube-stolon
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
    spec:
      containers:
      - name: stolon-proxy
        image: sorintlab/stolon:master-pg10
        command:
        - "/bin/bash"
        - "-ec"
        - |
          exec gosu stolon stolon-proxy
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: STPROXY_CLUSTER_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['stolon-cluster']
        - name: STPROXY_STORE_BACKEND
          value: "kubernetes"
        - name: STPROXY_KUBE_RESOURCE_KIND
          value: "configmap"
        - name: STPROXY_LISTEN_ADDRESS
          value: "0.0.0.0"
        - name: STPROXY_METRICS_LISTEN_ADDRESS
          value: "0.0.0.0:8080"
        ## Uncomment this to enable debug logs
        #- name: STPROXY_DEBUG
        #  value: "true"
        ports:
        - containerPort: 5432
        - containerPort: 8080
        readinessProbe:
          tcpSocket:
            port: 5432
          initialDelaySeconds: 10
          timeoutSeconds: 5

3. Deploying Stolon

kubectl apply -f stolon.yaml
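After applying, you can check that the sentinel, keeper, and proxy pods come up; the keepers will wait until the cluster is initialized in the next step. For example:

# All Stolon components carry the stolon-cluster=kube-stolon label
kubectl get pods -l stolon-cluster=kube-stolon -o wide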

Initialize the cluster (this step initializes the Stolon cluster's state in the store; the official explanation is quoted below):

All the stolon components wait for an existing clusterdata entry in the store. So the first time you have to initialize a new cluster. For more details see the cluster initialization doc. You can do this step at every moment, now or after having started the stolon components.

You can execute stolonctl in different ways:

as a one shot command executed inside a temporary pod:

kubectl run -i -t stolonctl --image=sorintlab/stolon:master-pg10 --restart=Never --rm -- /usr/local/bin/stolonctl --cluster-name=kube-stolon --store-backend=kubernetes --kube-resource-kind=configmap init

from a machine that can access the store backend:

stolonctl --cluster-name=kube-stolon --store-backend=kubernetes --kube-resource-kind=configmap init

later from one of the pods running the stolon components.

Execute:

kubectl run -i -t stolonctl --image=sorintlab/stolon:master-pg10 --restart=Never --rm -- /usr/local/bin/stolonctl --cluster-name=kube-stolon --store-backend=kubernetes --kube-resource-kind=configmap init

As shown in the figure below, the deployment succeeded:

(figure: stolon pod)
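Besides looking at the pods, you can also query the cluster state with stolonctl, for example again from a temporary pod with the same flags as the init command above:

kubectl run -i -t stolonctl-status --image=sorintlab/stolon:master-pg10 --restart=Never --rm -- /usr/local/bin/stolonctl --cluster-name=kube-stolon --store-backend=kubernetes --kube-resource-kind=configmap status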

4. Uninstalling the PostgreSQL database

kubectl delete -f stolon.yaml

kubectl delete pvc data-stolon-keeper-0 data-stolon-keeper-1
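If you ran more keeper replicas, there is one PVC per replica (volumeClaimTemplates names them data-<statefulset-name>-<ordinal>); you can list them to make sure none are left behind:

# Keeper PVCs are named data-stolon-keeper-0, data-stolon-keeper-1, ...
kubectl get pvc | grep stolon-keeper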

5. Verifying that PostgreSQL was installed successfully (with a few simple tests)

1. Verify data replication

Connect to the master and create a test table. Ports 30543 and 30544 used below assume NodePort Services exposing the master and slave entry points; these are not part of the manifest above, so adapt them to your own Services and replace <node-ip> with one of your node addresses.

psql --host <node-ip> --port 30543 postgres -U stolon -W

postgres=# create table test (id int primary key not null,

value text not null);

CREATE TABLE

postgres=# insert into test values (1, 'value1');

INSERT 0 1

postgres=# select * from test;

 id | value
----+--------
  1 | value1
(1 row)

You can also exec into a pod and run PostgreSQL commands:

kubectl exec -ti stolon-proxy-5977cdbcfc-csnkq bash

# log in with psql
psql --host localhost --port 5432 postgres -U postgres
\l            # list all databases
\c dbname     # switch to another database
create table test (id int primary key not null, value text not null);
CREATE TABLE
insert into test values (1, 'value1');
INSERT 0 1
select * from test;
\d            # list all tables in the current database
\q            # quit psql

Connect to the slave and check the data. You can also check the logs to confirm that the request was actually handled by the slave.

psql --host <node-ip> --port 30544 postgres -U stolon -W

postgres=# select * from test;

 id | value
----+--------
  1 | value1
(1 row)

2. Testing failover

This case follows the StatefulSet example in the official repository.

In short, to simulate the master going down, we first delete the master's StatefulSet (without cascading) and then delete the master's pod.

kubectl delete statefulset stolon-keeper --cascade=false

kubectl delete pod stolon-keeper-0

Then, in the sentinel logs, we can see that a new master is elected:

no keeper info available db=cb96f42d keeper=keeper0

no keeper info available db=cb96f42d keeper=keeper0

master db is failed db=cb96f42d keeper=keeper0

trying to find a standby to replace failed master

electing db as the new master db=087ce88a keeper=keeper1
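To watch this election happen live, you can tail the sentinel logs yourself; for example, pick one of the sentinel pods and follow it (the pod name below is a placeholder):

kubectl get pods -l component=stolon-sentinel
kubectl logs -f <one-of-the-stolon-sentinel-pods>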

Now, if we repeat the previous command in the two terminals from before, we see the following output:

postgres=# select * from test;

server closed the connection unexpectedly

This probably means the server terminated abnormally

before or while processing the request.

The connection to the server was lost. Attempting reset: Succeeded.

postgres=# select * from test;

 id | value
----+--------
  1 | value1
(1 row)

The Kubernetes Service removes the unavailable pod from its endpoints and routes requests to the available pods, so new read connections are routed to the healthy pods.
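You can observe this directly by inspecting the Service's endpoints; only ready proxy pods are listed there:

# Show which proxy pod IPs currently back the Service
kubectl get endpoints stolon-proxy-service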

You can also use chaoskube to simulate random pod failures (worth testing in a pre-production environment).

Another good way to test cluster resilience is chaoskube. Chaoskube is a small service that periodically kills random pods in the cluster. It can also be deployed with a Helm chart.

helm install --set labels="release=factual-crocodile,component!=factual-crocodile-etcd" --set interval=5m stable/chaoskube

This command runs chaoskube, which deletes one pod every 5 minutes. It selects pods whose release label is factual-crocodile but ignores the etcd pods.

This article follows the official guide and mainly lays the groundwork for the upcoming Harbor high-availability setup. If you found it helpful, please give it a like; follow-up posts will cover Redis and Harbor high availability.
