Automatic postgres database backup with k8s && minio
With k8s scheduled jobs (CronJobs) we can easily build scheduled-task applications, and with minio's S3-compatible, cloud-native storage we can conveniently back up data files over HTTP.
The following is a simple demonstration of how to integrate the two.
Environment preparation
The docker image is modified from https://github.com/Remigius2011/pg-dump ; the main changes are dropping the schema dump and adding gzip compression.
dockerfile
FROM remigius65/pg-dump
COPY backup.sh /usr/bin/backup.sh
RUN chmod +x /usr/bin/backup.sh
backup.sh
The main change is the dump logic. Note that the backup uses .pgpass for the database credentials, which avoids any password prompt, while the mc client credentials are passed in through environment variables (a sketch of such a .pgpass helper follows the script).
#!/bin/sh
. /usr/bin/setpwd.sh
export DUMP_FILE="$BACKUP_DIR/$DB_ENV-$(date +"%F-%H%M%S").dump"
if [ ! -d "$BACKUP_DIR" ]; then
echo mkdir -p "$BACKUP_DIR"
mkdir -p "$BACKUP_DIR"
fi
echo "pg_dump -h $PG_HOST -p $PG_PORT -U $PG_USER $PG_DB -f $DUMP_FILE"
pg_dump -h $PG_HOST -p $PG_PORT -U $PG_USER $PG_DB | gzip > $DUMP_FILE
if [ -n "S3_HOST" ]; then
export MC_HOSTS_store="$S3_PROTOCOL://$S3_ACCESS_KEY:$S3_SECRET_KEY@$S3_HOST"
echo "mc cp $DUMP_FILE store/$S3_BUCKET"
mc cp $DUMP_FILE store/$S3_BUCKET
fi
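backup.sh sources /usr/bin/setpwd.sh from the base image, whose contents are not shown in this post. A minimal sketch of what such a helper typically does, assuming the PG_* variables from the environment above (the default values below are assumptions):
#!/bin/sh
# sketch of a setpwd.sh-style helper: write ~/.pgpass so pg_dump never prompts for a password
# .pgpass line format: hostname:port:database:username:password
export PG_PORT="${PG_PORT:-5432}"
export PG_USER="${PG_USER:-postgres}"
export BACKUP_DIR="${BACKUP_DIR:-/pgbackup}"
export DB_ENV="${DB_ENV:-prod}"
echo "$PG_HOST:$PG_PORT:$PG_DB:$PG_USER:$PG_PASSWORD" > ~/.pgpass
chmod 600 ~/.pgpass   # pg_dump ignores .pgpass unless the file mode is 0600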
docker-compose file
Run the dependent services with docker-compose:
version: "3"
services:
postgres:
image: postgres:10.7
ports:
- "5432:5432"
environment:
- "POSTGRES_PASSWORD:dalong"
volumes:
- ./db_data:/var/lib/postgresql/data
backup:
image: dalongrong/pg-dump
environment:
- "PG_HOST=postgres"
- "PG_DB=postgres"
- "PG_PASSWORD=dalong"
- "S3_HOST=s3:9000"
- "S3_ACCESS_KEY=dalongdemo"
- "S3_SECRET_KEY=dalongdemo"
- "S3_PROTOCOL=http"
s3:
image: minio/minio
command: server /export
ports:
- "9000:9000"
volumes:
- ./data:/export
- ./config:/root/.minio
environment:
- "MINIO_ACCESS_KEY=dalongdemo"
- "MINIO_SECRET_KEY=dalongdemo"
Testing
Start pg && minio:
docker-compose up -d s3 postgres
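backup.sh uploads to store/$S3_BUCKET (pgbackup in the run below), and the target bucket needs to exist in minio before mc cp can upload into it, so create it first. A sketch using the mc client with the access keys from the compose file; the alias mirrors the MC_HOSTS_store variable used by the script:
export MC_HOSTS_store="http://dalongdemo:dalongdemo@localhost:9000"
mc mb store/pgbackup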
Add some test data
Anything will do; add whatever data you like.
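For example, with psql against the compose postgres instance (the table and rows here are just hypothetical sample data):
PGPASSWORD=dalong psql -h localhost -p 5432 -U postgres postgres <<'SQL'
CREATE TABLE demo_users (id serial PRIMARY KEY, name text);
INSERT INTO demo_users (name) VALUES ('dalong'), ('rong');
SQL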
Back up the test data
docker-compose up backup
Test result
docker-compose up backup
Starting pg-s3-backup_backup_1 ... done
Attaching to pg-s3-backup_backup_1
backup_1 | pg_dump -h postgres -p 5432 -U postgres postgres -f /pgbackup/prod-2019-03-18-103712.dump
backup_1 | mc cp /pgbackup/prod-2019-03-18-103712.dump store/pgbackup
backup_1 | `/pgbackup/prod-2019-03-18-103712.dump` -> `store/pgbackup/prod-2019-03-18-103712.dump`
backup_1 | Total: 1.03 KB, Transferred: 1.03 KB, Speed: 33.52 KB/s
pg-s3-backup_backup_1 exited with code 0
MinIO console (screenshot of the uploaded dump)
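Besides the minio web console, the upload can also be checked from the command line; a sketch, reusing the same alias via the MC_HOSTS_store variable:
export MC_HOSTS_store="http://dalongdemo:dalongdemo@localhost:9000"
mc ls store/pgbackup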
k8s cronjob
JSON format
{
  "kind": "CronJob",
  "apiVersion": "batch/v1beta1",
  "metadata": {
    "name": "pg-backup-job"
  },
  "spec": {
    "schedule": "0 0 1 * *",
    "concurrencyPolicy": "Replace",
    "suspend": false,
    "jobTemplate": {
      "metadata": {
        "creationTimestamp": null
      },
      "spec": {
        "template": {
          "metadata": {
            "creationTimestamp": null,
            "labels": {
              "apprepositories.kubeapps.com/repo-name": "pg-backup-job"
            }
          },
          "spec": {
            "containers": [
              {
                "name": "gitlab-pg-backup",
                "image": "dalongrong/pg-dump-gzip",
                "env": [
                  {
                    "name": "PG_DB",
                    "value": "postgres"
                  },
                  {
                    "name": "PG_HOST",
                    "value": "postgres"
                  },
                  {
                    "name": "PG_PASSWORD",
                    "value": "dalong"
                  },
                  {
                    "name": "PG_PORT",
                    "value": "5432"
                  },
                  {
                    "name": "S3_ACCESS_KEY",
                    "value": "dalongdemo"
                  },
                  {
                    "name": "S3_HOST",
                    "value": "s3:9000"
                  },
                  {
                    "name": "S3_PROTOCOL",
                    "value": "http"
                  },
                  {
                    "name": "S3_SECRET_KEY",
                    "value": "dalongdemo"
                  }
                ],
                "imagePullPolicy": "IfNotPresent"
              }
            ],
            "restartPolicy": "OnFailure"
          }
        }
      }
    },
    "successfulJobsHistoryLimit": 3,
    "failedJobsHistoryLimit": 1
  }
}
YAML format
The YAML was generated from the JSON with a json-to-yaml conversion tool, so the formatting is a bit rough.
kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: gitlab-backup-job
spec:
  schedule: "0 0 1 * *"
  concurrencyPolicy: Replace
  suspend: false
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          creationTimestamp: null
          labels:
            apprepositories.kubeapps.com/repo-name: gitlab-backup-job
        spec:
          containers:
          - name: gitlab-pg-backup
            image: dalongrong/pg-dump-gzip
            env:
            - name: PG_DB
              value: postgres
            - name: PG_HOST
              value: postgres
            - name: PG_PASSWORD
              value: dalong
            - name: PG_PORT
              value: "5432"
            - name: S3_ACCESS_KEY
              value: dalongdemo
            - name: S3_HOST
              value: "s3:9000"
            - name: S3_PROTOCOL
              value: http
            - name: S3_SECRET_KEY
              value: dalongdemo
            imagePullPolicy: IfNotPresent
          restartPolicy: OnFailure
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
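To deploy the CronJob and trigger a run immediately instead of waiting for the monthly schedule, something like the following works (the manifest file name is an assumption; the CronJob name matches the YAML above; on newer clusters the apiVersion would need to be batch/v1 instead of batch/v1beta1):
kubectl apply -f pg-backup-cronjob.yaml
# fire a one-off job from the cronjob template to test it right away
kubectl create job pg-backup-manual --from=cronjob/gitlab-backup-job
kubectl logs job/pg-backup-manual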
Running it
Note that the application is not fully deployed on k8s here; only the backup program runs as a k8s scheduled job, while pg and s3 are external services (see the sketch below for one way to make those hostnames resolvable from the cluster).
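Since PG_HOST=postgres and S3_HOST=s3:9000 are plain service names, they have to resolve inside the cluster; one option is an ExternalName service per host pointing at the external endpoints. A sketch, where the external hostnames are placeholders/assumptions:
kubectl create service externalname postgres --external-name pg.example.com
kubectl create service externalname s3 --external-name minio.example.com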
Container logs (screenshot)
Notes
By combining k8s scheduled jobs with minio's S3 capability, we can easily put together a simple backup and restore solution.
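For the restore side (not covered by the script above), a minimal sketch, assuming the same mc alias and the gzip-compressed dump produced in the test run:
export MC_HOSTS_store="http://dalongdemo:dalongdemo@s3:9000"
mc cp store/pgbackup/prod-2019-03-18-103712.dump /tmp/restore.dump
# the dump is gzip-compressed plain SQL, so pipe it back through psql
gunzip -c /tmp/restore.dump | PGPASSWORD=dalong psql -h postgres -p 5432 -U postgres postgres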
References
Original post: https://www.cnblogs.com/rongfengliang/p/10554058.html