Hands-on with tidb-operator
Kubernetes and TiDB are both active open-source projects, and tidb-operator is a project for orchestrating and managing TiDB clusters on Kubernetes. This article records in detail the process of deploying Kubernetes and installing tidb-operator, in the hope that it helps newcomers get started.
Environment
- ubuntu 16.04
- k8s 1.14.1
Installing k8s with kubespray
Configure passwordless SSH login
yum -y install expect
- vi /tmp/autocopy.exp
#!/usr/bin/expect
set timeout 30
set user_hostname [lindex $argv 0]
set password [lindex $argv 1]
spawn ssh-copy-id $user_hostname
expect {
"(yes/no)?"
{
send "yes\n"
expect "*assword:" { send "$password\n"}
}
"*assword:"
{
send "$password\n"
}
}
expect eof
ssh-keygen -t rsa -P ''
# add each node's host key (generic form: ssh-keyscan addedip >> ~/.ssh/known_hosts)
for i in 10.0.0.{31,32,33,40,10,20,50}; do ssh-keyscan $i >> ~/.ssh/known_hosts ; done
# copy the public key to each node; the script takes the password as its second argument
# (generic form: /tmp/autocopy.exp root@addedip password, or ssh-copy-id addedip by hand)
/tmp/autocopy.exp root@10.0.0.31
/tmp/autocopy.exp root@10.0.0.32
/tmp/autocopy.exp root@10.0.0.33
/tmp/autocopy.exp root@10.0.0.40
/tmp/autocopy.exp root@10.0.0.10
/tmp/autocopy.exp root@10.0.0.20
/tmp/autocopy.exp root@10.0.0.50
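Once the keys are in place, a quick loop can confirm that passwordless login actually works on every node (a minimal check, reusing the same IP list as above):
for i in 10.0.0.{31,32,33,40,10,20,50}; do ssh root@$i hostname; done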
Configure kubespray
pip install -r requirements.txt
cp -rfp inventory/sample inventory/mycluster
inventory/mycluster/inventory.ini
# ## Configure 'ip' variable to bind kubernetes services on a
# ## different ip than the default iface
# ## We should set etcd_member_name for etcd cluster. The node that is not a etcd member do not need to set the value, or can set the empty string value.
[all]
# node1 ansible_host=95.54.0.12 # ip=10.3.0.1 etcd_member_name=etcd1
# node2 ansible_host=95.54.0.13 # ip=10.3.0.2 etcd_member_name=etcd2
# node3 ansible_host=95.54.0.14 # ip=10.3.0.3 etcd_member_name=etcd3
# node4 ansible_host=95.54.0.15 # ip=10.3.0.4 etcd_member_name=etcd4
# node5 ansible_host=95.54.0.16 # ip=10.3.0.5 etcd_member_name=etcd5
# node6 ansible_host=95.54.0.17 # ip=10.3.0.6 etcd_member_name=etcd6
etcd1 ansible_host=10.0.0.31 etcd_member_name=etcd1
etcd2 ansible_host=10.0.0.32 etcd_member_name=etcd2
etcd3 ansible_host=10.0.0.33 etcd_member_name=etcd3
master1 ansible_host=10.0.0.40
node1 ansible_host=10.0.0.10
node2 ansible_host=10.0.0.20
node3 ansible_host=10.0.0.50
# ## configure a bastion host if your nodes are not directly reachable
# bastion ansible_host=x.x.x.x ansible_user=some_user
[kube-master]
# node1
# node2
master1
[etcd]
# node1
# node2
# node3
etcd1
etcd2
etcd3
[kube-node]
# node2
# node3
# node4
# node5
# node6
node1
node2
node3
[k8s-cluster:children]
kube-master
kube-node
Images and files required by the nodes
Some images cannot be accessed from mainland China, so they first have to be pulled locally through a proxy and then pushed to a local registry or Docker Hub, with the configuration files modified accordingly. A few components are hosted at https://storage.googleapis.com, so a new nginx server is needed to distribute those files.
- Set up an nginx server
- Install docker and docker-compose
apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
apt-get update
apt-get install docker-ce docker-ce-cli containerd.io
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
- Create the nginx docker-compose.yml
mkdir ~/distribution
vi ~/distribution/docker-compose.yml
- ~/distribution/docker-compose.yml
# distribute
version: '2'
services:
  distribute:
    image: nginx:1.15.12
    volumes:
      - ./conf.d:/etc/nginx/conf.d
      - ./distributedfiles:/usr/share/nginx/html
    network_mode: "host"
    container_name: nginx_distribute
- Create the file directory and the nginx configuration directory
mkdir ~/distribution/distributedfiles
mkdir ~/distribution/conf.d
vi ~/distribution/conf.d/open_distribute.conf
- ~/distribution/conf.d/open_distribute.conf
#open_distribute.conf
server {
    #server_name distribute.search.leju.com;
    listen 8888;

    root /usr/share/nginx/html;

    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Allow-Headers X-Requested-With;
    add_header Access-Control-Allow-Methods GET,POST,OPTIONS;

    location / {
        # index index.html;
        autoindex on;
    }

    expires off;

    location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|eot|ttf|woff|woff2|svg)$ {
        expires -1;
    }

    location ~ .*\.(js|css)?$ {
        expires -1;
    }
}
# end of public static files domain : [ distribute.search.leju.com ]
- Start the container
docker-compose up -d
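To confirm the container is up and the autoindex listing is being served, a quick check from the nginx host works (this assumes port 8888 as configured in open_distribute.conf):
curl http://127.0.0.1:8888/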
- Download and upload the required files. For the exact version numbers, refer to the kubeadm_version, kube_version, and image_arch parameters in roles/download/defaults/main.yml
wget https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubeadm
scp /tmp/kubeadm 10.0.0.60:/root/distribution/distributedfiles
wget https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/hyperkube
scp /tmp/hyperkube 10.0.0.60:/root/distribution/distributedfiles
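The versions above should match what kubespray expects; a simple grep over the file mentioned earlier shows the relevant parameters (run from the kubespray checkout):
grep -E 'kubeadm_version|kube_version|image_arch' roles/download/defaults/main.yml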
- Images that need to be downloaded and pushed to the private repository
docker pull k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.4.0
docker tag k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.4.0 jiashiwen/cluster-proportional-autoscaler-amd64:1.4.0
docker push jiashiwen/cluster-proportional-autoscaler-amd64:1.4.0

docker pull k8s.gcr.io/k8s-dns-node-cache:1.15.1
docker tag k8s.gcr.io/k8s-dns-node-cache:1.15.1 jiashiwen/k8s-dns-node-cache:1.15.1
docker push jiashiwen/k8s-dns-node-cache:1.15.1

docker pull gcr.io/google_containers/pause-amd64:3.1
docker tag gcr.io/google_containers/pause-amd64:3.1 jiashiwen/pause-amd64:3.1
docker push jiashiwen/pause-amd64:3.1

docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1
docker tag gcr.io/google_containers/kubernetes-dashboard-amd64:v1.10.1 jiashiwen/kubernetes-dashboard-amd64:v1.10.1
docker push jiashiwen/kubernetes-dashboard-amd64:v1.10.1

docker pull gcr.io/google_containers/kube-apiserver:v1.14.1
docker tag gcr.io/google_containers/kube-apiserver:v1.14.1 jiashiwen/kube-apiserver:v1.14.1
docker push jiashiwen/kube-apiserver:v1.14.1

docker pull gcr.io/google_containers/kube-controller-manager:v1.14.1
docker tag gcr.io/google_containers/kube-controller-manager:v1.14.1 jiashiwen/kube-controller-manager:v1.14.1
docker push jiashiwen/kube-controller-manager:v1.14.1

docker pull gcr.io/google_containers/kube-scheduler:v1.14.1
docker tag gcr.io/google_containers/kube-scheduler:v1.14.1 jiashiwen/kube-scheduler:v1.14.1
docker push jiashiwen/kube-scheduler:v1.14.1

docker pull gcr.io/google_containers/kube-proxy:v1.14.1
docker tag gcr.io/google_containers/kube-proxy:v1.14.1 jiashiwen/kube-proxy:v1.14.1
docker push jiashiwen/kube-proxy:v1.14.1

docker pull gcr.io/google_containers/pause:3.1
docker tag gcr.io/google_containers/pause:3.1 jiashiwen/pause:3.1
docker push jiashiwen/pause:3.1

docker pull gcr.io/google_containers/coredns:1.3.1
docker tag gcr.io/google_containers/coredns:1.3.1 jiashiwen/coredns:1.3.1
docker push jiashiwen/coredns:1.3.1
- Script for downloading and uploading the images
#!/bin/bash

privaterepo=jiashiwen

k8sgcrimages=(
cluster-proportional-autoscaler-amd64:1.4.0
k8s-dns-node-cache:1.15.1
)

gcrimages=(
pause-amd64:3.1
kubernetes-dashboard-amd64:v1.10.1
kube-apiserver:v1.14.1
kube-controller-manager:v1.14.1
kube-scheduler:v1.14.1
kube-proxy:v1.14.1
pause:3.1
coredns:1.3.1
)

for k8sgcrimageName in ${k8sgcrimages[@]} ; do
  echo $k8sgcrimageName
  docker pull k8s.gcr.io/$k8sgcrimageName
  docker tag k8s.gcr.io/$k8sgcrimageName $privaterepo/$k8sgcrimageName
  docker push $privaterepo/$k8sgcrimageName
done

for gcrimageName in ${gcrimages[@]} ; do
  echo $gcrimageName
  docker pull gcr.io/google_containers/$gcrimageName
  docker tag gcr.io/google_containers/$gcrimageName $privaterepo/$gcrimageName
  docker push $privaterepo/$gcrimageName
done
- Edit inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml and change the k8s image repository
# kube_image_repo: "gcr.io/google-containers"
kube_image_repo: "jiashiwen"
- Edit roles/download/defaults/main.yml
#dnsautoscaler_image_repo: "k8s.gcr.io/cluster-proportional-autoscaler-{{ image_arch }}"
dnsautoscaler_image_repo: "jiashiwen/cluster-proportional-autoscaler-{{ image_arch }}"

#kube_image_repo: "gcr.io/google-containers"
kube_image_repo: "jiashiwen"

#pod_infra_image_repo: "gcr.io/google_containers/pause-{{ image_arch }}"
pod_infra_image_repo: "jiashiwen/pause-{{ image_arch }}"

#dashboard_image_repo: "gcr.io/google_containers/kubernetes-dashboard-{{ image_arch }}"
dashboard_image_repo: "jiashiwen/kubernetes-dashboard-{{ image_arch }}"

#nodelocaldns_image_repo: "k8s.gcr.io/k8s-dns-node-cache"
nodelocaldns_image_repo: "jiashiwen/k8s-dns-node-cache"

#kubeadm_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kubeadm_version }}/bin/linux/{{ image_arch }}/kubeadm"
kubeadm_download_url: "http://10.0.0.60:8888/kubeadm"

#hyperkube_download_url: "https://storage.googleapis.com/kubernetes-release/release/{{ kube_version }}/bin/linux/{{ image_arch }}/hyperkube"
hyperkube_download_url: "http://10.0.0.60:8888/hyperkube"
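Before running the playbook, it is worth confirming that the distribution server actually serves both files at the URLs configured above (both requests should return 200):
curl -I http://10.0.0.60:8888/kubeadm
curl -I http://10.0.0.60:8888/hyperkube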
Run the installation
- Installation command
ansible-playbook -i inventory/mycluster/inventory.ini cluster.yml
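If the playbook fails early, it usually helps to first confirm that ansible can reach every host in the inventory (a minimal sanity check):
ansible all -i inventory/mycluster/inventory.ini -m ping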
- Reset command
ansible-playbook -i inventory/mycluster/inventory.ini reset.yml
Verify the k8s cluster
- Install kubectl
- ubuntu
sudo snap install kubectl --classic
- centos
- Open https://storage.googleapis.com/kubernetes-release/release/stable.txt in a local browser; the latest version returned is v1.14.1
- Substitute the version obtained in the previous step, v1.14.1, for $(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt) in the download address, giving the actual download URL https://storage.googleapis.com/kubernetes-release/release/v1.14.1/bin/linux/amd64/kubectl
- Upload the downloaded kubectl
scp /tmp/kubectl root@xxx:/root
- Make kubectl executable and move it into the PATH
chmod +x ./kubectl
mv ./kubectl /usr/local/bin/kubectl
- Copy the ~/.kube/config file from the master node to every client machine that needs to access the cluster
scp 10.0.0.40:/root/.kube/config ~/.kube/config
- Run the following commands to verify the cluster
kubectl get nodes
kubectl cluster-info
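Checking the system pods as well catches image-pull problems from the mirror setup early (all pods in kube-system should be Running):
kubectl get pods -n kube-system -o wide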
Deploying tidb-operator
Install helm
https://blog.csdn.net/bbwangj/article/details/81087911
- Install helm
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod 700 get_helm.sh
./get_helm.sh
- Check the helm version
helm version
- Initialize
helm init --upgrade -i registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:v2.13.1 --stable-repo-url https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
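After init, wait for the tiller pod to come up before installing any chart (a minimal check; the labels are the ones the tiller deployment normally carries):
kubectl get pods --namespace kube-system -l app=helm,name=tiller
helm version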
Provide local volumes for k8s
- Reference: https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md
When tidb-operator starts, it binds PVs for pd and tikv, so multiple directories need to be created under the discovery directory.
- Format and mount the disk
mkfs.ext4 /dev/vdb
DISK_UUID=$(blkid -s UUID -o value /dev/vdb)
mkdir /mnt/$DISK_UUID
mount -t ext4 /dev/vdb /mnt/$DISK_UUID
- Persist the mount in /etc/fstab
echo UUID=`sudo blkid -s UUID -o value /dev/vdb` /mnt/$DISK_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab
- Create multiple directories and bind-mount them into the discovery directory
for i in $(seq 1 10); do
  sudo mkdir -p /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i}
  sudo mount --bind /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i}
done
- Persist the bind mounts in /etc/fstab
for i in $(seq 1 10); do
  echo /mnt/${DISK_UUID}/vol${i} /mnt/disks/${DISK_UUID}_vol${i} none bind 0 0 | sudo tee -a /etc/fstab
done
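A quick way to confirm all ten bind mounts are in place before deploying the provisioner:
mount | grep /mnt/disks | wc -l   # should print 10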
- Create the local-volume-provisioner for tidb-operator
$ kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/local-dind/local-volume-provisioner.yaml
$ kubectl get po -n kube-system -l app=local-volume-provisioner
$ kubectl get pv --all-namespaces | grep local-storage
Install TiDB Operator
- The project uses gcr.io/google-containers/hyperkube, which cannot be reached from mainland China. The simple workaround is to push the image to Docker Hub and then modify charts/tidb-operator/values.yaml
scheduler:
  # With rbac.create=false, the user is responsible for creating this account
  # With rbac.create=true, this service account will be created
  # Also see rbac.create and clusterScoped
  serviceAccount: tidb-scheduler
  logLevel: 2
  replicas: 1
  schedulerName: tidb-scheduler
  resources:
    limits:
      cpu: 250m
      memory: 150Mi
    requests:
      cpu: 80m
      memory: 50Mi
  # kubeSchedulerImageName: gcr.io/google-containers/hyperkube
  kubeSchedulerImageName: yourrepo/hyperkube
  # This will default to matching your kubernetes version
  # kubeSchedulerImageTag: latest
- TiDB Operator uses CRDs to extend Kubernetes, so before using TiDB Operator the TidbCluster custom resource definition must be created.
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/master/manifests/crd.yaml
kubectl get crd tidbclusters.pingcap.com
- Install tidb-operator
$ git clone https://github.com/pingcap/tidb-operator.git
$ cd tidb-operator
$ helm install charts/tidb-operator --name=tidb-operator --namespace=tidb-admin
$ kubectl get pods --namespace tidb-admin -l app.kubernetes.io/instance=tidb-operator
Deploy TiDB
helm install charts/tidb-cluster --name=demo --namespace=tidb
watch kubectl get pods --namespace tidb -l app.kubernetes.io/instance=demo -o wide
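Besides watching the pods, the cluster object itself can be inspected through the CRD created earlier (assuming the demo cluster in the tidb namespace as above):
kubectl get tidbclusters -n tidb
kubectl describe tidbcluster demo -n tidb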
Verification
Install the MySQL client
- Reference: https://dev.mysql.com/doc/refman/8.0/en/linux-installation.html
- centos installation
wget https://dev.mysql.com/get/mysql80-community-release-el7-3.noarch.rpm
yum localinstall mysql80-community-release-el7-3.noarch.rpm -y
yum repolist all | grep mysql
yum-config-manager --disable mysql80-community
yum-config-manager --enable mysql57-community
yum install mysql-community-client
- ubuntu installation
wget https://dev.mysql.com/get/mysql-apt-config_0.8.13-1_all.deb
dpkg -i mysql-apt-config_0.8.13-1_all.deb
apt update
# select the MySQL version
dpkg-reconfigure mysql-apt-config
apt install mysql-client -y
Map the TiDB port
- Check the tidb service
kubectl get svc --all-namespaces
- Forward the TiDB port
# local access only
kubectl port-forward svc/demo-tidb 4000:4000 --namespace=tidb
# access from other hosts
kubectl port-forward --address 0.0.0.0 svc/demo-tidb 4000:4000 --namespace=tidb
- Log in to MySQL for the first time
mysql -h 127.0.0.1 -P 4000 -u root -D test
- Change the TiDB root password
SET PASSWORD FOR 'root'@'%' = 'wD3cLpyO5M';
FLUSH PRIVILEGES;
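A simple query confirms the connection really terminates at TiDB rather than a plain MySQL server (tidb_version() is a TiDB built-in; the password set above is assumed):
mysql -h 127.0.0.1 -P 4000 -u root -p -e "SELECT tidb_version();"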
Pitfalls encountered
- Installing k8s in mainland China
Most k8s images live on gcr.io and cannot be reached from mainland China. The basic approach is to push the images to Docker Hub or a private registry; the k8s deployment sections above cover this in detail, so it is not repeated here.
- tidb-operator local storage configuration
When the operator starts a cluster, pd and tikv need to bind local storage. If there are not enough mount points, the pods cannot find PVs to bind during startup and stay stuck in Pending or ContainerCreating state. For the detailed configuration, see the "Sharing a disk filesystem by multiple filesystem PVs" section of https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/blob/master/docs/operations.md: bind multiple mount directories on the same disk so that the operator has enough PVs to bind.
- MySQL client version issue
At the moment TiDB only supports the MySQL 5.7 client; the 8.0 client fails with ERROR 1105 (HY000): Unknown charset id 255.
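Check which client is installed before connecting; the version string should report 5.7.x:
mysql --version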