# Deploying Kubernetes with kubespray
## Environment Setup
### Network Environment

Operating system: CentOS 7 x64
Account: root (for ease of execution)
## Set Hostnames and Disable the Firewall

Set the hostname on each node:
```bash
hostnamectl --static set-hostname k8s-node01   # on node 1
hostnamectl --static set-hostname k8s-node02   # on node 2
hostnamectl --static set-hostname k8s-node03   # on node 3
```
Disable the firewall and set SELinux to permissive:
```bash
systemctl disable firewalld
systemctl stop firewalld
sed -i 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/selinux/config
```
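A quick way to confirm both changes took effect (my own check, not part of the original steps). Note that the sed edit to /etc/selinux/config only applies after a reboot, so setenforce is used to switch the running system immediately:

```bash
systemctl is-active firewalld    # expect: inactive
systemctl is-enabled firewalld   # expect: disabled
setenforce 0                     # apply permissive mode to the running kernel
getenforce                       # expect: Permissive
```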
## Base Software Setup

### Configure the Aliyun Docker Repository and 163 yum Repository
Run the following steps on every machine.
Since CentOS was installed as a minimal system, first install wget on every node:
```bash
yum install -y wget
```
### Configure the Aliyun Docker Registry Mirror
Set up the Aliyun registry accelerator:
```bash
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://j4ckfrfn.mirror.aliyuncs.com"]
}
EOF
```
Download the install script and install Docker 1.13.x:
```bash
curl https://releases.rancher.com/install-docker/1.13.sh | sh
```
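Once the script finishes, it is worth verifying the Docker version and that the mirror from daemon.json was picked up (a sanity check of mine, not from the original article):

```bash
systemctl enable docker && systemctl start docker
docker version --format '{{.Server.Version}}'   # expect 1.13.x
docker info | grep -A1 'Registry Mirrors'       # expect the aliyuncs mirror
```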
## Installing Ansible

### About Ansible

kubespray is written on top of Ansible. Ansible is a relatively new automation tool built on Python; it combines the strengths of many earlier ops tools (Puppet, CFEngine, Chef, Func, Fabric) and provides batch system configuration, batch software deployment, and batch command execution. Ansible itself works through modules and has no batch-deployment capability of its own: the modules it runs do the actual work, while Ansible only provides the framework.

### Configuration Before Installing Ansible
#### Install the Ansible Packages

In this tutorial Ansible is installed on the k8s-node01 node. Managed nodes must run a Python newer than 2.5, and Ansible depends on netaddr and Jinja2 >= 2.8.
Pick one host as the Ansible controller; it does not have to be part of the k8s cluster, but it cannot be Windows (unsupported).
```bash
# Install epel
yum install -y epel-release
# Install ansible plus python and git (the epel repo must be installed before ansible)
yum install -y python-pip python34 python34-pip deltarpm ansible git
```
#### Configure a Domestic Python Package Mirror

Edit ~/.pip/pip.conf:
```bash
mkdir ~/.pip
cat >> ~/.pip/pip.conf <<-'EOF'
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple
EOF
```
Install the Python dependencies:
```bash
pip install --upgrade pip
pip install netaddr
pip install --upgrade Jinja2
```
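A short check that the toolchain is ready (this assumes the pip packages land in the same Python that Ansible uses):

```bash
ansible --version                                              # confirm ansible installed from epel
python -c 'import netaddr, jinja2; print(jinja2.__version__)'  # expect >= 2.8
```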
#### Set Up Passwordless SSH Login
On the Ansible control host, generate a key pair with ssh-keygen -t rsa:
```bash
ssh-keygen -t rsa -P ''
```
Copy the public key to each node IP:
```bash
IP=(10.0.0.8 10.0.0.9 10.0.0.10)
for x in ${IP[*]}; do ssh-copy-id -i ~/.ssh/id_rsa.pub $x; done
```
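To confirm passwordless login works before handing the hosts to Ansible, the loop below should print each remote hostname without prompting (BatchMode makes ssh fail instead of asking for a password; my own check, not from the article):

```bash
for x in ${IP[*]}; do ssh -o BatchMode=yes root@$x hostname; done
```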
## Configure kubespray
### Download and Unpack the kubespray Source
```bash
wget https://github.com/kubernetes-incubator/kubespray/archive/v2.3.0.tar.gz
tar -zxvf v2.3.0.tar.gz
mv kubespray-2.3.0/ kubespray/
```
### Upload the gcr.io and quay.io Images to Aliyun
Omitted here; refer to the separate article on uploading the gcr.io and quay.io images to Aliyun.
This article uses the Aliyun repository addresses from that article and skips the steps for getting around the firewall; for those, see the Docker-based deployment of shadowsocks.
### Replace the Image References
Because gcr.io and quay.io are blocked by the GFW, installing kubespray as-is would fail, so the images have to be relayed through an Aliyun repository.
Search the kubespray source for files that reference gcr.io/google_containers and quay.io images and replace those references with the images previously uploaded to Aliyun. The replacement script is as follows:
```bash
touch ~/replaceimg.sh
tee ~/replaceimg.sh <<-'EOF'
grc_image_files=(
    ./kubespray/extra_playbooks/roles/dnsmasq/templates/dnsmasq-autoscaler.yml.j2
    ./kubespray/extra_playbooks/roles/download/defaults/main.yml
    ./kubespray/extra_playbooks/roles/kubernetes-apps/ansible/defaults/main.yml
    ./kubespray/roles/download/defaults/main.yml
    ./kubespray/roles/dnsmasq/templates/dnsmasq-autoscaler.yml.j2
    ./kubespray/roles/kubernetes-apps/ansible/defaults/main.yml
)
for file in ${grc_image_files[@]} ; do
    sed -i 's/gcr.io\/google_containers/registry.cn-hangzhou.aliyuncs.com\/szss_k8s/g' $file
done

quay_image_files=(
    ./kubespray/extra_playbooks/roles/download/defaults/main.yml
    ./kubespray/roles/download/defaults/main.yml
)
for file in ${quay_image_files[@]} ; do
    sed -i 's/quay.io\/coreos\//registry.cn-hangzhou.aliyuncs.com\/szss_quay_io\/coreos-/g' $file
    sed -i 's/quay.io\/calico\//registry.cn-hangzhou.aliyuncs.com\/szss_quay_io\/calico-/g' $file
    sed -i 's/quay.io\/l23network\//registry.cn-hangzhou.aliyuncs.com\/szss_quay_io\/l23network-/g' $file
done
EOF
```
Run the script:
```bash
sh ~/replaceimg.sh
```
If you run the script on macOS, sed -i needs an empty string argument after it, e.g. sed -i '' 's/a/b/g' file.
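After running the script, a grep over the edited directories should come back empty if every reference was rewritten (a verification step I added, not from the original article):

```bash
# Both commands should print nothing if the replacement succeeded
grep -rn 'gcr.io/google_containers' ./kubespray/roles ./kubespray/extra_playbooks
grep -rn 'quay.io/' ./kubespray/roles/download ./kubespray/extra_playbooks/roles/download
```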
### Configuration File
The configuration file is located at ~/kubespray/inventory/group_vars/k8s-cluster.yml:
```yaml
# Kubernetes configuration dirs and system namespace.
# Those are where all the additional config stuff goes
# the kubernetes normally puts in /srv/kubernets.
# This puts them in a sane location and namespace.
# Editting those values will almost surely break something.
# k8s configuration directory
kube_config_dir: /etc/kubernetes
# k8s script directory
kube_script_dir: "{{ bin_dir }}/kubernetes-scripts"
# k8s manifests directory
kube_manifest_dir: "{{ kube_config_dir }}/manifests"
# namespace
system_namespace: kube-system

# Logging directory (sysvinit systems)
# k8s log directory
kube_log_dir: "/var/log/kubernetes"

# This is where all the cert scripts and certs will be located
# k8s certificate directory
kube_cert_dir: "{{ kube_config_dir }}/ssl"

# This is where all of the bearer tokens will be stored
# k8s tokens directory
kube_token_dir: "{{ kube_config_dir }}/tokens"

# This is where to save basic auth file
# k8s user auth file directory
kube_users_dir: "{{ kube_config_dir }}/users"

# allow anonymous access
kube_api_anonymous_auth: false

## Change this to use another Kubernetes version, e.g. a current beta release
# k8s version
kube_version: v1.6.7

# Where the binaries will be downloaded.
# Note: ensure that you've enough disk space (about 1G)
# download directory
local_release_dir: "/tmp/releases"

# Random shifts for retrying failed ops like pushing/downloading
# retry count for image downloads
retry_stagger: 5

# This is the group that the cert creation scripts chgrp the
# cert files to. Not really changable...
kube_cert_group: kube-cert

# Cluster Loglevel configuration
# k8s log level
kube_log_level: 2

# Users to create for basic auth in Kubernetes API via HTTP
# Optionally add groups for user
# user configuration
kube_api_pwd: "changeme"
kube_users:
  kube:
    pass: "{{kube_api_pwd}}"
    role: admin
  root:
    pass: "{{kube_api_pwd}}"
    role: admin
#   groups:
#    - system:masters

## It is possible to activate / deactivate selected authentication methods (basic auth, static token auth)
#kube_oidc_auth: false
#kube_basic_auth: false
#kube_token_auth: false

## Variables for OpenID Connect Configuration https://kubernetes.io/docs/admin/authentication/
## To use OpenID you have to deploy additional an OpenID Provider (e.g Dex, Keycloak, ...)
# kube_oidc_url: https:// ...
# kube_oidc_client_id: kubernetes
## Optional settings for OIDC
# kube_oidc_ca_file: {{ kube_cert_dir }}/ca.pem
# kube_oidc_username_claim: sub
# kube_oidc_groups_claim: groups

# Choose network plugin (calico, weave or flannel)
# Can also be set to 'cloud', which lets the cloud provider setup appropriate routing
# network plugin
kube_network_plugin: calico

# weave's network password for encryption
# if null then no network encryption
# you can use --extra-vars to pass the password in command line
weave_password: EnterPasswordHere

# Weave uses consensus mode by default
# Enabling seed mode allow to dynamically add or remove hosts
# https://www.weave.works/docs/net/latest/ipam/
weave_mode_seed: false

# This two variable are automatically changed by the weave's role, do not manually change these values
# To reset values:
# weave_seed: uninitialized
# weave_peers: uninitialized
weave_seed: uninitialized
weave_peers: uninitialized

# Enable kubernetes network policies
enable_network_policy: false

# Kubernetes internal network for services, unused block of space.
kube_service_addresses: 10.233.0.0/18

# internal network. When used, it will assign IP
# addresses from this range to individual pods.
# This network must be unused in your network infrastructure!
kube_pods_subnet: 10.233.64.0/18

# internal network node size allocation (optional). This is the size allocated
# to each node on your network. With these defaults you should have
# room for 4096 nodes with 254 pods per node.
kube_network_node_prefix: 24

# The port the API Server will be listening on.
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
kube_apiserver_port: 6443 # (https)
kube_apiserver_insecure_port: 8080 # (http)

# DNS configuration.
# Kubernetes cluster name, also will be used as DNS domain
cluster_name: cluster.local
# Subdomains of DNS domain to be resolved via /etc/resolv.conf for hostnet pods
ndots: 2
# Can be dnsmasq_kubedns, kubedns or none
dns_mode: kubedns
# Can be docker_dns, host_resolvconf or none
resolvconf_mode: docker_dns
# Deploy netchecker app to verify DNS resolve as an HTTP service
deploy_netchecker: false
# Ip address of the kubernetes skydns service
skydns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(3)|ipaddr('address') }}"
dns_server: "{{ kube_service_addresses|ipaddr('net')|ipaddr(2)|ipaddr('address') }}"
dns_domain: "{{ cluster_name }}"

# Path used to store Docker data
docker_daemon_graph: "/var/lib/docker"

## A string of extra options to pass to the docker daemon.
## This string should be exactly as you wish it to appear.
## An obvious use case is allowing insecure-registry access
## to self hosted registries like so:
docker_options: "--insecure-registry={{ kube_service_addresses }} --graph={{ docker_daemon_graph }} {{ docker_log_opts }}"
docker_bin_dir: "/usr/bin"

# Settings for containerized control plane (etcd/kubelet/secrets)
etcd_deployment_type: docker
kubelet_deployment_type: docker
cert_management: script
vault_deployment_type: docker

# K8s image pull policy (imagePullPolicy)
k8s_image_pull_policy: IfNotPresent

# Monitoring apps for k8s
efk_enabled: false

# Helm deployment
helm_enabled: false

# dnsmasq
# dnsmasq_upstream_dns_servers:
#  - /resolvethiszone.with/10.0.4.250
#  - 8.8.8.8

# Enable creation of QoS cgroup hierarchy, if true top level QoS and pod cgroups are created. (default true)
# kubelet_cgroups_per_qos: true

# A comma separated list of levels of node allocatable enforcement to be enforced by kubelet.
# Acceptible options are 'pods', 'system-reserved', 'kube-reserved' and ''. Default is "".
# kubelet_enforce_node_allocatable: pods
```
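For reference, kube_apiserver_ip, dns_server and skydns_server above are all carved out of kube_service_addresses by Ansible's ipaddr filter (which is why netaddr was installed earlier); with the default 10.233.0.0/18 they resolve to 10.233.0.1, 10.233.0.2 and 10.233.0.3. A one-liner to check the arithmetic locally (my own sketch, not from the article):

```bash
# Prints 10.233.0.1 -- the first usable address of the service network
ansible localhost -m debug -a "msg={{'10.233.0.0/18'|ipaddr('net')|ipaddr(1)|ipaddr('address')}}"
```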
### Generate the Cluster Inventory
```bash
# Define the cluster IPs
IP=(
10.0.0.88
10.0.0.90
10.0.0.91
)
# Use the Python script bundled with kubespray to generate the inventory
CONFIG_FILE=./kubespray/inventory/inventory.cfg python3 ./kubespray/contrib/inventory_builder/inventory.py ${IP[*]}
```
### Check the Cluster Inventory
```bash
cat ./kubespray/inventory/inventory.cfg
```
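With the three IPs above, the generated file should look roughly like the sketch below; the node names and group assignments are chosen by inventory.py, and the exact layout can differ between kubespray versions:

```ini
[all]
node1 ansible_host=10.0.0.88 ip=10.0.0.88
node2 ansible_host=10.0.0.90 ip=10.0.0.90
node3 ansible_host=10.0.0.91 ip=10.0.0.91

[kube-master]
node1
node2

[kube-node]
node1
node2
node3

[etcd]
node1
node2
node3

[k8s-cluster:children]
kube-node
kube-master
```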
### Install the Cluster
```bash
cd ~/kubespray
ansible-playbook -i inventory/inventory.cfg cluster.yml -b -v --private-key=~/.ssh/id_rsa
```
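When the playbook completes, standard kubectl checks on a master node will confirm the cluster came up (not part of the original article):

```bash
kubectl get nodes                  # all nodes should be Ready
kubectl get pods -n kube-system    # core components should be Running
```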
### Add New Nodes
If you already have a cluster built this way, expanding it is very easy: just edit the cluster inventory to add the new node and rerun the playbook with one extra flag. For example, to add a new node6:
```bash
vim inventory/inventory.cfg
```

```ini
# Add the new node6 to the Kubernetes node group
[all]
node1 ansible_host=192.168.1.11 ip=192.168.1.11
node2 ansible_host=192.168.1.12 ip=192.168.1.12
node3 ansible_host=192.168.1.13 ip=192.168.1.13
node4 ansible_host=192.168.1.14 ip=192.168.1.14
node5 ansible_host=192.168.1.15 ip=192.168.1.15
node6 ansible_host=192.168.1.16 ip=192.168.1.16

[kube-master]
node1
node2
node3
node5

[kube-node]
node1
node2
node3
node4
node5
node6

[etcd]
node1
node2
node3

[k8s-cluster:children]
kube-node
kube-master

[calico-rr]
```
Then rerun the cluster playbook, making sure to add the --limit flag:
```bash
ansible-playbook -i inventory/inventory.cfg cluster.yml -b -v --private-key=~/.ssh/id_rsa --limit node6
```
After a short wait, node6 joins the existing cluster. If several nodes are being added, separate them with commas, e.g. --limit node5,node6. Only the newly added nodes are touched during this run; the existing cluster is unaffected, so this gives dynamic cluster scale-out (masters can be added the same way).
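The same kubectl check as before confirms the new node registered:

```bash
kubectl get nodes node6   # should report Ready once the join completes
```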
## Appendix: Image Download Commands
Files in which the image references need to be changed:
- kubespray/roles/kubernetes-apps/ansible/defaults/main.yml
- kubespray/roles/download/defaults/main.yml
- kubespray/extra_playbooks/roles/download/defaults/main.yml
- kubespray/inventory/group_vars/k8s-cluster.yml
- kubespray/roles/dnsmasq/templates/dnsmasq-autoscaler.yml
The full list of pull commands for the images used by this deployment:

```bash
# etcd
docker pull quay.io/coreos/etcd:v3.2.4
# flannel
docker pull quay.io/coreos/flannel:v0.8.0
# flannel-cni
docker pull quay.io/coreos/flannel-cni:v0.2.0
# calico ctl
docker pull quay.io/calico/ctl:v1.5.0
# calico node
docker pull quay.io/calico/node:v2.5.0
# calico cni
docker pull quay.io/calico/cni:v1.10.0
# calico_policy_image
docker pull quay.io/calico/kube-policy-controller:v0.7.0
# calico_rr
docker pull quay.io/calico/routereflector:v0.4.0
# hyperkube_image_repo
docker pull quay.io/coreos/hyperkube:v1.8.1_coreos.0
# pod_infra
docker pull gcr.io/google_containers/pause-amd64:3.0
# netcheck
docker pull quay.io/l23network/k8s-netchecker-agent:v1.0
# netcheck_server
docker pull quay.io/l23network/k8s-netchecker-server:v1.0
# kubedns_image_repo
docker pull gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.5
# dnsmasq_nanny_image_repo
docker pull gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.5
# dnsmasq_sidecar_image_repo
docker pull gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.5
# kubednsautoscaler_image_repo
docker pull gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.1.1
# elasticsearch
docker pull gcr.io/google_containers/elasticsearch:v2.4.1
# fluentd
docker pull gcr.io/google_containers/fluentd-elasticsearch:1.22
# kibana
docker pull gcr.io/google_containers/kibana:v4.6.1
# tiller_image_repo
docker pull gcr.io/kubernetes-helm/tiller:v2.2.2
# Dashboard
docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:1.6.3
```
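If you need to mirror these images to Aliyun yourself (the step referenced above but omitted), the general pattern is pull, re-tag, push. A minimal sketch for two of the images, assuming you are already logged in to the Aliyun registry and following the same renaming convention the sed script expects (the org part of a quay.io path becomes a prefix of the repository name):

```bash
docker pull gcr.io/google_containers/pause-amd64:3.0
docker tag  gcr.io/google_containers/pause-amd64:3.0 registry.cn-hangzhou.aliyuncs.com/szss_k8s/pause-amd64:3.0
docker push registry.cn-hangzhou.aliyuncs.com/szss_k8s/pause-amd64:3.0

docker pull quay.io/coreos/etcd:v3.2.4
docker tag  quay.io/coreos/etcd:v3.2.4 registry.cn-hangzhou.aliyuncs.com/szss_quay_io/coreos-etcd:v3.2.4
docker push registry.cn-hangzhou.aliyuncs.com/szss_quay_io/coreos-etcd:v3.2.4
```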