Building a K8s Cluster with kubeadm


Environment

OS: Ubuntu 18.04
Docker: 19.03.8
K8s: 1.18

Machines: one Master Node, two Worker Nodes

Production-grade Setup

Multi-node etcd, multiple Master Nodes, multiple Worker Nodes:

etcd runs in cluster mode outside the Kubernetes cluster, and all Nodes connect to it. All master nodes are set up in HA mode and connect to all worker nodes.
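
As a rough sketch of how this maps onto kubeadm's config file: a ClusterConfiguration for HA masters with external etcd might look like the following, where the load balancer address and etcd hostnames are placeholders, not values from this setup:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
controlPlaneEndpoint: "lb.example.local:6443"  # load balancer in front of all masters (placeholder)
etcd:
  external:                                    # etcd cluster running outside Kubernetes
    endpoints:
      - https://etcd1.example.local:2379
      - https://etcd2.example.local:2379
      - https://etcd3.example.local:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key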

Installation Options

kubeadm: works in any environment
kubespray: based on Ansible
kops: for AWS and GCE

Preparation (all three machines)

1. If a firewall is installed, disable it

sudo ufw disable
# or
sudo systemctl stop ufw

2. If the SELinux module is installed, disable SELinux

sudo vi /etc/selinux/config
SELINUX=permissive 

3. Disable swap

sudo swapoff -a

sudo vim /etc/fstab  # comment out the line containing swap with "#"

# To keep the swap partition from being mounted at boot, comment out the corresponding entry in /etc/fstab:
sudo sed -i 's/.*swap.*/#&/' /etc/fstab

Confirm with free -h that swap shows 0.
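
For example (sample output; the memory numbers will differ on your machines):

free -h
#               total        used        free      shared  buff/cache   available
# Mem:           7.8G        1.2G        4.9G        16M        1.7G        6.3G
# Swap:            0B          0B          0B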

4. Set the hostname and hosts

hostnamectl
sudo hostnamectl set-hostname k8s-master
cat /etc/hostname # k8s-master

sudo vim /etc/hosts
xxx.xxx.17.174 k8s-master 

5. Edit cloud.cfg (optional; keeps cloud-init from resetting the hostname on reboot)

sudo vim /etc/cloud/cloud.cfg

preserve_hostname: true

Installing kubeadm, kubelet, and kubectl via APT (all three machines)

1. Add the repository

sudo apt autoremove && sudo apt-get update && sudo apt-get install -y apt-transport-https curl

sudo curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

# Add the Aliyun mirror to /etc/apt/sources.list.d/kubernetes.list

sudo tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main
EOF

sudo apt-get update

2. Install

# List the available versions
apt-cache madison kubeadm

# Install kubelet, kubeadm, and kubectl
sudo apt-get install -y kubelet kubeadm kubectl

# Hold the packages so apt upgrades do not replace them
sudo apt-mark hold kubelet kubeadm kubectl

# Check the kubelet and kubeadm versions
kubelet --version
#Kubernetes v1.18.2
kubeadm version
# kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-16T11:54:15Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Master Node Initialization

1. List the images required to deploy Kubernetes v1.18

kubeadm config images list --kubernetes-version=v1.18.0
# Output
W0515 15:52:03.567384   10706 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.0
k8s.gcr.io/kube-controller-manager:v1.18.0
k8s.gcr.io/kube-scheduler:v1.18.0
k8s.gcr.io/kube-proxy:v1.18.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

2. Check the Docker service's Cgroup Driver with docker info; if it is cgroupfs, change it to systemd

Add "exec-opts": ["native.cgroupdriver=systemd"] to daemon.json:

sudo vim /etc/docker/daemon.json
{
	"registry-mirrors": ["https://mfrhehq4.mirror.aliyuncs.com"],
	"insecure-registries": ["172.17.0.2:5000"],
	"exec-opts": ["native.cgroupdriver=systemd"],
	"log-driver": "json-file",
	"log-opts": {
		"max-size": "10m",
		"max-file": "1"
	}
}

sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl status docker

3. Initialize the cluster with kubeadm

Using command-line flags:

sudo kubeadm init \
		--kubernetes-version=1.18.0  \
		--apiserver-advertise-address=120.133.xx.xxx  \
		--image-repository registry.aliyuncs.com/google_containers  \
		--service-cidr=10.10.0.0/16 \
		--pod-network-cidr=10.122.0.0/16
  • kubernetes-version: the k8s version to install
  • apiserver-advertise-address: the API server address; use the master machine's IP
  • image-repository: the image registry; use the Aliyun mirror address
  • service-cidr: the Service network range
  • pod-network-cidr: the Pod network range

Or using a configuration file:

kubeadm config print init-defaults > kubeadm.yml
# Modify the following fields
advertiseAddress: 120.133.xx.xxx
imageRepository: registry.aliyuncs.com/google_containers
kubernetesVersion: v1.18.0
networking:
  podSubnet: 10.122.0.0/16     # matches --pod-network-cidr above
  serviceSubnet: 10.10.0.0/16  # matches --service-cidr above

kubeadm config images list --config kubeadm.yml
kubeadm config images pull --config kubeadm.yml
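
Then initialize from the file (kubeadm supports this via the --config flag):

sudo kubeadm init --config kubeadm.yml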

If initialization fails, reset with the following command and try again:

sudo kubeadm reset

After running the init command, you can follow the initialization steps in the output:

[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.10.0.1 120.133.xx.xxx]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [120.133.xx.xxx 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [120.133.xx.xxx 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0519 16:03:48.470893   11406 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0519 16:03:48.471779   11406 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.002150 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1jyhoi.xxx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 120.133.xx.xxx:6443 --token 1jyhoi.x8yvhnfizayfjn03 \
    --discovery-token-ca-cert-hash sha256:xxx

Roughly, the steps are:

Preparation (init, preflight):

  • Use k8s version 1.18
  • Run pre-flight checks and pull the images required to set up the k8s cluster

kubelet-start:

  • Write the kubelet environment file to /var/lib/kubelet/kubeadm-flags.env
  • Write the kubelet configuration file to /var/lib/kubelet/config.yaml
  • Start the kubelet
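
An optional sanity check at this point (standard file-inspection and systemd commands):

cat /var/lib/kubelet/kubeadm-flags.env
sudo systemctl status kubelet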

certs:

Generate the certificates used by Kubernetes and store them under /etc/kubernetes/pki

  • The certificate path is /etc/kubernetes/pki
  • Generate certificates and keys for ca, apiserver, apiserver-kubelet-client, front-proxy-client, etcd, and so on
  • The apiserver and etcd serving certs are signed for their DNS names and IPs
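
The DNS names and IPs baked into the apiserver certificate can be inspected with standard openssl:

openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 "Subject Alternative Name"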

kubeconfig:

Generate the KubeConfig files under /etc/kubernetes; components use the corresponding file to communicate with each other

  • The kubeconfig path is /etc/kubernetes
  • Write admin.conf
  • Write kubelet.conf
  • Write controller-manager.conf
  • Write scheduler.conf

control-plane:

Install the Master components from the YAML files in the /etc/kubernetes/manifests directory

  • The manifest path is /etc/kubernetes/manifests
  • Create a static Pod for kube-apiserver
  • Create a static Pod for kube-controller-manager
  • Create a static Pod for kube-scheduler
  • Create a static Pod for etcd, in /etc/kubernetes/manifests
  • The kubelet starts the control plane from these static Pods (see the check below)
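
On the master, the four static Pod manifests should now be visible:

ls /etc/kubernetes/manifests
# etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml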

etcd: install the etcd service from /etc/kubernetes/manifests/etcd.yaml.

wait-control-plane: wait for the kubelet to start the Master components deployed by control-plane.

apiclient: check that the Master components are healthy.

upload-config: store the configuration used in the kubeadm-config ConfigMap.

kubelet: configure the kubelet using the ConfigMap "kubelet-config-1.18".

mark-control-plane: label the current node with the master role and taint it as unschedulable, so Pods are not scheduled on the Master node by default.
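
The taint can be confirmed with:

kubectl describe node k8s-master | grep -i taints
# Taints:             node-role.kubernetes.io/master:NoSchedule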

bootstrap-token: generate a token and record it; it is used later by kubeadm join to add nodes to the cluster.

addons: install the CoreDNS and kube-proxy add-ons.

Then, following the prompts at the end of the output above, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the nodes:

kubectl get node
# NAME         STATUS   ROLES    AGE     VERSION
# k8s-master   NotReady   master   3d1h   v1.18.2

kubectl get pod --all-namespaces
# kube-system   coredns-7ff77c879f-9cf2s             0/1     Pending   0          3d1h
# kube-system   coredns-7ff77c879f-mm9lf             0/1     Pending   0          3d1h
# kube-system   etcd-k8s-master                      1/1     Running   0          3d1h
# kube-system   kube-apiserver-k8s-master            1/1     Running   0          3d1h
# kube-system   kube-controller-manager-k8s-master   1/1     Running   0          3d1h
# kube-system   kube-proxy-7trp7                     1/1     Running   0          3d1h
# kube-system   kube-scheduler-k8s-master            1/1     Running   0          3d1h

The last lines of the kubeadm init output:

kubeadm join 120.133.xx.xxx:6443 --token 1jyhoi.x8yvhnfizayfjn03 \
    --discovery-token-ca-cert-hash sha256:xxx

are the command a node runs to join the cluster. The token in it is valid for 24 hours and can be listed with:

kubeadm token list

If it has expired, generate a new token with:

kubeadm token create

The discovery-token-ca-cert-hash value does not change; it can be computed with:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
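
As a shortcut, kubeadm can also print a complete, ready-to-run join command (a new token plus the CA cert hash) in one step:

kubeadm token create --print-join-command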

4. Install the network add-on

kubectl apply -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml

kubectl get node
# NAME         STATUS   ROLES    AGE     VERSION
# k8s-master   Ready    master   6d23h   v1.18.2

kubectl get pod --all-namespaces

NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-789f6df884-cw8k6   1/1     Running   0          6m51s
kube-system   calico-node-fn2vr                          1/1     Running   0          6m51s
kube-system   coredns-7ff77c879f-9cf2s                   1/1     Running   0          6d23h
kube-system   coredns-7ff77c879f-mm9lf                   1/1     Running   0          6d23h
kube-system   etcd-k8s-master                            1/1     Running   0          6d23h
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          6d23h
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          6d23h
kube-system   kube-proxy-7trp7                           1/1     Running   0          6d23h
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          6d23h

kubectl get cs
#NAME                 STATUS    MESSAGE             ERROR
#controller-manager   Healthy   ok                  
#scheduler            Healthy   ok                  
#etcd-0               Healthy   {"health":"true"}  

The cluster is now in a healthy state.

Joining Worker Nodes to the Cluster

If a node joined the cluster before and you want to remove it, use the following steps:

# Run on the master
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
# Run on the machine of the node being removed
sudo kubeadm reset

Run on each of the two Worker Nodes:

sudo kubeadm join 120.133.xx.xxx:6443 --token s7q162.xxx     --discovery-token-ca-cert-hash sha256:xxx

Then run on the Master Node:

kubectl get nodes

#NAME          STATUS   ROLES    AGE    VERSION
#k8s-master    Ready    master   7d1h   v1.18.2
#k8s-node172   Ready    <none>   10m    v1.18.3
#k8s-node173   Ready    <none>   70s    v1.18.3

# More detailed output
kubectl get pod -n kube-system -o wide


# Details of a specific node
kubectl describe node nodename

# Details of a specific pod
kubectl describe pods podname -n kube-system

Testing

1. Verify kube-apiserver, kube-controller-manager, kube-scheduler, and the pod network

# Deploy an Nginx Deployment (one replica by default)
kubectl create deployment nginx --image=nginx:alpine
# Scale up
kubectl scale deployment nginx --replicas=5
# Scale down
kubectl scale deployment nginx --replicas=3

# Check that it worked
kubectl get pods -l app=nginx -o wide
kubectl get deployments
kubectl get componentstatus

# Delete the application
kubectl delete deployment nginx
kubectl delete svc nginx

2. Verify kube-proxy

kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get services nginx
# NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
# nginx   NodePort   10.10.146.91   <none>        80:30146/TCP   4s

curl http://120.133.xx.xxx:30146

3. Verify DNS, Pods, and the network

# Run busybox in interactive mode
kubectl run -it curl --image=radial/busyboxplus:curl

# Once the [ root@curl:/ ]$ prompt appears, verify DNS with nslookup nginx
nslookup nginx
# Server:    10.10.0.10
# Address 1: 10.10.0.10 kube-dns.kube-system.svc.cluster.local

# Name:      nginx
# Address 1: 10.10.146.91 nginx.default.svc.cluster.local

# Access by service name to verify that kube-proxy works
curl http://nginx/
# <!DOCTYPE html>
# <html>
# ...

Press Ctrl+P then Ctrl+Q to exit.

Removing the Master Taint

Lift the restriction that prevents Pods from being scheduled on the master node:

kubectl taint nodes --all node-role.kubernetes.io/master-

## Output
node/ubuntu1 untainted
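
If you later want to restore the default behavior, the taint can be re-added (using this cluster's master node name):

kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule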

Tearing Down the Cluster

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
sudo kubeadm reset

Dashboard

https://www.cnblogs.com/binghe001/p/12823763.html
https://www.kubernetes.org.cn/7189.html

Download and Deploy

1. Download the YAML manifest

sudo mkdir -p /usr/k8s/Dashboard

sudo chmod -R 777 /usr/k8s

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

If this fails to resolve, add an entry to hosts and run it again:

sudo vim /etc/hosts

199.232.68.133 raw.githubusercontent.com

2. Edit recommended.yaml

In the Service section, expose the port via NodePort:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000  # added
  selector:
    k8s-app: kubernetes-dashboard

3. Manually create certificates

mkdir dashboard-certs

cd dashboard-certs/

kubectl create namespace kubernetes-dashboard

# Generate the private key, then the CSR
openssl genrsa -out dashboard.key 2048
openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'

This may fail with "Can't load /home/username/.rnd into RNG"; fix it with:

cd /home/username
openssl rand -writerand .rnd

openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt

kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard

4. Apply the manifests

# If it was created before, delete it first
# kubectl delete -f recommended.yaml

kubectl create -f recommended.yaml 

namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

5. Check the deployment

kubectl get pods -A  -o wide
kubectl get service -n kubernetes-dashboard  -o wide

kubectl get pod,svc -n kubernetes-dashboard # status of the dashboard's pods and services

Creating a Dashboard Admin and Granting Permissions

1. Create dashboard-admin

sudo vim dashboard-admin.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard

kubectl create -f dashboard-admin.yaml

serviceaccount/dashboard-admin created

2. Grant permissions

sudo vim dashboard-admin-bind-cluster-role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard

# Save and exit

kubectl create -f dashboard-admin-bind-cluster-role.yaml

3. Get the dashboard-admin user's token

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')

Access

Access over HTTPS:

https://xxx.xxx.17.174:30000/

Enter the token obtained above.


Kubernetes Service Deployment Best Practices


https://mp.weixin.qq.com/s?__biz=Mzg5NjA1MjkxNw==&mid=2247485273&idx=1&sn=475dbaa35aa5046e269218777a83d390&chksm=c007bc83f77035956754dbbc5a8dd78ec84a48b2cafb1c2c30e10650b533662e0e70887be0bd&token=2085612385&lang=zh_CN&scene=21#wechat_redirect


Kubelet Commands

https://mp.weixin.qq.com/s?__biz=MzA5OTAyNzQ2OA==&mid=2649707294&idx=2&sn=c9c741c7b915333205fd36c5f46bd98e&chksm=88936c7dbfe4e56bc877dd1c644659dab618a89ccdeee35195f4e0631710429510ded784fbd5&scene=126&sessionid=1593326570&key=8c1e0ba910e0936aa93d18061480b3739dc725c9e337fb066bc6811da310da93f4c1469b1aa900bee9d6fa87dfab0efb55aebe5e0cba8584ff8043782895cece66c4d343bee1f875db1d8aeb002da6b9&ascene=1&uin=MTkzOTU2NzUwMg%3D%3D&devicetype=Windows+10+x64&version=62090070&lang=zh_CN&exportkey=AT6%2BQT%2FdUS9kNgcOW42660g%3D&pass_ticket=h1CNfjQBx252V%2Bu2LcoK4tZzJrJtP6nCixlUBlpbcBH7DFxElnO6ofmqNnX6IVGK

A Collection of DevOps Solutions

https://mp.weixin.qq.com/s?__biz=MzA5OTAyNzQ2OA==&mid=2649707626&idx=2&sn=25110d2261492ba1a8dc82e71c321404&chksm=88936b09bfe4e21f311f63eacc6586c5f5e3cc0ae614019697c24c9112d3393c173d7918a7a8&scene=126&sessionid=1593326570&key=8c1e0ba910e0936a4955f37495fac69ce2ed16a2426cb7480728266db1b43707160e718fb052dabf33ce5b4296595026a76e83f0064cfafd2a22d738f62766b084fbd60c61ff5585ce211e4835428c7b&ascene=1&uin=MTkzOTU2NzUwMg%3D%3D&devicetype=Windows+10+x64&version=62090070&lang=zh_CN&exportkey=ATtEs%2FYKKDZhr479t41oOmE%3D&pass_ticket=h1CNfjQBx252V%2Bu2LcoK4tZzJrJtP6nCixlUBlpbcBH7DFxElnO6ofmqNnX6IVGK

More Options

Install each add-on individually

Deploy a k8s cluster with Rancher

References


K8s Deployment

Ansible-based k8s cluster deployment (very good): https://zhuanlan.zhihu.com/p/140698089

https://blog.51cto.com/lizhenliang/1983392

https://blog.csdn.net/Giotoolee/article/details/96305277

https://www.kubernetes.org.cn/7189.html

Beginner: basic Linux commands (e.g. yum, ls, top, iptables) and networking basics (e.g. host, bridge); common git/docker commands (docker run/stop/ps/commit/save/exec). At this stage, work through the official docs: Docker Documentation. Intermediate: Linux kernel, namespaces, cgroups; a deep understanding of Docker networking, building network models with third-party tools (Flannel, Calico); a deep understanding of Docker's filesystem and storage internals. Advanced: the ability to tune Docker's network, storage, and other modules; requires some Go, to attempt changes to individual components. GitHub: moby/moby

A complete orchestration solution mainly involves the following parts:

  • Configuration management (ansible, saltstack, jumpserver)
  • Continuous integration and continuous deployment (Jenkins, git, gitlab)
  • Service orchestration (k8s, swarm, mesos, rancher)
  • Network model (host, bridge, Flannel, Calico)
  • Service registry (etcd)
  • Service discovery (confd)
  • Logging platform (ELK, loghub)
  • Monitoring platform (zabbix, cadvisor, prometheus, grafana)
  • Scripting (shell, python)