Kubernetes Learning Notes: Setting Up a k8s Cluster Step by Step
Before building the cluster, make sure your Ubuntu host (or other VM) is configured to use the Aliyun apt mirror; otherwise the setup may fail because of network problems.
For details, see:
https://blog.csdn.net/wuyundong123/article/details/117064170?spm=1001.2014.3001.5501
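If the mirror is not configured yet, a minimal sketch looks like this (it assumes a stock Ubuntu installation whose /etc/apt/sources.list still points at archive.ubuntu.com / security.ubuntu.com; adjust for your release if needed):
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
sudo sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/g; s/security.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list
sudo apt update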
1. Installing Docker the easy way
1.1 Install Docker
sudo apt install docker.io
docker version
1.2 Allow a non-root user to run Docker
sudo groupadd docker
sudo usermod -aG docker $USER
newgrp docker
docker version    # or: docker info
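As a quick sanity check that the new group membership works, you can run the tiny hello-world test image (pulled from Docker Hub) as the normal user:
docker run --rm hello-world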
1.3 Change Docker's default Cgroup Driver
Official guide: https://kubernetes.io/docs/setup/production-environment/container-runtimes/
Run docker info | grep Cgroup to check the current Cgroup Driver, then change it to systemd:
sudo vim /etc/docker/daemon.json
## The file should contain:
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
1.4 Restart Docker
systemctl enable docker
systemctl daemon-reload
systemctl restart docker
# Check the Cgroup Driver again:
docker info | grep Cgroup
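If you prefer not to edit the file interactively in vim, the same change can be applied non-interactively, as sketched below. Note that this overwrites /etc/docker/daemon.json, so merge the setting by hand if the file already contains other options:
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
docker info | grep Cgroup    # should now report: Cgroup Driver: systemd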
2. Install the Kubernetes components: kubectl / kubeadm / kubelet
The official installation commands are shown below for reference only. Note: they do not work reliably from networks inside mainland China, so do not actually use them:
sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
2.1 Install basic tools
sudo apt update && sudo apt install -y apt-transport-https
2.2 Configure the apt source to the Aliyun Kubernetes mirror (run the following as root):
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
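After adding the repository, you can check which package versions the mirror actually offers before pinning one in the next step (handy if 1.18.5-00 is no longer available):
sudo apt update
apt-cache madison kubeadm    # list the kubeadm versions the mirror provides
apt-cache madison kubelet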
2.3 Update apt and install
# You can also install the latest version by not pinning a version:
apt install -y kubelet kubeadm kubectl
# This cluster pins a specific version:
sudo apt update
apt install -y kubelet=1.18.5-00
apt install -y kubectl=1.18.5-00
apt install -y kubeadm=1.18.5-00
# Or all in one line:
sudo apt update && apt install -y kubelet=1.18.5-00 && apt install -y kubectl=1.18.5-00 && apt install -y kubeadm=1.18.5-00
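Optionally, put the three packages on hold so that a routine apt upgrade cannot move the cluster to a different Kubernetes version by accident:
sudo apt-mark hold kubelet kubeadm kubectl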
2.4 Verify the installation
kubectl version --client    # or: kubectl version
kubeadm version
kubelet --version
3. Build the Kubernetes cluster
3.1 Initialize the cluster
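Before running kubeadm init (shown below), you can optionally pre-pull the control-plane images from the Aliyun registry, which makes the init step faster and easier to debug:
kubeadm config images pull \
  --image-repository=registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.18.5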
Initialize the control plane on the master node (replace masterNodeIP with the master's IP address):
kubeadm init \
  --image-repository=registry.aliyuncs.com/google_containers \
  --apiserver-advertise-address=masterNodeIP \
  --pod-network-cidr=10.244.0.0/16 \
  --kubernetes-version=v1.18.5 \
  --v=5
If the initialization succeeds, the output ends with something like this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
3.2 Post-init configuration
As a non-root user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
As root:
export KUBECONFIG=/etc/kubernetes/admin.conf
Additionally, enable auto-completion for kubectl to make it more convenient to use:
echo "source <(kubectl completion bash)" >> ~/.bashrc
3.3 Check the cluster status
systemctl status kubelet //check the kubelet service
kubectl cluster-info //show cluster information
kubectl get nodes //list the nodes
kubectl get nodes -o wide //show node details
kubectl get cs //check component status; kube-scheduler and kube-controller-manager should report Healthy
At this point the node STATUS is NotReady. The reason is exactly the hint from the init output, "You should now deploy a pod network to the cluster": a pod network still has to be deployed.
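You can confirm that the missing network plugin is the cause by describing the node and reading its Conditions (replace <node-name> with a name from kubectl get nodes):
kubectl describe node <node-name> | grep -A 8 Conditions
# The Ready condition's message should indicate that the CNI / network plugin is not ready yet.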
3.4 Install a Pod network
For the Kubernetes cluster to work, a Pod network must be installed; otherwise Pods cannot communicate with each other.
Kubernetes supports several networking solutions; I use flannel here because it is simple and convenient.
Official flannel documentation: https://github.com/flannel-io/flannel/blob/master/Documentation/kubernetes.md
The official instructions say to run:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Note: you may hit the error "Unable to connect to the server".
Root cause: the file at https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml cannot be reached directly from this network.
Workaround: the most primitive (clumsy but effective) approach is to download the file through the host machine's network, with a VPN if necessary.
Then run kubectl apply -f kube-flannel.yml. If you have no outside network access at all, that is fine too; the contents of kube-flannel.yml are reproduced below:
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.14.0-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.14.0-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
You can check and verify the result with the following commands:
kubectl get nodes //the nodes should now show STATUS Ready
kubectl get daemonset -n kube-system -l app=flannel
kubectl get pod -n kube-system -o wide -l app=flannel
kubectl get cm -n kube-system -l app=flannel
kubectl get cm -n kube-system -o yaml kube-flannel-cfg
ip -d link show flannel.1
route -n
arp -n
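As an extra end-to-end check of the pod network, you can start two throwaway busybox Pods and ping one from the other; a sketch (the names test-a and test-b are arbitrary):
kubectl run test-a --image=busybox --restart=Never -- sleep 3600
kubectl run test-b --image=busybox --restart=Never -- sleep 3600
kubectl get pod -o wide                  # note the IP of test-b
kubectl exec test-a -- ping -c 3 <IP-of-test-b>
kubectl delete pod test-a test-b         # clean up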
3.5 Add worker nodes to the cluster
kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
The value for the --token argument can be looked up with:
kubeadm token list
The value for --discovery-token-ca-cert-hash can be computed with:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Verify on the master node with:
kubectl get nodes
By default, tokens expire after 24 hours. If the token has expired, generate a fresh join command directly with:
kubeadm token create --print-join-command
3.6 Create a Pod for verification
kubectl get nodes //all nodes should be Ready
//verification:
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pod,svc
Open the following address in a browser:
http://nodeIp:NodePort/
Check the Pod status:
kubectl get pod --all-namespaces
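You can also verify the service with curl from any machine that can reach the node, instead of a browser (replace the placeholders with the real node IP and the NodePort printed by kubectl get svc):
curl http://<nodeIp>:<NodePort>/    # should return the nginx welcome page HTML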
3.7 Deploy the Web UI (Dashboard)
Deploying the Dashboard is no different from deploying any other application: it runs on Kubernetes as an ordinary web application.
Official web-ui-dashboard guide: https://kubernetes.io/zh/docs/tasks/access-application-cluster/web-ui-dashboard/
Note: the official installation may not succeed from inside mainland China, so do not follow the official guide to the letter.
Official Dashboard GitHub releases page: https://github.com/kubernetes/dashboard
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
Steps for a mainland-China network:
Step 1: download and modify recommended.yaml
Two changes are needed:
In the Service spec, set type: NodePort and add nodePort: 30001 under ports, so the Dashboard can be reached from outside the cluster;
Comment out (or delete) the imagePullPolicy line in the Deployment, because the default policy, IfNotPresent, will use the image pulled locally in step 2.
The modified sections look like this:
# In the Deployment, comment out the pull policy:
          #imagePullPolicy: Always
# In the Service:
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
Step 2: prepare the required images
If you create the resources directly, the Pods may get stuck in ContainerCreating because of network problems, so pull the images in advance.
Searching for image in vim shows that two images need to be downloaded:
one is kubernetesui/dashboard:v2.2.0,
the other is kubernetesui/metrics-scraper:v1.0.6
## Pull them with docker:
docker pull kubernetesui/dashboard:v2.2.0
docker pull kubernetesui/metrics-scraper:v1.0.6
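The Dashboard Pods may get scheduled onto a worker node, so the images must be present there as well. If the workers have no internet access either, one option is to pull on a machine that does and copy the images over with docker save / docker load (a sketch; worker1 is a placeholder hostname):
docker save kubernetesui/dashboard:v2.2.0 kubernetesui/metrics-scraper:v1.0.6 | ssh worker1 'docker load'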
Step 3: create the resources from recommended.yaml
If you have no internet access at all, skip steps 1 and 2 and use my copy of the file directly; its contents are as follows:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.2.0
          #imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.6
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
Run:
kubectl apply -f recommended.yaml
Check the result:
kubectl get pods --namespace=kubernetes-dashboard
## Troubleshooting:
kubectl describe pod dashboard --namespace=kubernetes-dashboard
The Dashboard creates its own Deployment and Service in the kubernetes-dashboard namespace:
#Deployment
kubectl get deployments kubernetes-dashboard --namespace=kubernetes-dashboard
#Service
kubectl get service kubernetes-dashboard --namespace=kubernetes-dashboard
If you need to redeploy:
kubectl delete -f recommended.yaml
kubectl apply -f recommended.yaml
Step 4: create the token needed to log in to the Dashboard UI
Official guide: https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
I use the token method:
cat <<EOF > account.yml
# Create Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
# Create ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
Then run:
kubectl apply -f account.yml
Step 5: log in to the Dashboard UI from a browser
URL format: https://nodeIp:nodePort/
Look up the NodePort of the Dashboard Service:
kubectl get service kubernetes-dashboard --namespace=kubernetes-dashboard
Look up which node the Dashboard Pod runs on (that gives you the node IP):
kubectl get pod --all-namespaces -o wide
Note:
Kubernetes system components live in the kube-system namespace, while the Dashboard lives in its own kubernetes-dashboard namespace:
kubectl get pod -o wide --namespace=kubernetes-dashboard
Find the corresponding worker node and use that node's IP.
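The two values can also be pulled out directly instead of reading them from the tables (a convenience sketch using jsonpath and wide output):
# NodePort of the Dashboard Service (30001 if you used the YAML above)
kubectl -n kubernetes-dashboard get service kubernetes-dashboard -o jsonpath='{.spec.ports[0].nodePort}{"\n"}'
# Node that runs the Dashboard Pod (its IP is the nodeIp to use)
kubectl -n kubernetes-dashboard get pod -l k8s-app=kubernetes-dashboard -o wide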
Use the following command to find the token:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
I ran into quite a few problems while building this cluster myself.
If you hit problems too, this article may help:
https://blog.csdn.net/wuyundong123/article/details/117066629