Learning Kubernetes: Installing k8s
What is k8s?
Kubernetes, K8s for short (the 8 stands in for the eight letters "ubernete" in the middle of the name), is an open-source system for managing containerized applications across multiple hosts on a cloud platform. Its goal is to make deploying containerized applications simple and powerful, and it provides mechanisms for deploying, scheduling, updating, and maintaining applications.
What k8s does
Kubernetes is a container orchestration engine open-sourced by Google; it supports automated deployment, large-scale scaling, and containerized application management. When deploying an application in production, you usually run multiple instances of it so that requests can be load-balanced across them. In Kubernetes you can create multiple containers, each running one application instance, and then use the built-in load-balancing strategy to manage, discover, and access that group of instances, with none of the complex manual configuration this would otherwise demand from operations staff (see the sketch after the feature list below).
Features:
- Portable: supports public cloud, private cloud, hybrid cloud, and multi-cloud setups
- Extensible: modular, pluginized, mountable, composable
- Automated: automatic deployment, automatic restart, automatic replication, automatic scaling/expansion
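To make that concrete, here is a minimal sketch (assuming a working cluster and a reachable nginx image; the name "web" is just an example): a Deployment runs several replicas and a Service load-balances across them.
# run 3 instances of one application
kubectl create deployment web --image=nginx --replicas=3
# expose them behind a single virtual IP; the Service spreads requests across the pods
kubectl expose deployment web --port=80 --target-port=80
# the endpoints list shows the pod IPs the Service balances over
kubectl get endpoints web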
Installing k8s
Preparation
Prepare at least 4 virtual machines.
OS: CentOS 7, each with 2 CPUs / 4 GB RAM
1 master
master:192.168.220.104
3 nodes
node1:192.168.220.105
node2:192.168.220.106
node3:192.168.220.107
(All of these values can be chosen freely.)
Installation steps (using kubeadm)
Step 1: Prepare the environment
Give each server a hostname first and use fixed IP addresses, so that a later IP change cannot break the whole cluster.
[root@master ~]# hostnamectl set-hostname master
[root@node-1 ~]# hostnamectl set-hostname node-1
[root@node-2 ~]# hostnamectl set-hostname node-2
[root@node-3 ~]# hostnamectl set-hostname node-3
# log in again so the shell picks up the new hostname
su - root
master:192.168.220.104
node1:192.168.220.105
node2:192.168.220.106
node3:192.168.220.107
[root@master ~]# cat /etc/centos-release
CentOS Linux release 7.9.2009 (Core)
Step 2: Confirm Docker is installed, start it, and enable it at boot
If Docker is not installed yet, see the Docker installation article in the Docker series.
[root@master ~]# systemctl restart docker
[root@master ~]# systemctl enable docker
[root@master ~]# ps aux|grep docker
root 2190 1.4 1.5 1159376 59744 ? Ssl 16:22 0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 2387 0.0 0.0 112824 984 pts/0 S+ 16:22 0:00 grep --color=auto docker
Step 3: Configure Docker to use systemd as the default cgroup driver
Do this on every server, master and nodes alike, so that Docker and kubelet end up on the same cgroup driver.
cat <<EOF > /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# restart docker to activate the new configuration
systemctl restart docker
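A quick optional check that the driver switch took effect:
# should now report "Cgroup Driver: systemd"
docker info | grep -i 'cgroup driver'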
Step 4: Turn off the swap partition
k8s does not want to use swap for its data: swapping degrades performance, and by default kubelet refuses to start while swap is enabled.
Do this on every server.
# turn swap off immediately (until next boot)
swapoff -a
# turn it off permanently (comment out the swap line in /etc/fstab)
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
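To confirm swap is really off:
# the Swap row should show 0 everywhere
free -m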
Step 5: Add the following entries to the hosts file on every host
cat >> /etc/hosts << EOF
192.168.220.104 master
192.168.220.105 node-1
192.168.220.106 node-2
192.168.220.107 node-3
EOF
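An optional sanity check that the names now resolve:
# each name should print the IP recorded in /etc/hosts
getent hosts master node-1 node-2 node-3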
# on every machine (master and nodes), make the kernel parameters permanent
# by appending them to the file the kernel reads its parameters from
cat <<EOF >> /etc/sysctl.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
# make the kernel re-read the file so the settings take effect
sysctl -p
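One caveat: the two bridge-nf-call parameters only exist once the br_netfilter kernel module is loaded, so if sysctl -p complains about unknown keys, load the module first and persist it across reboots:
modprobe br_netfilter
# load the module automatically at boot
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl -p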
Step 6: Install kubeadm, kubelet, and kubectl
# add the Kubernetes YUM repository (Aliyun mirror)
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# install kubeadm, kubelet, and kubectl at a pinned version
yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6
# pinning the version matters: starting with 1.24, the default container runtime is no longer Docker
# enable kubelet at boot: kubelet is the k8s agent on each node and must always be running
systemctl enable kubelet
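An optional check that the pinned versions landed:
kubeadm version
kubectl version --client
kubelet --version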
Step 7: Deploy the Kubernetes master (run on the master host)
# pre-pull the coredns:1.8.4 image, which is needed later; download it on every machine
[root@master ~]# docker pull coredns/coredns:1.8.4
[root@master ~]# docker tag coredns/coredns:1.8.4 registry.aliyuncs.com/google_containers/coredns:v1.8.4
# run the initialization on the master server
kubeadm init \
--apiserver-advertise-address=192.168.220.104 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
# 192.168.220.104 is the master's IP
# --service-cidr string   Use alternative range of IP address for service VIPs. (default "10.96.0.0/12") services are exposed via DNAT
# --pod-network-cidr string Specify range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.
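If the connection to the registry is slow, the images can also be pre-pulled before init (kubeadm's own preflight output suggests this; same mirror flag as above):
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers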
Output of kubeadm init:
[root@master ~]# kubeadm init --apiserver-advertise-address=192.168.220.104 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.1.0.1 192.168.220.104]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.220.104 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.220.104 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 18.013674 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 2fiwt1.47ss9cjmyaztw58b
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.220.104:6443 --token 2fiwt1.47ss9cjmyaztw58b \
--discovery-token-ca-cert-hash sha256:653c7264622a6935f9b3ec5509570dc288e52143aeb78b139ca3eddf10f2cdf8
Following the prompt above, also run:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Step 8: Join the node servers to the k8s cluster
# run on every node
kubeadm join 192.168.220.104:6443 --token 2fiwt1.47ss9cjmyaztw58b \
--discovery-token-ca-cert-hash sha256:653c7264622a6935f9b3ec5509570dc288e52143aeb78b139ca3eddf10f2cdf8
# the token and hash differ for everyone; find yours in your own kubeadm init output
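If the token has expired (bootstrap tokens last 24 hours by default), a fresh join command can be generated on the master:
kubeadm token create --print-join-command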
Test that every node can ping the master:
[root@node-1 ~]# ping master
PING master (192.168.220.104) 56(84) bytes of data.
64 bytes from master (192.168.220.104): icmp_seq=1 ttl=64 time=0.159 ms
64 bytes from master (192.168.220.104): icmp_seq=2 ttl=64 time=0.246 ms
64 bytes from master (192.168.220.104): icmp_seq=3 ttl=64 time=1.18 ms
64 bytes from master (192.168.220.104): icmp_seq=4 ttl=64 time=1.15 ms
64 bytes from master (192.168.220.104): icmp_seq=5 ttl=64 time=2.04 ms
^C
--- master ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4030ms
rtt min/avg/max/mdev = 0.159/0.957/2.043/0.695 ms
On the master, list all the node servers in the cluster:
[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady control-plane,master 35m v1.23.5
node-1 NotReady <none> 5m54s v1.23.5
node-2 NotReady <none> 36s v1.23.5
node-3 NotReady <none> 29s v1.23.5
# NotReady means the master and nodes are not fully communicating yet; container-to-container networking is not ready
# so a network plugin, flannel or calico, has to be installed
Step 9: Install the flannel network plugin (run on the master)
It lets pods on the master and pods on the nodes communicate with each other.
Network plugins in k8s implement pod-to-pod communication across different hosts:
1. flannel --> overlay network
2. calico
Create the kube-flannel.yml file yourself, with the following content:
[root@master ~]# vim kube-flannel.yml
[root@master ~]# cat kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
Deploy flannel:
[root@master feng]# kubectl apply -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master feng]# ps aux|grep flannel
root 10346 0.7 0.7 1339640 29440 ? Ssl 17:48 0:01 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root 11134 0.0 0.0 112824 988 pts/0 S+ 17:50 0:00 grep --color=auto flannel
[root@master feng]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
10d7d44e2758 dee1cac4dd20 "/opt/bin/flanneld -…" 2 minutes ago Up 2 minutes k8s_kube-flannel_kube-flannel-ds-5ckpg_kube-system_c02e887a-43de-462e-9b2e-d918dead4dbc_0
449988af9f29 registry.aliyuncs.com/google_containers/pause:3.6 "/pause" 3 minutes ago Up 2 minutes k8s_POD_kube-flannel-ds-5ckpg_kube-system_c02e887a-43de-462e-9b2e-d918dead4dbc_0
(The pod listing below was captured from the same cluster several days later, which is why AGE shows 9d and RESTARTS is nonzero; right after installation, expect output like the Step 10 checks further down.)
[root@master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d8c4cb4d-djzfx 1/1 Running 11 (22h ago) 9d
coredns-6d8c4cb4d-tzzlb 1/1 Running 11 (22h ago) 9d
etcd-master 1/1 Running 14 (21h ago) 9d
kube-apiserver-master 1/1 Running 14 (21h ago) 9d
kube-controller-manager-master 1/1 Running 14 (21h ago) 9d
kube-flannel-ds-4cm4l 1/1 Running 11 (22h ago) 9d
kube-flannel-ds-874tn 1/1 Running 13 (21h ago) 9d
kube-flannel-ds-jsp6s 1/1 Running 12 (<invalid> ago) 9d
kube-flannel-ds-pmrxz 1/1 Running 11 (22h ago) 9d
kube-proxy-b4x2f 1/1 Running 11 (22h ago) 9d
kube-proxy-fs4l2 1/1 Running 11 (22h ago) 9d
kube-proxy-llj96 1/1 Running 11 (14h ago) 9d
kube-proxy-rrmlr 1/1 Running 13 (21h ago) 9d
kube-scheduler-master 1/1 Running 14 (21h ago) 9d
Step 10: Check the cluster status
[root@master feng]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 52m v1.23.5
node-1 Ready <none> 22m v1.23.5
node-2 Ready <none> 17m v1.23.5
node-3 Ready <none> 17m v1.23.5
[root@master feng]# kubectl get nodes -o wide   # detailed info per node (nodes are cluster-scoped, so no -n flag is needed)
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
master Ready control-plane,master 55m v1.23.5 192.168.220.104 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.14
node-1 Ready <none> 25m v1.23.5 192.168.220.105 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.14
node-2 Ready <none> 20m v1.23.5 192.168.220.106 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.14
node-3 Ready <none> 20m v1.23.5 192.168.220.107 <none> CentOS Linux 7 (Core) 3.10.0-1160.el7.x86_64 docker://20.10.14
[root@master feng]# kubectl get pod -n kube-system   # pods in the kube-system namespace
NAME READY STATUS RESTARTS AGE
coredns-6d8c4cb4d-25dsk 1/1 Running 0 57m
coredns-6d8c4cb4d-fmztm 1/1 Running 0 57m
etcd-master 1/1 Running 0 57m
kube-apiserver-master 1/1 Running 0 57m
kube-controller-manager-master 1/1 Running 0 57m
kube-flannel-ds-5ckpg 1/1 Running 0 8m58s
kube-flannel-ds-8jnp8 1/1 Running 0 8m58s
kube-flannel-ds-b674r 1/1 Running 0 8m58s
kube-flannel-ds-dffk4 1/1 Running 0 8m58s
kube-proxy-2csc7 1/1 Running 0 27m
kube-proxy-6zswh 1/1 Running 0 22m
kube-proxy-845jw 1/1 Running 0 22m
kube-proxy-fm2xd 1/1 Running 0 57m
kube-scheduler-master 1/1 Running 0 57m
[root@master feng]# kubectl get ns   # list the namespaces, all created by k8s itself
NAME STATUS AGE
default Active 58m
kube-node-lease Active 58m
kube-public Active 58m
kube-system Active 58m
k8s installation complete
With k8s installed, you can follow the official documentation and try some simple cluster deployments.
Kubernetes official documentation: https://kubernetes.io/zh-cn/docs/home/
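As a first smoke test (a minimal sketch; the deployment name "hello" and the nginx image are just examples), deploy something and watch it get scheduled onto the worker nodes:
kubectl create deployment hello --image=nginx --replicas=2
# the pods should land on the nodes and reach Running
kubectl get pods -o wide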