Deploying a K8s Cluster on CentOS 7 Virtual Machines
1. Creating the CentOS 7 Virtual Machines
I created the VMs in VMware, version Workstation 14 Pro.
Workstation 14 Pro download:
Link: https://pan.baidu.com/s/1lrWn740OJepIyXCOi1PB8Q
Extraction code: 5wt9
CentOS 7 x86 image download:
Link: https://pan.baidu.com/s/1Wg0iQmqP0jIzUm2nmimcNw
Extraction code: tlrr
The creation process itself is not covered here. Start by creating one VM named K8Smaster; the master node needs at least 2 GB of memory and two CPU cores. Since VM resources are allocated dynamically, this will not bog down the host machine. Once the master is fully configured, simply clone it to create the remaining nodes.
2. Deployment
First, set up the hostname-to-IP mappings on every VM so that the machines can reach one another by name. Edit the hosts file with the following command:
$ vi /etc/hosts
192.168.100.100 k8smaster
192.168.100.101 k8snode1
192.168.100.102 k8snode2
192.168.100.103 k8snode3
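Each machine's own hostname should also match its entry above; one way to set it is with hostnamectl, run once per VM (hostnames taken from the hosts file above):
$ hostnamectl set-hostname k8smaster # on the master
$ hostnamectl set-hostname k8snode1 # on node 1, and likewise for the other nodes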
The following base components must be installed on both the master and the nodes.
First, install and start Docker:
$ yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
$ yum makecache fast
$ yum -y install docker-ce
$ systemctl start docker
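Optionally, Docker can also be enabled in systemd so that it starts automatically on boot:
$ systemctl enable docker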
After installation, Docker can be tested with the following command:
$ docker run hello-world
Next, prepare to install the core Kubernetes components. To do so, first configure the package repository:
$ vim /etc/yum.repos.d/kubernetes.repo
Then, in the editor, add the following content to kubernetes.repo and save it.
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Finally, install the core Kubernetes components:
$ setenforce 0
$ yum install -y kubelet kubeadm kubectl
$ systemctl enable kubelet && systemctl start kubelet
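On CentOS 7, the official installation notes also recommend letting iptables see bridged traffic, otherwise the kubeadm preflight checks may complain; a minimal sketch:
$ cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system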
Installing and Configuring the Master
On the master node, initialize it with the following commands:
$ swapoff -a
$ kubeadm init --pod-network-cidr 10.244.0.0/16
If the default image registry is unreachable for network reasons, use the following command instead, which pulls the control-plane images from the Aliyun mirror:
$ kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers --ignore-preflight-errors=all
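To diagnose image-pull problems separately from the init itself, the control-plane images can also be pre-pulled with the same mirror flag:
$ kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers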
Following the on-screen prompt, run these commands so that kubectl can access the new cluster:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
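The master can now be checked with kubectl; it will report NotReady until a pod network add-on (such as Flannel, used later in this article) is installed:
$ kubectl get nodes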
Installing and Configuring the Nodes
Once you have the kubeadm join parameters from the master, log in to each node and run the join to add it to the cluster.
The exact command and parameters are printed on the master when initialization succeeds, as shown below.
$ kubeadm join 192.168.100.100:6443 --token lsqgab.i6n2n9qngeevzgqe \
    --discovery-token-ca-cert-hash sha256:581a72e9d3b05ccb12294062fa8dcbab83b75984896e113028135069aef02f88
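If the token has expired (kubeadm tokens are valid for 24 hours by default), a fresh join command can be printed on the master with:
$ kubeadm token create --print-join-command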
3. Re-initializing
Disable the firewall, to ensure that the required ports are freely accessible:
systemctl stop firewalld && systemctl disable firewalld
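Alternatively, if firewalld must stay running, the ports Kubernetes uses can be opened explicitly instead; a sketch for the master (6443 for the API server, 10250 for the kubelet):
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload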
Disable swap:
# disable temporarily
swapoff -a
# disable permanently
vi /etc/fstab
Comment out the following line:
/dev/mapper/centos-swap swap ...
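A commonly used non-interactive equivalent is to comment out every swap entry with sed (review /etc/fstab afterwards):
sed -ri 's/.*swap.*/#&/' /etc/fstab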
Disable SELinux:
# disable temporarily
setenforce 0
# disable permanently (takes effect after a reboot)
vi /etc/sysconfig/selinux
# change the following parameter to disabled
SELINUX=disabled
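The same change can be made non-interactively; this sed assumes the current value is enforcing, as on a default CentOS 7 install:
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/sysconfig/selinux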
Why the firewall must be disabled (an nftables backend compatibility issue that produces duplicate firewall rules):
The iptables tooling can act as a compatibility layer, behaving like iptables but actually configuring nftables. This nftables backend is not compatible with the current kubeadm packages: it causes duplicated firewall rules and breaks kube-proxy.
Why SELinux must be disabled (to allow containers to access the host filesystem):
Setting SELinux in permissive mode by running setenforce 0 and sed ... effectively disables it. This is required to allow containers to access the host filesystem, which is needed by pod networks for example. You have to do this until SELinux support is improved in the kubelet.
When re-creating the cluster, leftover directories from the previous run will cause errors:
$ rm -rf $HOME/.kube # remove the leftover directory from the previous run
$ kubeadm reset
$ systemctl daemon-reload
$ systemctl restart kubelet
$ iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
$ kubeadm init --pod-network-cidr=10.244.0.0/16 --image-repository=registry.aliyuncs.com/google_containers --ignore-preflight-errors=all
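After the re-initialization succeeds, the kubeconfig must be copied again, exactly as in section 2:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config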
4. Common Errors at Startup
For reference, a $HOME/.kube/config generated by kubeadm looks like the following (this example comes from one run; the certificate data and server address are unique to each cluster):
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUMvakNDQWVhZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1UQXlNekExTkRjME1Wb1hEVE14TVRBeU1UQTFORGMwTVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTlJxCjZHa3RuK1lkM2xpKytNS0gwUkpVYXljOXdndlJnYThqYW5EdzNNdlJGUXBkWCtJNE8ybVBxZFNaemJjVkFQTXgKa0ZMWEJnaEZKVGdlejlKVGRKSUlHWUl1MVd5QWw3bjVQS3ZUTkNNWnZJOGpzWFU4SVd2WUJNa1M0MEhFMG81RwpYeUtGaSsxaTFIMHJIYmwremdDanpZYnEyN0FSeVJaYWo0VndxTVpOY1FyVHd3cTVMNGk5VUxvektkMS9ZcWh4Ck5BVmU3d0UrWFkwcTNMZ0hXZmp4blhrSmFGVE1OdGZ6c1o3WmZoejJzUldZZEl4R09pbzYzZkRPbWY3OEt0MzMKYTRoYWpMVld6ZmFjYnIxYm5YZjB2REk2MFQ0L0xWRFhoL0RiTEFlZFV5QmRwa0dGZUkyb2h2Ylc2WDJHaG44TApEWTA5OFlYQ0dEVXpTcUhDakNzQ0F3RUFBYU5aTUZjd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0hRWURWUjBPQkJZRUZEbk1rOUhxNjBLVFQxd2ZaY3RYai9ndFhvVFNNQlVHQTFVZEVRUU8KTUF5Q0NtdDFZbVZ5Ym1WMFpYTXdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBSitxMkJaRUlzMzBMWkx6MG51TApvNHF0dDVCVG0rdU13N3M4czNucCt0amdmbk0ybHZYbjNzam9DK2NoOTV5Wm5RbXFWWW1iNmpuSmtwd2psSzhvCjYrNkFINVJ4U0FGc09xNVIwZFdvY0UwVVc4ZmdML3gvWTJqeHMrZEN3cTh5bjhMWVE0ZGg2R1dpOHJrVU5GZ2EKMW9wSUtWOHg3ZVViU0d2SEx2NEFyVk9Pb3VDWkdSY05INnhJdFFMaHdVVG9MTFI3a01uQllNdngvei9mNXVuMAo3K1EwYmJiOXA4QmtubElkemdxb2czT2FQVmxPc25pb2hvelRoVXNhc3lBZWdPNzBZZlQ3UEY0UU1FYjZXZUY4ClJOMGF6V3pMNG5CaWxMWXFxN1dEckhrOEZ6bDd4ckt4TkVzWnF1MUxPamlWQ25QZDJLR3d3WU04T0tkNzhZMnUKYzlnPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://192.168.125.128:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURJVENDQWdtZ0F3SUJBZ0lJWXR5blpLbUNsOE13RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFd01qTXdOVFEzTkRGYUZ3MHlNakV3TWpNd05UUTNORE5hTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXhhdVRFSng3U0s1Y1prYnoKZFJRTk9QVVZqcVFySkVwVG9KN0lBVllRREp4cDQyUTREVjJwN0JDYmdYTytvL3YweGJHampaRTgvL0Y4QUt2NQo0Z29UMGVKMEY5NHBMOVFhZVVQVERtNVAvdFQ3VjNrT3h0MU4yblZ5dFZqcVB0a3pRS2R5MHlNelVORHVzdnduCmVXNFBEYjBhdW1jbHBJNStwK081Z3EvbG1DZHAxZ3VPdTRLV0xRUGNXNUFTVCtsaG5rWWFzMU1PSGhxTWo0dlUKaExpaU04RW9UUW01NldKNnljNlY5U1NTdjNZZGVVYUlXNUdGREhYa0VGN3JRUWh6cHpDT2tDT0pqQ2hmaGdiUQpmcEdLWmJuSlZ5OGlSN29GYXQxME81c2FKZ2ZhdHBFN2ZrSWp6a2g3eldUeU9TV3JFOG5DblpZcnJLTjVaeWtUCnd4WHpnUUlEQVFBQm8xWXdWREFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RBWURWUjBUQVFIL0JBSXdBREFmQmdOVkhTTUVHREFXZ0JRNXpKUFI2dXRDazA5Y0gyWExWNC80TFY2RQowakFOQmdrcWhraUc5dzBCQVFzRkFBT0NBUUVBU2VHSWxkVU5lR3VpYStWQzYzdXFVYmR1UUN5VEJpMS9aSWlGCjMvc09kbHVndjlGN1RONUczK25IRDBrbXBOWm9odW1ZUHlERWQvUGxIZ0ZTZ2FmWjVUaEJsZStPQWlDbnU2WUIKZ3VUb2dPYUdPY0plRTlBd1NvKzB6eGpWN1Vzdmg2YTFROFVYRUZaUHF4NXRteVJXcVpxNEswbEdoVVpzNFVSMApHVGI0MFVjVjJIVUlrbUUveStvVmRHU2tyM0Y4c2NqRGR5WHJyQWtCSUxWU2c1SWhqUWIzVk01L0sxS1dFZ01SCnlteWJxcmg4NG16K0NiVmt5Vjc2ME5TUHVJVytvKzVDWndaU1krd3o0ZllaWW1yc2NSSGNrNW9ZZzZZVUV4QUUKQm0wUjJRYlRrTVBlamNORkxLVmpkNU1vektVWUhEUkNLV1M2SFl0bG52ekNuVGpTR2c9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcEFJQkFBS0NBUUVBeGF1VEVKeDdTSzVjWmtiemRSUU5PUFVWanFRckpFcFRvSjdJQVZZUURKeHA0MlE0CkRWMnA3QkNiZ1hPK28vdjB4YkdqalpFOC8vRjhBS3Y1NGdvVDBlSjBGOTRwTDlRYWVVUFREbTVQL3RUN1Yza08KeHQxTjJuVnl0VmpxUHRrelFLZHkweU16VU5EdXN2d25lVzRQRGIwYXVtY2xwSTUrcCtPNWdxL2xtQ2RwMWd1Twp1NEtXTFFQY1c1QVNUK2xobmtZYXMxTU9IaHFNajR2VWhMaWlNOEVvVFFtNTZXSjZ5YzZWOVNTU3YzWWRlVWFJClc1R0ZESFhrRUY3clFRaHpwekNPa0NPSmpDaGZoZ2JRZnBHS1pibkpWeThpUjdvRmF0MTBPNXNhSmdmYXRwRTcKZmtJanpraDd6V1R5T1NXckU4bkNuWllycktONVp5a1R3eFh6Z1FJREFRQUJBb0lCQURhcS9mQlJKck55TFhISAoyNXNjb1krSUVKOHpmZzc2VTJpUG9VYmxVMmo1ZFR1RFF4RkhQekJmWTNLSTNVZWk3ZzRpMDlYYVBpR1cyckdnCjNtb0tXWExwaXl2eXNEZGZGTGRHNzc4RStUREVISU1Ub3VlUzJ1NDVIekZTVnU1c3lZVHZDbzJrSlpRTFJJalIKdmVVU2NDMWZpRjNYR3cwSXI3U2xBWEJJVDFvbHg1YkdBQ0JLUisvSENPSjlVSjFBOXRha2RIdWJQZExkVDN3UQpoZXlMelE0TG9LU1IweUFaalpaTWZuRmJYSFJuU0NsVUt4R0phOElQRHhTOC9pREpQdWhwYUVZaVB0M09KeTZ6ClFhZld5bFdhWmJRMWpGM09PelF4TTlsNGt0MVBaTHVLWW9ibnoxaTBoNnl4WDlMR3NpM0VhRzExR0hlUDhvYnIKZm5oUytFVUNnWUVBMFFJS3l2YWJ1ajVhZDJiYk8ydFZlNzkyc090dU9CR3U0RTNkMy9WSmVuUHlzNFVPYzNERApnbm90NlhlbDdFOGVoYmZBbWpoWUdJVTlEKzRJdkNiSDFaZWR6M3M5T1BsNE9rSnp6eiszRkFKc3FmV2VHbHVmCm5PUjVQdDRhbUx5RGtiT29UUWJqUk5hU0h0dnE3Mnpzcnd0L3BkWGJYZ1IwUFpvYjFUeS9Mb3NDZ1lFQThoejEKdjlNTVpkcUpBeklBbVk3T3kxWXU2enBLQjQxWUJYSDRSdFVaTEZQZjBiTjdGc0xLbHZnMlVwczI5V3lobUdXawpHQUYxOFJPVGZWRGY2UXpKYlpzU3FMY3FIKzNDbnI5c0w4d2QzK0FpV2JEdExzMFFZVW1Venh4d3NpUmdITmliClAyK1ZXN3VrNW5lRGxHMzZvSWVZb0RGdFozV2tlZy9qQ1NxVkU2TUNnWUVBcVU4dk1RVWVWM3VsU3k0ZUQvODkKeXpYcFR4NFlOZ0ZWR1V6YW5FNldERVVhNlFPekZoN1ZzYitKcTZPSjNHaW5RQWovVTY2cTVvb2dVZVF3WFVKSgpCU1NCNlE1YkpPa3AxSC82VW51NXNkTFk5Y0VMSnl6cm1tdVdNREE1ZVZyVWRkWUVVd2x1VjFnK0hCTm9PRFdUCmNhVXQ3VWZWSVU4WVhzS1ZJMkxIT0VzQ2dZQU0wcW5WV2drckQ5TDMzMXNXeHZCKzVuYWZzTHVoQU1ScnJXaVgKMzh0d2hKU3pGNDFxWERDOHBETEVWMEltNTNUN2pFNlBrdXc3TTIwNVV1STVCcHRZZWNFWVBITTNzN0QrRldkVwpkTG9VVkZ1ZFluaDlaUkQ4QmhpaWk0QVFmMHF6M0drRWlCVmlBV012Ylo4RGFudStxcy9UbENxV015M2Q1UitDCktjWXhmd0tCZ1FERmFpR2NSWGdzTm54a20wQ05ERk14SktMSlZpU0ZWV2t5R1N2eGxZM2Y2aVUzQ0ZnSmh3enIKOWQrZGhnR2V2Sml4S3RBdzl4cDczTjY1WmtCMDd4NTBHazhseWZzdmpxVkFvMU1QQitNcE45WmpGOG1tZTlYVwpUKzB1NjhtbVZNRXg0N2orbFhDQ0h5VGk2VzkyL0t6MmNkNjVaTUIvY2xlWEkxUHZJTHQwa2c9PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
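The same file can be printed at any time, with the certificate fields redacted, via:
$ kubectl config view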
Whenever K8s is restarted and pods have to be re-created, they can sit in the ContainerCreating state for a long time. The following command keeps watching pod status:
[root@k8snode1 ~]# kubectl get pods -w
NAME                       READY   STATUS              RESTARTS   AGE
examplepodforport          0/1     ContainerCreating   0          3m14s
examplepodforvolumemount   0/2     ContainerCreating   0          108s
^Z
[1]+  Stopped                 kubectl get pods -w
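To see why a pod is stuck, its events can be inspected with kubectl describe (the pod name is taken from the listing above):
[root@k8snode1 ~]# kubectl describe pod examplepodforvolumemount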
If the pods never reach the Running state, the likely cause is a missing network add-on; Flannel can be used. First check the system pods:
[root@k8snode1 ~]# kubectl get pod --namespace=kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-7f6cbbb7b8-8876r            1/1     Running   0          17m
coredns-7f6cbbb7b8-mwj6d            1/1     Running   0          17m
etcd-k8smaster                      1/1     Running   0          18m
kube-apiserver-k8smaster            1/1     Running   0          18m
kube-controller-manager-k8smaster   1/1     Running   0          18m
kube-proxy-gkhnj                    1/1     Running   0          15m
kube-proxy-qcs7g                    1/1     Running   0          17m
kube-scheduler-k8smaster            1/1     Running   0          18m
A first attempt failed because the pasted URL was split onto a new line, leaving -f without its argument:
[root@k8snode1 ~]# kubectl apply -f
Error: flag needs an argument: 'f' in -f
See 'kubectl apply --help' for usage.
[root@k8snode1 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@k8snode1 ~]# kubectl get pods
NAME                       READY   STATUS              RESTARTS   AGE
examplepodforport          1/1     Running             0          5m44s
examplepodforvolumemount   0/2     ContainerCreating   0          4m18s
[root@k8snode1 ~]# kubectl get pod --namespace=kube-system
NAME                                READY   STATUS    RESTARTS   AGE
coredns-7f6cbbb7b8-8876r            1/1     Running   0          19m
coredns-7f6cbbb7b8-mwj6d            1/1     Running   0          19m
etcd-k8smaster                      1/1     Running   0          20m
kube-apiserver-k8smaster            1/1     Running   0          20m
kube-controller-manager-k8smaster   1/1     Running   0          20m
kube-flannel-ds-6dlhp               1/1     Running   0          12s
kube-flannel-ds-fpn4x               1/1     Running   0          12s
kube-proxy-gkhnj                    1/1     Running   0          17m
kube-proxy-qcs7g                    1/1     Running   0          19m
kube-scheduler-k8smaster            1/1     Running   0          20m
[root@k8snode1 ~]# kubectl get pods -w
NAME                       READY   STATUS    RESTARTS   AGE
examplepodforport          1/1     Running   0          5m55s
examplepodforvolumemount   2/2     Running   0          4m29s
^Z
[2]+  Stopped                 kubectl get pods -w