(K8s Practice 0) Deploying a Kubernetes (v1.14.2) Cluster on CentOS 7.6
System environment:
I. Docker installation
1. Install dependencies
[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
2. Set up the Docker repository
[root@master ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
3. List the available Docker versions
[root@master ~]# yum list docker-ce --showduplicates | sort -r
4. Install a specific Docker version (choose as needed)
[root@master ~]# yum install docker-ce-18.09.6 docker-ce-cli-18.09.6 containerd.io
5. Start Docker
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker
6. Enable command completion: install and load bash-completion
[root@master ~]# yum -y install bash-completion
[root@master ~]# source /etc/profile.d/bash_completion.sh
7. Configure a registry mirror for faster pulls (apply for an accelerator address with an Alibaba Cloud account)
[root@master ~]# echo '{"registry-mirrors": ["https://m2to419r.mirror.aliyuncs.com"]}' > /etc/docker/daemon.json
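Since daemon.json must be valid JSON for the daemon to start, it can be worth validating the file before restarting Docker. A minimal sketch, validating against a temp file with python3's json module (the paths here are illustrative, not the real /etc/docker/daemon.json):

```shell
# Write the mirror config to a scratch file and check it parses as JSON
# before copying it into place; a malformed daemon.json keeps dockerd
# from starting at all.
tmpconf=$(mktemp)
echo '{"registry-mirrors": ["https://m2to419r.mirror.aliyuncs.com"]}' > "$tmpconf"
# python3 -m json.tool exits non-zero on invalid JSON
python3 -m json.tool "$tmpconf" > /dev/null && echo "daemon.json OK"
rm -f "$tmpconf"
```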
8. Restart the service
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
9. Verify
[root@master ~]# docker version
Client:
 Version:           18.09.6
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        481bc77156
 Built:             Sat May 4 02:34:58 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.6
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.8
  Git commit:       481bc77
  Built:            Sat May 4 02:02:43 2019
  OS/Arch:          linux/amd64
  Experimental:     false
Note: Docker installs to /var/lib/docker by default. Images can consume a lot of disk space later on, so you may want to relocate the storage directory as follows.
1. Stop Docker
[root@master ~]# systemctl stop docker
2. Move the data to the target directory
[root@master ~]# cd /var/lib/
[root@master ~]# mv docker /data/
3. Create a symlink
[root@master ~]# ln -s /data/docker/ /var/lib/docker
4. Start Docker
[root@master ~]# systemctl start docker
5. Verify the change
[root@master ~]# docker info|grep 'Docker Root'
Docker Root Dir: /data/docker
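The move-and-symlink steps above can be dry-run against throwaway directories to see the effect before touching the real daemon data. A sketch (all paths are scratch paths, not the real /var/lib/docker or /data):

```shell
# Simulate the relocation: move a directory aside, symlink it back,
# and confirm the data is still reachable through the original path.
workdir=$(mktemp -d)
mkdir -p "$workdir/var/lib/docker" "$workdir/data"
echo imagedata > "$workdir/var/lib/docker/blob"        # stand-in for image data
mv "$workdir/var/lib/docker" "$workdir/data/"           # move the data
ln -s "$workdir/data/docker" "$workdir/var/lib/docker"  # symlink back
cat "$workdir/var/lib/docker/blob"                      # still readable via the old path
rm -rf "$workdir"
```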
II. Kubernetes installation preparation
1. Set the hostname
[root@master ~]# hostnamectl set-hostname master
Log out and back in for the new hostname to take effect.
2. Edit the hosts file
[root@master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.3.107 master
192.168.2.110 node1
192.168.2.111 node2
3. Verify the MAC address and product_uuid; they must be unique on every node
[root@master ~]# cat /sys/class/net/ens32/address
00:0c:29:53:4e:c3
[root@master ~]# cat /sys/class/dmi/id/product_uuid
BB714D56-895D-BBC9-4879-C0BC0C534EC3
4. Disable swap
Temporarily:
[root@master ~]# swapoff -a
Permanently:
[root@master ~]# sed -i.bak '/swap/s/^/#/' /etc/fstab
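The sed expression comments out every line in /etc/fstab that mentions swap, and -i.bak keeps a backup of the original file. A sketch against a sample fstab copy (the two fstab lines below are illustrative):

```shell
# Run the same sed command on a scratch copy of an fstab and inspect
# the result; the real command edits /etc/fstab in place.
fstab=$(mktemp)
cat > "$fstab" << 'EOF'
/dev/mapper/centos-root /       xfs     defaults        0 0
/dev/mapper/centos-swap swap    swap    defaults        0 0
EOF
sed -i.bak '/swap/s/^/#/' "$fstab"
grep '^#' "$fstab"        # the swap line is now commented out
ls "$fstab.bak"           # -i.bak preserved the original
rm -f "$fstab" "$fstab.bak"
```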
5. Adjust kernel parameters
Temporarily:
[root@master ~]# sysctl net.bridge.bridge-nf-call-iptables=1
[root@master ~]# sysctl net.bridge.bridge-nf-call-ip6tables=1
Permanently:
[root@master ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
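The file can also be created non-interactively with a heredoc; settings under /etc/sysctl.d/ are applied with `sysctl --system` or at the next boot. A sketch writing to a temp path instead of the real /etc/sysctl.d/k8s.conf:

```shell
# Create the sysctl fragment with a heredoc (writing to a scratch file here;
# the real target is /etc/sysctl.d/k8s.conf, followed by `sysctl --system`).
conf=$(mktemp)
cat > "$conf" << 'EOF'
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
grep -c 'bridge-nf-call' "$conf"   # both settings are present
rm -f "$conf"
```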
6. Change the Cgroup Driver to silence kubeadm's warning.
Edit daemon.json and add '"exec-opts": ["native.cgroupdriver=systemd"]':
[root@master ~]# more /etc/docker/daemon.json
{
  "registry-mirrors": ["https://m2to419r.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
7. Reload and restart Docker
[root@master ~]# systemctl daemon-reload
[root@master ~]# systemctl restart docker
8. Set up the Kubernetes repository
1. Add the Kubernetes repo
[root@master ~]# more /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
- the string in [] is the repository id; it must be unique and identifies the repository
- name: the repository name, free-form
- baseurl: the repository URL
- enabled: whether the repository is enabled; 1 (the default) means enabled
- gpgcheck: whether to verify the signatures of packages from this repository; 1 means verify
- repo_gpgcheck: whether to verify the repository metadata (the package list); 1 means verify
- gpgkey=URL: location of the public key file used for signature checks; required when gpgcheck is 1, unnecessary when it is 0
2. Refresh the cache
[root@master yum.repos.d]# yum clean all
[root@master yum.repos.d]# yum -y makecache
III. Master node installation
1. List available versions
[root@master ~]# yum list kubelet --showduplicates | sort -r
2. Install kubelet, kubeadm, and kubectl
[root@master ~]# yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2
If you run yum install -y kubelet kubeadm kubectl without specifying versions, the latest release is installed.
2.1 Package descriptions
- kubelet: runs on every cluster node and is responsible for starting Pods, containers, and other objects
- kubeadm: the command-line tool that initializes and bootstraps the cluster
- kubectl: the command line for talking to the cluster; it deploys and manages applications, inspects resources, and creates, deletes, and updates components
2.2 Start kubelet
[root@master ~]# systemctl enable kubelet && systemctl start kubelet
kubectl command completion:
[root@master ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@master ~]# source .bash_profile
3. Download the images
3.1 Image download script
[root@master ~]# vim images.sh
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/google_containers
version=v1.14.2
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
  docker pull $url/$imagename
  docker tag $url/$imagename k8s.gcr.io/$imagename
  docker rmi -f $url/$imagename
done
url is the Alibaba Cloud registry address; version is the Kubernetes version being installed.
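The awk step in the script strips the k8s.gcr.io/ prefix so the bare name:tag can be pulled from the mirror and re-tagged. A standalone sketch of that transformation (the sample list approximates what `kubeadm config images list` prints for v1.14.2):

```shell
# Demonstrate the name extraction the script relies on, without needing
# kubeadm installed: split each line on '/' and keep the second field.
kubeadm_output='k8s.gcr.io/kube-apiserver:v1.14.2
k8s.gcr.io/kube-controller-manager:v1.14.2
k8s.gcr.io/kube-scheduler:v1.14.2
k8s.gcr.io/kube-proxy:v1.14.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1'
# $2 is the bare name:tag used in the pull/tag/rmi loop
echo "$kubeadm_output" | awk -F '/' '{print $2}'
```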
3.2 Download the images
Run images.sh to pull the images for this version; grant execute permission before running the script:
[root@master ~]# chmod +x images.sh
[root@master ~]# ./images.sh
[root@master ~]# docker images
4. Initialize the master
4.1 Initialization
[root@master ~]# kubeadm init --apiserver-advertise-address 192.168.3.107 --pod-network-cidr=10.244.0.0/16
--apiserver-advertise-address specifies the master's interface and --pod-network-cidr specifies the Pod network range; the flannel network add-on is used here. 192.168.3.107 is the master host's IP.
Note: save the kubeadm join command from the output; it will be run later on each node to join it to the cluster.
kubeadm join 192.168.3.107:6443 --token iegn1r.sc93aaob0gvs5y2q --discovery-token-ca-cert-hash sha256:cd80d74c457c804389c34d90cb006e719dae1946e7f7fe3dfb1f774b0e83f526
4.2 Load the environment variable
[root@master ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master ~]# source .bash_profile
All operations in this article are performed as root. For a non-root user, run the following instead:
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
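The three non-root steps can be dry-run against a scratch HOME to see what they do (the paths below are stand-ins; the real source file is /etc/kubernetes/admin.conf, and the real commands use cp -i):

```shell
# Simulate the non-root kubeconfig setup: create .kube, copy the admin
# config into it, and hand ownership to the current user.
home=$(mktemp -d)
conf=$(mktemp)                      # stand-in for /etc/kubernetes/admin.conf
echo "apiVersion: v1" > "$conf"
mkdir -p "$home/.kube"
cp "$conf" "$home/.kube/config"
chown $(id -u):$(id -g) "$home/.kube/config"
head -1 "$home/.kube/config"        # the config is in place and readable
rm -rf "$home" "$conf"
```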
5. Install the Pod network. The link below is my own copy; keep the file locally if you can.
[root@master tools]# kubectl apply -f https://yyf-docker-image.oss-cn-shenzhen.aliyuncs.com/k8s/kubectl1.14.2/kube-flannel.yml
6. Master node configuration
taint: if a node carries a taint, Pods are not allowed to run on that node
6.1 Remove the master's default taint
By default the cluster does not schedule Pods on the master. If you do want Pods scheduled on the master, run the following:
View the taint:
[root@master tools]# kubectl describe node master|grep -i taints
Remove the default taint:
[root@master tools]# kubectl taint nodes master node-role.kubernetes.io/master-
node/master untainted
IV. Node installation
1. Install kubelet, kubeadm, and kubectl
Same as on the master node.
2. Download the images
Same as on the master node.
3. Join the cluster
The following operations are performed on the master node:
3.1 List tokens
[root@master tools]# kubeadm token list
3.2 Generate a new token
[root@master tools]# kubeadm token create
7ah2e7.n9vbb3u94g6861xk
3.3 Generate a new CA certificate hash
[root@master tools]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
cd80d74c457c804389c34d90cb006e719dae1946e7f7fe3dfb1f774b0e83f526
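The pipeline computes the SHA-256 of the DER-encoded public key of the cluster CA certificate. Its behavior can be checked without a cluster by running the same commands against a throwaway self-signed certificate (the subject name and paths below are illustrative):

```shell
# Generate a scratch CA-like cert, then run the exact hash pipeline
# used for kubeadm's --discovery-token-ca-cert-hash value.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" 2>/dev/null
hash=$(openssl x509 -pubkey -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "$hash"      # 64 hex characters, passed to kubeadm join as sha256:<hash>
rm -rf "$dir"
```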
3.4 Join the nodes to the cluster
Run the following on each of the two nodes (the command was printed during the master setup step):
[root@node1 ~]# kubeadm join 192.168.3.107:6443 --token iegn1r.sc93aaob0gvs5y2q --discovery-token-ca-cert-hash sha256:cd80d74c457c804389c34d90cb006e719dae1946e7f7fe3dfb1f774b0e83f526
View the nodes:
[root@master ~]# kubectl get nodes
V. Dashboard installation
1. Download the yaml
[root@master ~]# wget https://yyf-docker-image.oss-cn-shenzhen.aliyuncs.com/k8s/k
2. Configure the yaml
2.1 Modify the image address (already modified; skip)
[root@master ~]# sed -i 's/k8s.gcr.io/registry.cn-hangzhou.aliyuncs.com\/kuberneters/g' kubernetes-dashboard.yaml
The default image registry is not reachable from here, so it is replaced with an Alibaba Cloud mirror.
2.2 External access (already modified; skip)
[root@master ~]# sed -i '/targetPort:/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' kubernetes-dashboard.yaml
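The effect of this sed command can be previewed on a minimal Service snippet before touching the real kubernetes-dashboard.yaml (the snippet below is a simplified stand-in, not the actual dashboard manifest):

```shell
# Run the same GNU sed append against a sample Service fragment:
# it inserts a nodePort line after targetPort and a type: NodePort line.
svc=$(mktemp)
cat > "$svc" << 'EOF'
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
EOF
sed -i '/targetPort:/a\ \ \ \ \ \ nodePort: 30001\n\ \ type: NodePort' "$svc"
grep -n 'nodePort: 30001\|type: NodePort' "$svc"   # both lines were inserted
rm -f "$svc"
```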
2.3 Add an administrator account
[root@master ~]# cat >> kubernetes-dashboard.yaml << EOF
---
# ------------------- dashboard-admin ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
EOF
3. Deploy and access
3.1 Deploy the Dashboard
[root@master ~]# kubectl apply -f kubernetes-dashboard.yaml
3.2 Check status
[root@master opt]# kubectl get deployment kubernetes-dashboard -n kube-system
NAME READY UP-TO-DATE AVAILABLE AGE
kubernetes-dashboard 1/1 1 1 2d20h
[root@master opt]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-fb8b8dccf-7r8dw 1/1 Running 161 3d1h 10.244.0.15 master <none> <none>
coredns-fb8b8dccf-sw6wf 1/1 Running 163 3d1h 10.244.0.17 master <none> <none>
etcd-master 1/1 Running 5 3d1h 192.168.3.107 master <none> <none>
kube-apiserver-master 1/1 Running 197 3d1h 192.168.3.107 master <none> <none>
kube-controller-manager-master 1/1 Running 8 3d1h 192.168.4.64 master <none> <none>
kube-flannel-ds-amd64-gx88f 1/1 Running 3 3d 192.168.3.107 master <none> <none>
kube-flannel-ds-amd64-qgxql 1/1 Running 3 2d21h 192.168.2.110 node1 <none> <none>
kube-flannel-ds-amd64-r92jq 1/1 Running 3 2d21h 192.168.2.111 node2 <none> <none>
kube-proxy-d28jv 1/1 Running 3 3d1h 192.168.3.107 master <none> <none>
kube-proxy-d7nrl 1/1 Running 3 2d21h 192.168.4.65 node2 <none> <none>
kube-proxy-p7fft 1/1 Running 3 2d21h 192.168.2.110 node1 <none> <none>
kube-scheduler-master 1/1 Running 10 3d1h 192.168.3.107 master <none> <none>
kubernetes-dashboard-7b87f5bdd6-ndvtd 1/1 Running 4 2d20h 10.244.2.16 node1 <none> <none>
[root@master opt]# kubectl get services -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 3d1h
kubernetes-dashboard NodePort 10.105.12.125 <none> 443:30001/TCP 2d20h
3.3 Retrieve the login token
[root@master ~]# kubectl describe secrets -n kube-system dashboard-admin
Token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tN202djUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYjQ3MTM3OTYtOGY1ZC0xMWVhLWJiZjEtMDAwYzI5NTM0ZWMzIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.09N1uXF2Q0mSW2vLCc3NBONSELk3wRXne6TImkYnFx7JpXNKZlwzNbvSqKf-1fU6Kntjul73HSsmwjbDj-XhSPRh8DWKQPCPfN1ViZAdvbwxo7q0rmUU2UgDoBIkhA3iwczMYoAEV1mgRuDA-ljoNoLEqCr_HLcYRUakdKfABR2hB5J8Oej-RU5OQGt0LUfgXfTl5_QI_Yoh2H4bW_MncNIrqUeAnVHNU1rO1fwaEdsofPywnUz4jJIJ0yA5JWXIqvnVOP-R0DJ5h0mTuDpOixRgLzzCL9XJ_u7Ck30Gp9GQigshHwTF3Hmo3ChKH-U2QBiUXDHJz0gZdMRpk81EWQ
3.4 Access
Open https://MasterIP:30001 in Firefox.
Log in with the token (retrieved with the command above).
The Dashboard provides cluster management, workloads, service discovery and load balancing, storage, ConfigMaps, log viewing, and more.
VI. Cluster test
1. Deploy an application
1.1 From the command line
[root@master ~]# kubectl run httpd-app --image=httpd --replicas=3
1.2 From a configuration file
[root@master ~]# more nginx.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      restartPolicy: Always
      containers:
      - name: nginx
        image: nginx:latest
[root@master ~]# kubectl apply -f nginx.yml
2. Status checks
2.1 Check node status
[root@master opt]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 3d1h v1.14.2
node1 Ready <none> 2d21h v1.14.2
node2 Ready <none> 2d21h v1.14.2
2.2 Check Pod status. The default namespace is default; add --all-namespaces to list Pods in every namespace.
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-9d4cf4f77-4dccf 1/1 Running 0 40s
nginx-9d4cf4f77-tjcb6 1/1 Running 0 40s
nginx-9d4cf4f77-v7r4q 1/1 Running 0 40s
2.3 Check the replica count
[root@master yaml]# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 3/3 3 3 2m33s
[root@master yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-9d4cf4f77-4dccf 1/1 Running 0 2m37s 10.244.1.14 node2 <none> <none>
nginx-9d4cf4f77-tjcb6 1/1 Running 0 2m37s 10.244.0.18 master <none> <none>
nginx-9d4cf4f77-v7r4q 1/1 Running 0 2m37s 10.244.2.18 node1 <none> <none>
The three nginx replica Pods are spread evenly across the three nodes.
2.4 Inspect the details of the newly created nginx Deployment
[root@master yaml]# kubectl describe deployment nginx
Name: nginx
Namespace: default
CreationTimestamp: Sat, 09 May 2020 10:53:47 +0800
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision: 1
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"nginx","namespace":"default"},"spec":{"replica...
Selector: app=nginx
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:latest
Port: <none>
Host Port: <none>
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-9d4cf4f77 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 6m10s deployment-controller Scaled up replica set nginx-9d4cf4f77 to 3
2.5 Check the status of the core cluster components
[root@master yaml]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
This completes the deployment of the Kubernetes (v1.14.2) cluster on CentOS 7.6.