Docker & K8s Installation and Deployment Tutorial
I. Overview
This tutorial covers an offline (intranet) installation; the machines cannot connect to the Internet.
The installation packages are here:
Link: https://pan.baidu.com/s/1BpadtQcDGHzKYY1zEbRLeQ  Extraction code: 649m
II. Installing Docker
1. Upload
Run rz -y and select the installation package.
2. Extract
tar -xvf docker-18.06.1-ce.tgz
3. Move the files
Copy the extracted docker files into the /usr/bin/ directory:
cp docker/* /usr/bin/
4. Register the service
Register Docker as a systemd service:
vim /etc/systemd/system/docker.service
Copy the following content into docker.service:
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
5. Start
chmod +x /etc/systemd/system/docker.service # add execute permission to the unit file
systemctl daemon-reload # reload the unit configuration files
systemctl start docker # start Docker
systemctl enable docker.service # enable start on boot
6. Verify
systemctl status docker # check Docker's status
docker -v # check the Docker version
That wraps up installing and starting Docker; no problems were encountered. Note that Docker must be installed on every node.
III. Installing K8s
1. Steps common to all nodes
1.1 Upload the installation packages
Run rz -y and select k8s-offline-install.zip.
1.2 Extract
unzip k8s-offline-install.zip
Inside the extracted directory there is a docker_images archive, which also needs to be extracted.
The multi-part archive is awkward to extract on Linux; you can extract it on Windows and upload the result back. After extraction there are 11 files.
Load these images into Docker
(copy all of the commands below, paste them into the terminal, and press Enter)
docker load < /root/package/k8s-images/docker_images/etcd-amd64_v3.1.10.tar
docker load < /root/package/k8s-images/docker_images/flannel_v0.9.1-amd64.tar
docker load < /root/package/k8s-images/docker_images/k8s-dns-dnsmasq-nanny-amd64_v1.14.7.tar
docker load < /root/package/k8s-images/docker_images/k8s-dns-kube-dns-amd64_1.14.7.tar
docker load < /root/package/k8s-images/docker_images/k8s-dns-sidecar-amd64_1.14.7.tar
docker load < /root/package/k8s-images/docker_images/kube-apiserver-amd64_v1.9.0.tar
docker load < /root/package/k8s-images/docker_images/kube-controller-manager-amd64_v1.9.0.tar
docker load < /root/package/k8s-images/docker_images/kube-proxy-amd64_v1.9.0.tar
docker load < /root/package/k8s-images/docker_images/kubernetes-dashboard_v1.8.1.tar
docker load < /root/package/k8s-images/docker_images/kube-scheduler-amd64_v1.9.0.tar
docker load < /root/package/k8s-images/docker_images/pause-amd64_3.0.tar
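To confirm that all of the images were imported, you can list them afterwards:
docker images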
1.3 Disable SELinux and the firewall
[root@hadoop04 ~]# setenforce 0
[root@hadoop04 ~]# sed -i "s/SELINUX=enforcing/SELINUX=permissive/g" /etc/selinux/config
[root@hadoop04 ~]# systemctl disable firewalld.service
[root@hadoop04 ~]# systemctl stop firewalld.service
1.4 Edit the hosts file
Use local hostname resolution and give each node a name:
[root@hadoop04 ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.20.11 master
172.16.20.12 node1
172.16.20.13 node2
1.5 Configure kernel routing parameters
This prevents kubeadm from reporting routing warnings.
[root@hadoop04 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@hadoop04 ~]# sysctl --system
1.6 Disable swap
[root@hadoop04 ~]# swapoff -a
# Comment out the line containing swap. You can run the command below directly, or open /etc/fstab with vi and comment it out by hand.
[root@hadoop04 ~]# sed -i 's/.*swap/#&/' /etc/fstab
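After the sed command, the swap entry in /etc/fstab should end up commented out, roughly like the line below (the device path here is only an illustrative example; yours will differ):
#/dev/mapper/centos-swap swap                    swap    defaults        0 0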
1.7 Install the kubelet, kubeadm, and kubectl packages
That is, install the rpm files from the package:
[root@hadoop04 k8s-images]# rpm -ivh socat-1.7.3.2-2.el7.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm kubelet-1.9.0-0.x86_64.rpm kubectl-1.9.0-0.x86_64.rpm kubeadm-1.9.0-0.x86_64.rpm
1.8 Enable start on boot
Only enable is needed at this point:
[root@hadoop04 k8s-images]# systemctl enable kubelet
1.9 Edit the kubelet configuration
Set the cgroup driver. The kubelet's default cgroup driver differs from Docker's: Docker defaults to cgroupfs while the kubelet defaults to systemd, so switch the kubelet to cgroupfs to match Docker.
[root@hadoop04 k8s-images]# cp -a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf /etc/systemd/system/kubelet.service.d/10-kubeadm.conf_bak
[root@hadoop04 k8s-images]# sed -i "s/systemd/cgroupfs/g" /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
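For reference, the line that the sed command rewrites in 10-kubeadm.conf usually looks like the following before and after the change (a sketch based on the stock kubeadm 1.9 drop-in; your file may differ slightly):
# before
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
# after
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"
You can confirm Docker's own driver with docker info | grep -i cgroup, which should report cgroupfs.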
1.10 Reload systemd
[root@hadoop04 k8s-images]# systemctl daemon-reload
2. Master node steps
2.1 Set up SSH trust between nodes
Let the master node reach the node machines over SSH (the argument after ssh-copy-id is the hostname you gave the node in the hosts file in step 1.4):
[root@hadoop04 k8s-images]# ssh-keygen -t rsa -f ~/.ssh/id_rsa -N "" -q
[root@hadoop04 k8s-images]# ssh-copy-id node1
[root@hadoop04 k8s-images]# ssh-copy-id node2
2.2 Reset the environment
[root@hadoop04 k8s-images]# kubeadm reset
2.3 Initialize with kubeadm
(Note: save the kubeadm join xxx line that this command prints; the node machines will need it shortly.)
[root@hadoop04 k8s-images]# kubeadm init --kubernetes-version=v1.9.0 --pod-network-cidr=10.244.0.0/16
Kubernetes supports multiple network plugins such as flannel, weave, and calico. Since flannel is used here, the --pod-network-cidr parameter must be set. 10.244.0.0/16 is the default subnet configured in kube-flannel.yml; if you need to change it, just make the --pod-network-cidr value passed to kubeadm init and the subnet in kube-flannel.yml the same. (Usually no change is needed; see the snippet below.)
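For reference, the subnet sits in the net-conf.json block of kube-flannel.yml, which looks roughly like this (a sketch of the stock flannel v0.9.1 manifest; the bundled file may differ slightly):
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }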
2.4 Configure environment variables
For the root user:
[root@hadoop04 k8s-images]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@hadoop04 k8s-images]# source ~/.bash_profile
For non-root users (I have not tried the non-root steps):
[root@hadoop04 k8s-images]# mkdir -p $HOME/.kube
[root@hadoop04 k8s-images]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@hadoop04 k8s-images]# chown $(id -u):$(id -g) $HOME/.kube/config
2.5 Test that the cluster is up
[root@hadoop04 k8s-images]# kubectl version
2.6 Install the network plugin
flannel, calico, weave, or macvlan can be used; here we use flannel (the yaml file is included in the archive).
[root@hadoop04 k8s-images]# kubectl create -f kube-flannel.yml
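To check that the flannel pods come up on each node, you can filter the kube-system pods:
[root@hadoop04 k8s-images]# kubectl get pods -n kube-system | grep flannel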
2.7 Install the dashboard
(The yaml file is included in the archive. Be careful to follow the steps here, otherwise there will be problems.)
The dashboard image loaded earlier is gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.1.
Edit kubernetes-dashboard.yaml and make the following change:
image: reg.qiniu.com/k8s/kubernetes-dashboard-amd64:v1.8.3
change to
image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.1
# Deploy kubernetes-dashboard
[root@hadoop04 k8s-images]# kubectl apply -f kubernetes-dashboard.yaml
# Check the deployment status. If you apply kubernetes-dashboard.yaml without making this change, the STATUS below will be ImagePullBackOff; I resolved that problem with the help of https://www.jianshu.com/p/7159ef208c47
[root@hadoop04 k8s-images]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system kubernetes-dashboard-7d5dcdb6d9-mf6l2 1/1 Running 0 9m
2.8 Create a user
Create a service account. First create a service account named admin-user and place it in the kube-system namespace:
[root@hadoop04 k8s-images]# kubectl create -f admin-user.yaml
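admin-user.yaml is supplied in the archive; its contents are typically the standard dashboard service-account manifest, roughly as follows (a sketch, the bundled file may differ):
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system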
Bind a role. By default, kubeadm already created the admin role when it created the cluster, so we simply bind to it:
[root@hadoop04 k8s-images]# kubectl create -f admin-user-role-binding.yaml
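admin-user-role-binding.yaml typically binds the service account to the built-in cluster-admin ClusterRole, roughly as follows (again a sketch, the bundled file may differ):
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system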
Get the token. Now we need to find the token of the newly created user so we can log in to the dashboard:
[root@hadoop04 k8s-images]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
The output looks similar to:
Name: admin-user-token-qrj82
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=admin-user
kubernetes.io/service-account.uid=6cd60673-4d13-11e8-a548-00155d000529
Type: kubernetes.io/service-account-token
Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLXFyajgyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2Y2Q2MDY3My00ZDEzLTExZTgtYTU0OC0wMDE1NWQwMDA1MjkiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.C5mjsa2uqJwjscWQ9x4mEsWALUTJu3OSfLYecqpS1niYXxp328mgx0t-QY8A7GQvAr5fWoIhhC_NOHkSkn2ubn0U22VGh2msU6zAbz9sZZ7BMXG4DLMq3AaXTXY8LzS3PQyEOCaLieyEDe-tuTZz4pbqoZQJ6V6zaKJtE9u6-zMBC2_iFujBwhBViaAP9KBbE5WfREEc0SQR9siN8W8gLSc8ZL4snndv527Pe9SxojpDGw6qP_8R-i51bP2nZGlpPadEPXj-lQqz4g5pgGziQqnsInSMpctJmHbfAh7s9lIMoBFW7GVE8AQNSoLHuuevbLArJ7sHriQtDB76_j4fmA
ca.crt: 1025 bytes
namespace: 11 bytes
This token is the code needed to log in to the dashboard web page.
Open https://172.x.x.x:32666 in Firefox, choose token login, and enter the token (if you need to confirm the port, see the service check below).
Note: the URL must use https, otherwise the page cannot be accessed. I resolved this with the help of https://github.com/kubernetes/dashboard/issues/3460
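If you are not sure which port the dashboard is exposed on (32666 above), and assuming the bundled yaml publishes it as a NodePort service, you can look it up with:
[root@hadoop04 k8s-images]# kubectl -n kube-system get svc kubernetes-dashboard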
IV. Joining node machines to the cluster
Use the kubeadm join --xxx command printed earlier by kubeadm init (note: run your own command, not the one here; this is only an example):
[root@hadoop03 ~]# kubeadm join --token 361c68.fbafaa96a5381651 master:6443 --discovery-token-ca-cert-hash sha256:e5e392f4ce66117635431f76512d96824b88816dfdf0178dc497972cf8631a98
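If you lost the join command, the token can be listed again on the master and the CA certificate hash can be recomputed; these are the commonly documented commands (shown here as a sketch):
[root@hadoop04 ~]# kubeadm token list
[root@hadoop04 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'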
V. Checking the cluster
On the master node, check the nodes:
[root@hadoop04 k8s-images]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 3h v1.9.0
node1 Ready <none> 3h v1.9.0
node2 Ready <none> 3h v1.9.0
Check the pods:
[root@hadoop04 k8s-images]# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-hadoop04 1/1 Running 0 1d
kube-system kube-apiserver-hadoop04 1/1 Running 0 1d
kube-system kube-controller-manager-hadoop04 1/1 Running 0 1d
kube-system kube-dns-6f4fd4bdf-2sll4 3/3 Running 0 1d
kube-system kube-flannel-ds-4xp4v 1/1 Running 0 1d
kube-system kube-flannel-ds-8wfw4 1/1 Running 0 23h
kube-system kube-proxy-7gzw9 1/1 Running 0 1d
kube-system kube-proxy-phnm8 1/1 Running 0 23h
kube-system kube-scheduler-hadoop04 1/1 Running 0 1d
kube-system kubernetes-dashboard-7b7b5cd79b-ctz85 1/1 Running 0 23h
When the cluster is set up, a kubernetes-admin user with its key pair and certificates is generated by default; the certificate data is stored in the configuration file /etc/kubernetes/admin.conf on the master node.
A kubectl config needs three pieces of certificate data, all of which can be found in admin.conf:
certificate-authority-data
client-certificate-data
client-key-data
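For reference, these three fields sit in the kubeconfig roughly as shown below (a trimmed sketch of the admin.conf layout; the base64 values are placeholders):
apiVersion: v1
kind: Config
clusters:
- name: kubernetes
  cluster:
    certificate-authority-data: <base64 CA certificate>
    server: https://master:6443
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <base64 client certificate>
    client-key-data: <base64 client key>
contexts:
- name: kubernetes-admin@kubernetes
  context:
    cluster: kubernetes
    user: kubernetes-admin
current-context: kubernetes-admin@kubernetes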
The overall offline-installation approach comes from: https://blog.csdn.net/wylfengyujiancheng/article/details/81102801
VI. Problems encountered
1. A node could not join the cluster and kept reporting connection refused
The node could not ping the master node; I could not resolve this for the time being, so I switched to a different machine as the master.
2. kubeadm init on the master node hung for a very long time and then failed
kubectl version also reported connection refused, and the workaround was again to switch to a different master node. I am still a beginner and have not found the cause yet; with 20 nodes available, there are plenty of machines to switch to.