System planning
Hostname   IP address   OS           Role     Software version   Notes
master     10.0.0.11    CentOS 7.6   master   k8s 1.14.2         docker-ce 18.09.5
node1      10.0.0.12    CentOS 7.6   node     k8s 1.14.2         docker-ce 18.09.5
node2      10.0.0.13    CentOS 7.6   node     k8s 1.14.2         docker-ce 18.09.5
The flannel network plugin is used for pod networking and is deployed from the master node. I also reuse the Harbor registry set up earlier.
Hosts file:
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.0.11 ntpserver master k8s-master
10.0.0.12 node1 k8s-node1
10.0.0.13 node2 k8s-node2
10.0.0.6 harbor.zmjcd.cc
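The same hosts entries are needed on every node. A minimal sketch for copying the file out from the master, assuming root SSH access to the hostnames in the table above:
for h in node1 node2; do
  scp /etc/hosts root@${h}:/etc/hosts   # push the master's hosts file to each worker
done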
1. Configure the NTP service
NTP server (master, 10.0.0.11):
Install the package:
# yum install -y chrony
Edit the /etc/chrony.conf configuration file.
Point the first upstream time server at the Beijing University of Posts and Telecommunications NTP server:
# sed -i 's/server 0.centos.pool.ntp.org iburst/server s1a.time.edu.cn iburst/' /etc/chrony.conf
server s1a.time.edu.cn iburst
Allow the cluster subnet to synchronize time from this server:
# sed -i 's/#allow 192\.168\/16/allow 10.0.0.0\/16/' /etc/chrony.conf
allow 10.0.0.0/16
Complete server configuration file. Write it in one step (afterwards, egrep -v '^#|^$' /etc/chrony.conf shows the effective settings):
cat > /etc/chrony.conf << EOF
server s1a.time.edu.cn iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
hwtimestamp *
allow 10.0.0.0/16
local stratum 10
logdir /var/log/chrony
EOF
Enable and start the chronyd service:
# systemctl enable chronyd.service; systemctl start chronyd.service
Verify time synchronization:
# chronyc sources
210 Number of sources = 4
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^- ntp5.flashdance.cx 2 8 177 113 +10ms[ +10ms] +/- 216ms
^- ntp6.flashdance.cx 2 7 177 242 +12ms[ +12ms] +/- 209ms
^- montreal.ca.logiplex.net 2 7 357 54 -4901us[-4901us] +/- 231ms
^* 203.107.6.88 2 8 377 123 -911us[-1157us] +/- 39ms
# timedatectl status
Local time: Sat 2019-05-18 01:12:46 EDT
Universal time: Sat 2019-05-18 05:12:46 UTC
RTC time: Sat 2019-05-18 05:12:46
Time zone: America/New_York (EDT, -0400)
NTP enabled: yes
NTP synchronized: yes
RTC in local TZ: no
DST active: yes
Last DST change: DST began at
Sun 2019-03-10 01:59:59 EST
Sun 2019-03-10 03:00:00 EDT
Next DST change: DST ends (the clock jumps one hour backwards) at
Sun 2019-11-03 01:59:59 EDT
Sun 2019-11-03 01:00:00 EST
# Set the system time zone to Shanghai
# timedatectl set-timezone Asia/Shanghai
Clients (node1 and node2):
Install the package:
# yum install -y chrony
Configuration file:
cat > /etc/chrony.conf << EOF
server 10.0.0.11 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
EOF
Enable and restart the chronyd service:
# systemctl enable chronyd.service; systemctl restart chronyd.service
# Set the system time zone to Shanghai
# timedatectl set-timezone Asia/Shanghai
# ntpdate ntpserver
18 May 13:32:52 ntpdate[29848]: adjust time server 10.0.0.11 offset -0.238395 sec
Set up automatic synchronization every 5 minutes:
# crontab -e    // edit the crontab
# crontab -l    // list the scheduled jobs
# synchronization time: run once every five minutes
*/5 * * * * /usr/sbin/ntpdate ntpserver > /dev/null 2>&1
Test synchronization:
systemctl enable chronyd.service; systemctl restart chronyd.service
ntpdate ntpserver
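Note that ntpdate may fail with "the NTP socket is in use" while chronyd holds UDP port 123, so the ntpdate cron job above is only a fallback. A sketch of an alternative crontab entry that asks the running chronyd to step the clock instead (assuming chronyd stays enabled):
*/5 * * * * /usr/bin/chronyc makestep > /dev/null 2>&1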
2. Disable firewalld and SELinux on all nodes
# sed -i 's/=enforcing/=disabled/' /etc/sysconfig/selinux
# systemctl stop firewalld;systemctl disable firewalld
# setenforce 0
# hostnamectl set-hostname k8s-master    # on the worker nodes, set k8s-node1 / k8s-node2 instead
3. Install the Kubernetes master
Install Docker (docker-ce):
Install the required system tools:
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
Add the repository information:
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Refresh the cache and install Docker CE:
sudo yum makecache fast
sudo yum -y install docker-ce
Start the Docker service:
sudo service docker start
Create the Docker configuration file:
# cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://fgl80ig9.mirror.aliyuncs.com","http://04be47cf.m.daocloud.io"],
"insecure-registries": ["harbor.zmjcd.cc"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Enable Docker and restart it so the new configuration takes effect:
systemctl enable docker && systemctl restart docker
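Since daemon.json switches the cgroup driver to systemd, it is worth confirming after the restart that Docker picked it up, which avoids cgroup-driver mismatch warnings later during kubeadm init; it should report "Cgroup Driver: systemd":
docker info | grep -i 'cgroup driver'   # verify the active cgroup driver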
Install kubelet (the node agent), kubeadm (the deployment tool) and kubectl (the command-line tool):
Add the Aliyun Kubernetes repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install the basic Kubernetes packages (kubelet will keep restarting until kubeadm init runs; that is expected):
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
setenforce 0
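The repository installs the latest available packages. If you want to guarantee the planned 1.14.2 release, the versions can be pinned explicitly (a sketch, assuming these package versions are still present in the Aliyun mirror):
yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2   # pin to the planned release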
This step can be skipped (optional proxy settings for Docker, left commented out):
# vim /usr/lib/systemd/system/docker.service
# Environment="HTTPS_PROXY=http://www.ik8s.io:10080"
# Environment="NO_PROXY=127.0.0.1/8,10.0.0.0/8"
Restart the services:
# systemctl restart docker && systemctl restart kubelet
Adjust kernel parameters:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
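These echo settings do not survive a reboot; a common way to make them persistent is a sysctl drop-in file (a sketch, file name chosen here is arbitrary):
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   # reload all sysctl configuration files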
List the files installed by the kubelet package:
# rpm -ql kubelet
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/usr/bin/kubelet
/usr/lib/systemd/system/kubelet.service
Check the kubelet version:
# kubelet --version
Kubernetes v1.14.2
Keep swap enabled and have the kubelet ignore the swap warning:
cat > /etc/sysconfig/kubelet << EOF
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
Initialize Kubernetes (this step will fail: the mirrorgooglecontainers repository on Docker Hub does not contain the coredns image, which is published under coredns/ instead):
# kubeadm init --kubernetes-version="v1.14.2" \
--image-repository="mirrorgooglecontainers" \
--pod-network-cidr="10.244.0.0/16" \
--service-cidr="10.96.0.0/12" \
--ignore-preflight-errors="Swap"
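To see exactly which images this kubeadm release expects (and therefore what has to exist in the chosen repository), you can list them first:
# kubeadm config images list --kubernetes-version=v1.14.2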
Manually pull the missing coredns image:
# docker pull coredns/coredns:1.3.1
Log in to the private registry:
# docker login harbor.zmjcd.cc
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Tag the images for Harbor:
docker tag coredns/coredns:1.3.1 harbor.zmjcd.cc/zmj_k8s/coredns:1.3.1
docker tag mirrorgooglecontainers/kube-proxy:v1.14.2 harbor.zmjcd.cc/zmj_k8s/kube-proxy:v1.14.2
docker tag mirrorgooglecontainers/kube-apiserver:v1.14.2 harbor.zmjcd.cc/zmj_k8s/kube-apiserver:v1.14.2
docker tag mirrorgooglecontainers/kube-controller-manager:v1.14.2 harbor.zmjcd.cc/zmj_k8s/kube-controller-manager:v1.14.2
docker tag mirrorgooglecontainers/kube-scheduler:v1.14.2 harbor.zmjcd.cc/zmj_k8s/kube-scheduler:v1.14.2
docker tag mirrorgooglecontainers/etcd:3.3.10 harbor.zmjcd.cc/zmj_k8s/etcd:3.3.10
docker tag mirrorgooglecontainers/pause:3.1 harbor.zmjcd.cc/zmj_k8s/pause:3.1
Push the images:
docker push harbor.zmjcd.cc/zmj_k8s/coredns
docker push harbor.zmjcd.cc/zmj_k8s/kube-proxy
docker push harbor.zmjcd.cc/zmj_k8s/kube-apiserver
docker push harbor.zmjcd.cc/zmj_k8s/kube-controller-manager
docker push harbor.zmjcd.cc/zmj_k8s/kube-scheduler
docker push harbor.zmjcd.cc/zmj_k8s/etcd
docker push harbor.zmjcd.cc/zmj_k8s/pause
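The tag-and-push steps above can also be collapsed into a small loop (a sketch; it reuses the same source images and the zmj_k8s project on Harbor):
REPO=harbor.zmjcd.cc/zmj_k8s
for img in kube-proxy:v1.14.2 kube-apiserver:v1.14.2 kube-controller-manager:v1.14.2 \
           kube-scheduler:v1.14.2 etcd:3.3.10 pause:3.1; do
  docker tag mirrorgooglecontainers/${img} ${REPO}/${img}   # retag for the Harbor project
  docker push ${REPO}/${img}                                # upload to Harbor
done
docker tag coredns/coredns:1.3.1 ${REPO}/coredns:1.3.1
docker push ${REPO}/coredns:1.3.1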
Initialize the cluster again, this time pulling images from the Harbor repository:
# kubeadm init --kubernetes-version="v1.14.2" \
--image-repository="harbor.zmjcd.cc/zmj_k8s" \
--pod-network-cidr="10.244.0.0/16" \
--service-cidr="10.96.0.0/12" \
--ignore-preflight-errors="Swap"
You should see output like the following:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.0.11:6443 --token 972arr.vkfwnusgewhrdw9x \
--discovery-token-ca-cert-hash sha256:e4b7fe2ae34865bf71132ef7b6f8d9dd99a9ef890d914b8a1a72225ae5c1c558
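The bootstrap token in the join command above expires after 24 hours by default; if a node joins later, a fresh join command can be printed on the master:
# kubeadm token create --print-join-command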
Check the master node status.
Check component status:
# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
Check node status:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 29m v1.14.2
Install flannel (https://github.com/coreos/flannel).
The upstream instructions for Kubernetes v1.7+ are:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
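The kube-flannel.yml manifest defaults to the 10.244.0.0/16 pod network, which is why --pod-network-cidr was set to the same range; to double-check what the DaemonSet is actually using, inspect the ConfigMap created above:
# kubectl -n kube-system get configmap kube-flannel-cfg -o yaml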
Check node status again:
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 37m v1.14.2
Check pod status:
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-56584c9484-lfhg4 1/1 Running 0 38m
coredns-56584c9484-llj59 1/1 Running 0 38m
etcd-k8s-master 1/1 Running 0 37m
kube-apiserver-k8s-master 1/1 Running 0 37m
kube-controller-manager-k8s-master 1/1 Running 0 37m
kube-flannel-ds-amd64-bq88w 1/1 Running 0 4m13s
kube-proxy-glnr9 1/1 Running 0 38m
kube-scheduler-k8s-master 1/1 Running 0 37m
List namespaces:
# kubectl get ns
NAME STATUS AGE
default Active 39m
kube-node-lease Active 39m
kube-public Active 39m
kube-system Active 39m
4. Install the worker nodes
Install docker-ce:
Install the required system tools:
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
Add the repository information:
sudo yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Refresh the cache and install Docker CE:
sudo yum makecache fast
sudo yum -y install docker-ce
Start the Docker service:
sudo service docker start
Create the Docker configuration file:
# cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://fgl80ig9.mirror.aliyuncs.com","http://04be47cf.m.daocloud.io"],
"insecure-registries": ["harbor.zmjcd.cc"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Enable Docker and restart it so the new configuration takes effect:
systemctl enable docker && systemctl restart docker
Install kubelet (the node agent), kubeadm (the deployment tool) and kubectl (the command-line tool):
Add the Aliyun Kubernetes repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install the basic Kubernetes packages:
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
setenforce 0
Adjust kernel parameters:
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
Keep swap enabled and have the kubelet ignore the swap warning:
# cat > /etc/sysconfig/kubelet << EOF
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
EOF
Join the cluster:
kubeadm join 10.0.0.11:6443 --token 972arr.vkfwnusgewhrdw9x \
--discovery-token-ca-cert-hash sha256:e4b7fe2ae34865bf71132ef7b6f8d9dd99a9ef890d914b8a1a72225ae5c1c558 \
--ignore-preflight-errors="Swap"
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Load the required images on the node:
# docker load -i flannel-0-11-0-amd64.tar.gz
7bff100f35cb: Loading layer [==================================================>] 4.672MB/4.672MB
5d3f68f6da8f: Loading layer [==================================================>] 9.526MB/9.526MB
9b48060f404d: Loading layer [==================================================>] 5.912MB/5.912MB
3f3a4ce2b719: Loading layer [==================================================>] 35.25MB/35.25MB
9ce0bb155166: Loading layer [==================================================>] 5.12kB/5.12kB
Loaded image: quay.io/coreos/flannel:v0.11.0-amd64
# docker load -i kube-proxy\:v1.14.2.tar.gz
fe9a8b4f1dcc: Loading layer [==================================================>] 43.87MB/43.87MB
15c9248be8a9: Loading layer [==================================================>] 3.403MB/3.403MB
050daa4add0c: Loading layer [==================================================>] 36.69MB/36.69MB
Loaded image: harbor.zmjcd.cc/zmj_k8s/kube-proxy:v1.14.2
# docker load -i pause-3-1.tar.gz
e17133b79956: Loading layer [==================================================>] 744.4kB/744.4kB
Loaded image: harbor.zmjcd.cc/zmj_k8s/pause:3.1
Re-tag the images:
# docker tag harbor.zmjcd.cc/zmj_k8s/kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2
# docker tag harbor.zmjcd.cc/zmj_k8s/pause:3.1 k8s.gcr.io/pause:3.1
# docker tag quay.io/coreos/flannel:v0.11.0-amd64 k8s.gcr.io/flannel:v0.11.0-amd64
Check the cluster status (on the master):
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 81m v1.14.2
k8s-node1 Ready <none> 20m v1.14.2
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-56584c9484-lfhg4 1/1 Running 0 81m
coredns-56584c9484-llj59 1/1 Running 0 81m
etcd-k8s-master 1/1 Running 0 80m
kube-apiserver-k8s-master 1/1 Running 0 80m
kube-controller-manager-k8s-master 1/1 Running 0 80m
kube-flannel-ds-amd64-7qjvm 1/1 Running 1 21m
kube-flannel-ds-amd64-bq88w 1/1 Running 0 47m
kube-proxy-glnr9 1/1 Running 0 81m
kube-proxy-rw8dh 1/1 Running 0 21m
kube-scheduler-k8s-master 1/1 Running 0 80m
# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-56584c9484-lfhg4 1/1 Running 0 81m 10.244.0.3 k8s-master <none> <none>
coredns-56584c9484-llj59 1/1 Running 0 81m 10.244.0.2 k8s-master <none> <none>
etcd-k8s-master 1/1 Running 0 81m 10.0.0.11 k8s-master <none> <none>
kube-apiserver-k8s-master 1/1 Running 0 81m 10.0.0.11 k8s-master <none> <none>
kube-controller-manager-k8s-master 1/1 Running 0 80m 10.0.0.11 k8s-master <none> <none>
kube-flannel-ds-amd64-7qjvm 1/1 Running 1 21m 10.0.0.12 k8s-node1 <none> <none>
kube-flannel-ds-amd64-bq88w 1/1 Running 0 47m 10.0.0.11 k8s-master <none> <none>
kube-proxy-glnr9 1/1 Running 0 81m 10.0.0.11 k8s-master <none> <none>
kube-proxy-rw8dh 1/1 Running 0 21m 10.0.0.12 k8s-node1 <none> <none>
kube-scheduler-k8s-master 1/1 Running 0 81m 10.0.0.11 k8s-master <none> <none>
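The "<none>" under ROLES for worker nodes is only cosmetic; if you prefer that column filled in, the node can be labeled (optional; the label key below is just a convention, repeat for k8s-node2 once it has joined):
# kubectl label node k8s-node1 node-role.kubernetes.io/worker=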