Installing k8s with kubeadm on a local network
Host list:
ip | hostname | role | cpu | memory |
192.168.23.100 | k8smaster | master | 2 cores | 2G |
192.168.23.101 | k8snode01 | node | 2 cores | 2G |
192.168.23.102 | k8snode02 | node | 2 cores | 2G |
1. Configure the local yum repository
yum repository package:
Link: https://pan.baidu.com/s/1KAYWlw5Ky2ESUEZVsphQ0Q
Copy yum.repo into /etc/yum.repos.d/ on every node to configure the local yum repository.
[root@k8smaster yum.repos.d]# more yum.repo
[soft]
name=base
baseurl=http://192.168.23.100/yum
gpgcheck=0
[root@k8smaster yum.repos.d]# scp yum.repo 192.168.23.102:/etc/yum.repos.d/
root@192.168.23.102's password:
yum.repo 100% 63 0.1KB/s 00:00
[root@k8smaster yum.repos.d]# scp yum.repo 192.168.23.101:/etc/yum.repos.d/
root@192.168.23.101's password:
yum.repo
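The baseurl above points at http://192.168.23.100/yum, so the master also has to serve the RPMs from the package linked above over HTTP. A minimal sketch, assuming httpd and createrepo are used and the RPMs are unpacked to /var/www/html/yum (the paths are illustrative):
yum install -y httpd createrepo        # assumes these RPMs are available offline
mkdir -p /var/www/html/yum
cp /path/to/unpacked/rpms/*.rpm /var/www/html/yum/   # placeholder path for the downloaded package
createrepo /var/www/html/yum           # generate the repodata metadata
systemctl enable --now httpd
After that, yum clean all && yum repolist on each node should list the soft repository.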
2. Edit /etc/hosts
[root@k8smaster yum.repos.d]# cat >> /etc/hosts << EOF
> 192.168.23.100 k8smaster
> 192.168.23.101 k8snode01
> 192.168.23.102 k8snode02
> EOF
[root@k8smaster yum.repos.d]#
3. Install dependencies
yum install -y conntrack ntpdate ntp ipvsadm ipset iptables curl sysstat libseccomp wget vim net-tools git iproute lrzsz bash-completion tree bridge-utils unzip bind-utils gcc
4. Disable SELinux
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
5. Disable firewalld, switch to iptables, and set an empty rule set
#stop firewalld and disable it at boot
systemctl stop firewalld && systemctl disable firewalld
#install iptables-services, start it, enable it at boot, flush the rules, and save the empty rule set as the default
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
6. Disable swap
#turn off swap now and comment it out of /etc/fstab so it stays off permanently
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
7. Tune kernel parameters for k8s
cat > kubernetes.conf <<EOF
#pass bridged IPv4 traffic to iptables chains [important]
net.bridge.bridge-nf-call-iptables=1
#pass bridged IPv6 traffic to ip6tables chains [important]
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
#avoid swapping; only use swap when the system would otherwise go OOM
vm.swappiness=0
#do not check whether enough physical memory is available when allocating
vm.overcommit_memory=1
#do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
#disable IPv6 [important]
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
#copy the tuning file into /etc/sysctl.d/ so it is applied at boot
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
#apply the settings immediately
sysctl -p /etc/sysctl.d/kubernetes.conf
8. Set the system time zone
#set the time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
#keep the hardware clock (RTC) in UTC
timedatectl set-local-rtc 0
#restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
9. Disable services that are not needed
#stop and disable the postfix mail service
systemctl stop postfix && systemctl disable postfix
10. Configure how logs are stored
Since CentOS 7 boots with systemd, two logging systems run side by side: the default rsyslogd and systemd-journald.
systemd-journald is the better choice, so we make it the only persistent log store.
1) Create the directory for persistent logs
mkdir /var/log/journal
2) Create the drop-in configuration directory
mkdir /etc/systemd/journald.conf.d
3) Create the configuration file
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
#persist logs to disk
Storage=persistent
#compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
#cap total disk usage at 10G
SystemMaxUse=10G
#cap each journal file at 200M
SystemMaxFileSize=200M
#keep logs for 2 weeks
MaxRetentionSec=2week
#do not forward logs to syslog
ForwardToSyslog=no
EOF
4) Restart systemd-journald to pick up the configuration
systemctl restart systemd-journald
11. Raise the open file limit
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
12. Upgrade the Linux kernel to 4.4
[root@k8smaster yum.repos.d]# yum install kernel-lt.x86_64 -y    # installs 4.4.213-1.el7.elrepo
[root@k8smaster yum.repos.d]# awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (4.4.213-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux, with Linux 3.10.0-123.el7.x86_64
CentOS Linux, with Linux 0-rescue-b7478dd50b1d41a5836a6a670b5cd8c1
[root@k8smaster yum.repos.d]#grub2-set-default 'CentOS Linux (4.4.213-1.el7.elrepo.x86_64) 7 (Core)'
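After setting the default boot entry, reboot each node so the new kernel takes effect, then verify it is running:
reboot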
[root@k8snode01 ~]# uname -a
Linux k8snode01 4.4.213-1.el7.elrepo.x86_64 #1 SMP Wed Feb 5 10:44:50 EST 2020 x86_64 x86_64 x86_64 GNU/Linux
13. Prerequisites for running kube-proxy in ipvs mode
modprobe br_netfilter #load the br_netfilter module
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4 #confirm the modules are loaded
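Loading the modules only prepares the nodes; kube-proxy is actually switched to ipvs through its configuration. A sketch of the KubeProxyConfiguration section that could be appended to the kubeadm-config.yaml used later (the file in the package may already contain it):
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs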
14. Install docker
Install the dependencies:
yum install yum-utils device-mapper-persistent-data lvm2 -y
yum install -y docker-ce #install docker
Create the /etc/docker directory
[ ! -d /etc/docker ] && mkdir /etc/docker
Configure the docker daemon
cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
}
}
EOF
Edit the ExecStart line in the docker.service unit file:
/usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry 0.0.0.0/0 -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375 --containerd=/run/containerd/containerd.sock
# reload systemd, restart docker, and enable it at boot
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
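Because the private registry at 192.168.23.100:5000 is plain HTTP, docker must be told to trust it; the --insecure-registry 0.0.0.0/0 flag above does that globally. A narrower alternative, sketched here, is to declare only that registry in /etc/docker/daemon.json instead of patching the unit file:
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "insecure-registries": ["192.168.23.100:5000"]
}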
15. Run the private registry and import the images
docker run -d -p 5000:5000 --restart=always --name private-docker-registry --privileged=true -v /data/registry:/var/lib/registry 192.168.23.100:5000/registry:v1
flannel network image package:
Link: https://pan.baidu.com/s/1-DYxDoU2X85aobaGFclKfA
Extraction code: nson
k8s base image package:
Link: https://pan.baidu.com/s/17uV90VPXqoaezwccpTj2GQ
Extraction code: 13t3
Import the images
[root@k8smaster k8s_image]# more load_image.sh
#!/bin/bash
# load every docker image archive under /home/zhaiky/k8s_image into the local docker daemon
ls /home/zhaiky/k8s_image | grep -v load > /tmp/image-list.txt
cd /home/zhaiky/k8s_image
for i in $(cat /tmp/image-list.txt)
do
    docker load -i $i
done
rm -f /tmp/image-list.txt
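Run the script on the master once the image archives from the two packages above are unpacked into /home/zhaiky/k8s_image (the directory used inside the script):
bash /home/zhaiky/k8s_image/load_image.sh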
Push the images to the private registry
docker push 192.168.23.100:5000/kube-apiserver:v1.15.1
docker push 192.168.23.100:5000/kube-proxy:v1.15.1
docker push 192.168.23.100:5000/kube-controller-manager:v1.15.1
docker push 192.168.23.100:5000/kube-scheduler:v1.15.1
docker push 192.168.23.100:5000/registry:v1
docker push 192.168.23.100:5000/coreos/flannel:v0.11.0-s390x
docker push 192.168.23.100:5000/coreos/flannel:v0.11.0-ppc64le
docker push 192.168.23.100:5000/coreos/flannel:v0.11.0-arm64
docker push 192.168.23.100:5000/coreos/flannel:v0.11.0-arm
docker push 192.168.23.100:5000/coreos/flannel:v0.11.0-amd64
docker push 192.168.23.100:5000/coredns:1.3.1
docker push 192.168.23.100:5000/etcd:3.3.10
docker push 192.168.23.100:5000/pause:3.1
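These pushes assume the loaded images already carry the 192.168.23.100:5000/ prefix. If an archive loads under its upstream name instead (the k8s.gcr.io prefix here is an assumption), retag it first, for example:
docker tag k8s.gcr.io/kube-apiserver:v1.15.1 192.168.23.100:5000/kube-apiserver:v1.15.1
docker tag k8s.gcr.io/pause:3.1 192.168.23.100:5000/pause:3.1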
16. Install kubeadm, kubelet, and kubectl
yum install -y kubeadm-1.15.1 kubelet-1.15.1 kubectl-1.15.1
systemctl enable kubelet && systemctl start kubelet
17. Enable kubectl command auto-completion
# install and configure bash-completion
yum install -y bash-completion
echo 'source /usr/share/bash-completion/bash_completion' >> /etc/profile
source /etc/profile
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
18. Initialize the master
Configuration file package (contains both kubeadm-config.yaml and kube-flannel.yml):
Link: https://pan.baidu.com/s/1g0G7Ion0n6lERpluNjh_9A
Extraction code: 6pxt
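The exact kubeadm-config.yaml is in the package above; as a rough sketch, a config for this setup would typically pin the version, point image pulls at the private registry, and set the flannel pod subnet (all three values here are assumptions inferred from the rest of this article):
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.1
imageRepository: 192.168.23.100:5000
networking:
  podSubnet: 10.244.0.0/16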
[root@k8smaster ~]# cp /home/zhaiky/kubeadm-config.yaml .
kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
Key lines from the log:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.23.100:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:78c3f1e110ed1f954665ba55a689397c2dc4d35243dc4516dd00b0bac97172f6
19. Install the flannel network plugin
[root@k8smaster ~]# cp /home/zhaiky/kube-flannel.yml .
[root@k8smaster ~]# kubectl create -f kube-flannel.yml
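Once the manifest is applied, the flannel and coredns pods in kube-system should reach Running; a quick check:
kubectl get pods -n kube-system -o wide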
20. Join the worker nodes to the master (run the join command on each worker node)
kubeadm join 192.168.23.100:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:78c3f1e110ed1f954665ba55a689397c2dc4d35243dc4516dd00b0bac97172f6
[root@k8smaster zhaiky]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
[root@k8smaster ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8smaster Ready master 4m58s v1.15.1
k8snode01 NotReady <none> 21s v1.15.1
k8snode02 NotReady <none> 16s v1.15.1
[root@k8smaster ~]#
21. Basic usage
Run an nginx instance on k8s
[root@k8smaster ~]# kubectl run nginx --image=192.168.23.100:5000/nginx:v1 --port=80 --replicas=1
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
[root@k8smaster ~]#
[root@k8smaster ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-5bbb49fb76-xzj6x 1/1 Running 0 59s 10.244.1.2 k8snode01 <none> <none>
[root@k8smaster ~]#
[root@k8smaster ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 2m15s
[root@k8smaster ~]#
[root@k8smaster ~]# curl "http://10.244.1.2"
<title>Welcome to nginx!</title>
[root@k8smaster ~]# kubectl expose deployment nginx --port=80 --type=LoadBalancer
service/nginx exposed
[root@k8smaster ~]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 14h
nginx LoadBalancer 10.99.225.215 <pending> 80:32461/TCP 13s
[root@k8smaster ~]#
[root@k8smaster ~]# curl "http://192.168.23.101:32461"
<title>Welcome to nginx!</title>
[root@k8smaster ~]# curl "http://10.99.225.215"
<title>Welcome to nginx!</title>