k8s Deployment
Features:
1. Lightweight: low resource consumption
2. Open source
3. Elastic scaling
4. Load balancing
A highly available cluster is best run with an odd number of replicas >= 3.
k8s Components
- APISERVER: the unified entry point for all service access
- ControllerManager: maintains the desired number of replicas
- Scheduler: accepts tasks and selects suitable nodes to assign them to
- ETCD: key-value database storing all important K8S cluster state (persistent)
- Kubelet: interacts directly with the container engine to manage the container lifecycle
- Kube-proxy: writes rules into IPTABLES or IPVS to implement service access mapping
Other add-ons:
- COREDNS: provides domain-name-to-IP resolution for the SVCs in the cluster
- DASHBOARD: provides a B/S (browser/server) access layer for the K8S cluster
- INGRESS CONTROLLER: the built-in proxying only covers layer 4; INGRESS adds layer-7 proxying
- FEDERATION: provides unified management of multiple K8S clusters from one federation center
- PROMETHEUS: provides monitoring for the K8S cluster
- ELK: provides a unified log analysis platform for the K8S cluster
k8s Installation and Deployment
Prepare four CentOS 7 hosts
Assign NIC IPs and hostnames as follows:
192.168.2.111 k8s-master01 2vcpu/4G/100G
192.168.2.112 k8s-node1 2vcpu/4G/100G
192.168.2.113 k8s-node2 2vcpu/2G/100G
192.168.2.114 k8s-harbor 2vcpu/1G/100G
Set hostnames and hosts resolution
# hostnamectl set-hostname k8s-node1
Run the matching command on each of the four hosts in turn.
# vi /etc/hosts
192.168.2.111 k8s-master01
192.168.2.112 k8s-node1
192.168.2.113 k8s-node2
192.168.2.114 k8s-harbor
Paste the entries above into the hosts file on every host.
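If you would rather not edit the file four times by hand, a small loop can push it to the other hosts. This is a sketch that assumes root SSH access from k8s-master01 to the other machines (hostnames as defined above):
for h in k8s-node1 k8s-node2 k8s-harbor; do
scp /etc/hosts root@$h:/etc/hosts   #copy the master's hosts file out to each host
done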
Install dependency packages
# yum install epel-release -y
# yum install conntrack ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git -y
Configure the firewall
Stop firewalld, switch to iptables, and flush the rules:
# systemctl stop firewalld && systemctl disable firewalld
# yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
Installed:
iptables-services.x86_64 0:1.4.21-34.el7
Complete!
Created symlink from /etc/systemd/system/basic.target.wants/iptables.service to /usr/lib/systemd/system/iptables.service.
iptables: Saving firewall rules to /etc/sysconfig/iptables: [ OK ]
Disable swap and SELinux
# swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
To disable swap temporarily (and re-enable it):
swapoff -a
swapon -a
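You can check the current swap state with free; the Swap total reads 0 while swap is off:
# free -m | grep -i swap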
Disable swap permanently
vi /etc/fstab # just comment out the entry containing swap
#
# /etc/fstab
# Created by anaconda on Wed Jun 10 20:23:16 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=c4948c51-3d55-406b-8afe-e5a3c1ec9bd3 /boot xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
Tune kernel parameters
# vim /etc/sysctl.d/kubernetes.conf
# required parameters:
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv6.conf.all.disable_ipv6=1
# optional tuning parameters:
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 #avoid using swap; it is only used when the system hits OOM
vm.overcommit_memory=1 #do not check whether physical memory is sufficient before allocating
vm.panic_on_oom=0 #do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
In practice, run:
cat >kubernetes.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv6.conf.all.disable_ipv6=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
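Note: the net.bridge.bridge-nf-call-* keys only exist once the br_netfilter module is loaded (a later step loads it as well). If sysctl -p reports "No such file or directory" for the bridge keys, load the module and rerun:
# modprobe br_netfilter
# sysctl -p /etc/sysctl.d/kubernetes.conf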
Set the system timezone
# timedatectl set-timezone Asia/Shanghai
# timedatectl set-local-rtc 0
# systemctl restart rsyslog crond
Stop services the system does not need
# systemctl stop postfix && systemctl disable postfix
Configure rsyslogd and systemd journald
# mkdir /var/log/journal #directory for persistent logs
# mkdir /etc/systemd/journald.conf.d
# cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
#persist logs to disk
Storage=persistent
#compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
#maximum disk usage 10G
SystemMaxUse=10G
#maximum size per log file 200M
SystemMaxFileSize=200M
#retain logs for 2 weeks
MaxRetentionSec=2week
#do not forward logs to syslog
ForwardToSyslog=no
EOF
# systemctl restart systemd-journald
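To confirm journald is now persisting to disk, check its disk usage; it should report space used under /var/log/journal:
# journalctl --disk-usage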
Upgrade the kernel to 4.4
The stock 3.10.x kernel shipped with CentOS 7 has bugs that make Docker and Kubernetes unstable.
# rpm -Uvh https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
# after installation, check that the matching menuentry in /boot/grub2/grub.cfg contains initrd16; if not, install again
# yum --enablerepo=elrepo-kernel install -y kernel-lt
# set boot to the new kernel; the exact string must be taken from `cat /boot/grub2/grub.cfg | grep menuentry`, otherwise this fails
# grub2-set-default 'CentOS Linux (4.4.231-1.el7.elrepo.x86_64) 7 (Core)'
# reboot
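After the reboot, confirm the new kernel is the one running:
# uname -r   #expect something like 4.4.231-1.el7.elrepo.x86_64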
Prerequisites for enabling ipvs in kube-proxy
Kubernetes needs the br_netfilter module. Enabling this kernel module lets packets traversing the bridge be processed by iptables for filtering and port forwarding, and lets the pods in the cluster communicate with each other.
Run the following command to load the br_netfilter kernel module.
# modprobe br_netfilter
Load the ipvs-related modules at boot
# cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Install docker
# yum install -y yum-utils device-mapper-persistent-data lvm2
# add the Aliyun docker-ce repo
# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# update yum and install docker community edition
# yum update -y && yum install -y docker-ce
# make sure the 4.4 kernel is still the default boot entry
# grub2-set-default 'CentOS Linux (4.4.231-1.el7.elrepo.x86_64) 7 (Core)' && reboot #after the reboot, confirm the kernel version is 4.4
# enable docker at boot and start it
# systemctl start docker && systemctl enable docker
## configure the docker daemon
# cat > /etc/docker/daemon.json << EOF
{
"exec-opts": ["native.cgroupdriver-systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
}
}
EOF
# mkdir -p /etc/systemd/system/docker.service.d
# systemctl daemon-reload && systemctl restart docker && systemctl enable docker
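It is worth verifying that docker picked up the systemd cgroup driver, since the kubelet expects the two to match:
# docker info 2>/dev/null | grep -i cgroup   #expect: Cgroup Driver: systemd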
Install kubeadm (master/worker setup)
Create the repo file
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install
# yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
# systemctl enable kubelet.service
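Before proceeding, confirm the expected versions were installed:
# kubeadm version -o short   #expect v1.15.1
# kubectl version --client --short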
Load the kubeadm images
Upload kubeadm-basic.images.tar.gz to root's home directory.
Extract it
tar -zxf kubeadm-basic.images.tar.gz
Import the images with a script.
vim load-images.sh
Add the following:
#!/bin/bash
ls /root/kubeadm-basic.images > /tmp/image-list.txt
cd /root/kubeadm-basic.images
for i in $(cat /tmp/image-list.txt)
do
docker load -i "$i"
done
rm -f /tmp/image-list.txt
chmod a+x load-images.sh
./load-images.sh
Run the following on the master node (k8s-master01):
Initialize the master node
Print the default init configuration into kubeadm-config.yaml
# kubeadm config print init-defaults > kubeadm-config.yaml
Edit the following:
# vim kubeadm-config.yaml
localAPIEndpoint:
advertiseAddress: 192.168.2.111 #change to the master node IP; note the space after the colon
kubernetesVersion: v1.15.1 #set to the version installed earlier
networking:
dnsDomain: cluster.local
podSubnet: 10.244.0.0/16 #add the default pod subnet used by the flannel network plugin
serviceSubnet: 10.96.0.0/12
#append the following to switch the default proxy mode to ipvs
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
SupportIPVSProxyMode: true
mode: ipvs
Initialize using the config file, capturing a log
#kubeadm init, recording the log output
# kubeadm init --config=kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
The output follows; parts of it are needed later
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.2.111:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:12de2b49090ec0ed543dae58712b0d5657fed04a280563d5a70834ccbbde5450
As the prompt says, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the node status; it shows NotReady because the flannel network is not deployed yet
# kubectl get node #status is NotReady because there is no pod network yet
Deploy the network:
# mkdir -pv install-k8s/{core,plugin}
# mv kubeadm-init.log kubeadm-config.yaml install-k8s/core #kubeadm-init.log and kubeadm-config.yaml should be kept
# mkdir install-k8s/plugin/flannel
# cd install-k8s/plugin/flannel
# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
If the download fails, fetch kube-flannel.yml some other way (raw.githubusercontent.com may be unreachable).
Then create the flannel network from the manifest:
[root@k8s-master01 flannel]# kubectl create -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
Check the result, specifying the kube-system namespace:
[root@k8s-master01 flannel]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-dk24f 1/1 Running 0 4h15m
coredns-5c98db65d4-x54tz 1/1 Running 0 4h15m
etcd-k8s-master01 1/1 Running 0 4h14m
kube-apiserver-k8s-master01 1/1 Running 0 4h14m
kube-controller-manager-k8s-master01 1/1 Running 0 4h14m
kube-flannel-ds-amd64-nld5j 1/1 Running 0 112s
kube-proxy-cfvhm 1/1 Running 0 4h15m
kube-scheduler-k8s-master01 1/1 Running 0 4h14m
Check the node status again; now Ready
[root@k8s-master01 flannel]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 4h17m v1.15.1
Add the worker nodes with the join command printed during init, running it on each node in turn:
kubeadm join 192.168.2.111:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:12de2b49090ec0ed543dae58712b0d5657fed04a280563d5a70834ccbbde5450
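Note that the bootstrap token printed by kubeadm init expires after 24 hours by default. If a node joins later than that, generate a fresh join command on the master:
# kubeadm token create --print-join-command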
Check the node status again
[root@k8s-master01 flannel]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 4h24m v1.15.1
k8s-node1 Ready <none> 59s v1.15.1
k8s-node2 NotReady <none> 11s v1.15.1
[root@k8s-master01 flannel]#
For more detail:
[root@k8s-master01 flannel]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master01 Ready master 4h32m v1.15.1 192.168.2.111 <none> CentOS Linux 7 (Core) 4.4.231-1.el7.elrepo.x86_64 docker://19.3.12
k8s-node1 Ready <none> 8m47s v1.15.1 192.168.2.112 <none> CentOS Linux 7 (Core) 4.4.231-1.el7.elrepo.x86_64 docker://19.3.12
k8s-node2 Ready <none> 7m59s v1.15.1 192.168.2.113 <none> CentOS Linux 7 (Core) 4.4.231-1.el7.elrepo.x86_64 docker://19.3.12
[root@k8s-master01 flannel]#
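Since kube-proxy was switched to ipvs mode, IPVS virtual servers should now exist on the nodes (ipvsadm was installed with the dependency packages earlier):
# ipvsadm -Ln   #entries appear for the cluster services, e.g. 10.96.0.1:443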
Harbor Installation
VM configuration:
Set the hostname to gsl.harbor.com
Update /etc/hosts and sync it to all hosts:
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.2.111 k8s-master01
192.168.2.112 k8s-node1
192.168.2.113 k8s-node2
192.168.2.114 gsl.harbor.com
Install docker
Same steps as above.
After installation, edit /etc/docker/daemon.json on every machine and add the line "insecure-registries": ["https://gsl.harbor.com"] so the self-signed SSL certificate is tolerated; daemon.json on each machine becomes:
{
"exec-opts": ["native.cgroupdriver-systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"insecure-registries": ["https://gsl.harbor.com"]
}
Restart docker so the new registry setting takes effect:
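# systemctl daemon-reload && systemctl restart docker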
Install Harbor
Upload the docker-compose binary
Upload harbor-offline-installer-v1.2.0.tgz
Extract it and edit the configuration
mv docker-compose /usr/local/bin/
chmod a+x /usr/local/bin/docker-compose
tar -zxf harbor-offline-installer-v1.2.0.tgz
mv harbor /usr/local/
cd /usr/local/harbor/
vim harbor.cfg #change the following two parameters
hostname = gsl.harbor.com
ui_url_protocol = https
Create the SSL certificate
Create the /data/cert directory and enter it.
mkdir -p /data/cert && cd /data/cert
Generate the private key and CSR
openssl genrsa -des3 -out server.key 2048 #generate the private key
openssl req -new -key server.key -out server.csr #generate the certificate signing request (CSR)
The process on the harbor host, in /data/cert/:
[root@k8s-harbor cert]# openssl genrsa -des3 -out server.key 2048
Generating RSA private key, 2048 bit long modulus
.................................+++
............+++
e is 65537 (0x10001)
Enter pass phrase for server.key:
Verifying - Enter pass phrase for server.key:
[root@k8s-harbor cert]# ls
server.key
[root@k8s-harbor cert]# openssl req -new -key server.key -out server.csr
Enter pass phrase for server.key:
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:cn
State or Province Name (full name) []:hn
Locality Name (eg, city) [Default City]:xxx
Organization Name (eg, company) [Default Company Ltd]:xxx
Organizational Unit Name (eg, section) []:cx
Common Name (eg, your name or your server's hostname) []:xxx
Email Address []:xxx@sina.com
Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
[root@k8s-harbor cert]# ls
server.csr server.key
[root@k8s-harbor cert]#
Generate the certificate
#back up the key
cp server.key server.key.org
#strip the passphrase from the key
openssl rsa -in server.key.org -out server.key
#generate the self-signed certificate
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
#set permissions on the certificate files
chmod a+x *
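Harbor has to be pointed at these files. In the v1.2 harbor.cfg the SSL parameters default to exactly this location; verify them in your copy, since other versions may use different keys:
ssl_cert = /data/cert/server.crt
ssl_cert_key = /data/cert/server.key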
Finish the installation
Go back to the harbor directory (cd /usr/local/harbor)
#install harbor
./install.sh
Result:
[Step 4]: starting Harbor ...
Creating network "harbor_harbor" with the default driver
Creating harbor-log ... done
Creating registry ... done
Creating harbor-db ... done
Creating harbor-adminserver ... done
Creating harbor-ui ... done
Creating harbor-jobservice ... done
Creating nginx ... done
✔ ----Harbor has been installed and started successfully.----
Now you should be able to visit the admin portal at https://gsl.harbor.com.
For more details, please visit https://github.com/vmware/harbor .
After installation completes, check the running containers:
docker ps -a
Use Harbor
Log in; the default credentials are admin/Harbor12345
Pull an nginx image on node1:
docker pull nginx
#retag it as your own image
docker tag nginx gsl.harbor.com/library/nginx:v2
Push the retagged image to harbor
First log in to the registry:
[root@k8s-node1 ~]# docker login https://gsl.harbor.com
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Push it:
docker push gsl.harbor.com/library/nginx:v2
After the push succeeds, the uploaded image can be seen at https://gsl.harbor.com/.
Test pulling the uploaded image with k8s
Delete the locally pulled nginx image first (so the pull really comes from harbor), then run:
[root@k8s-master01 ~]# kubectl run nginx --generator=run-pod/v1 --image=gsl.harbor.com/library/nginx:v2 --port=80 --replicas=1
pod/nginx created
[root@k8s-master01 ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx 1/1 1 1 6m7s
[root@k8s-master01 ~]#
[root@k8s-master01 ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-84658949d 1 1 1 8m13s
[root@k8s-master01 ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 3m41s
nginx-84658949d-4qccm 1/1 Running 0 8m57s
nginx-deployment-798ffc4b84-rsfpc 1/1 Running 0 6m36s
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 3m55s 10.244.1.3 k8s-node1 <none> <none>
nginx-84658949d-4qccm 1/1 Running 0 9m11s 10.244.1.2 k8s-node1 <none> <none>
nginx-deployment-798ffc4b84-rsfpc 1/1 Running 0 6m50s 10.244.2.2 k8s-node2 <none> <none>
Delete a pod and another is created automatically, keeping the replica count at one.
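You can see this directly, for example using the replica pod name from the listing above:
# kubectl delete pod nginx-84658949d-4qccm
# kubectl get pod   #a new nginx-84658949d-* pod appears in its place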