N60-W2 Building an Nginx Image with a Dockerfile and Setting Up a Kubernetes Cluster
Building an Nginx Image with a Dockerfile
1. Create the docker directory and download the nginx source
mkdir /docker
cd /docker
wget http://nginx.org/download/nginx-1.20.2.tar.gz
2. Create a Dockerfile in the docker directory
FROM centos:centos7.9.2009
LABEL maintainer="407740435@qq.com"
RUN yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel pcre pcre-devel
ADD nginx-1.20.2.tar.gz /opt
WORKDIR /usr/local/nginx
RUN cd /opt/nginx-1.20.2 && ./configure --prefix=/usr/local/nginx && make && make install && rm -rf /opt/nginx-1.20.2
RUN ln -sv /dev/stdout /usr/local/nginx/logs/access.log
RUN ln -sv /dev/stderr /usr/local/nginx/logs/error.log
EXPOSE 80 443
ENTRYPOINT ["/usr/local/nginx/sbin/nginx","-g","daemon off;"]
3. Build the nginx image in the directory containing the Dockerfile
docker build -t hhy_nginx:v1.2 .
4. Run the container
docker run -it -d --name nginx -p 80:80 hhy_nginx:v1.2
5. Verify the container
wget localhost:80
--2022-01-07 00:36:08-- http://localhost/
Resolving localhost (localhost)... ::1, 127.0.0.1
Connecting to localhost (localhost)|::1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 612 [text/html]
Saving to: ‘index.html.3’
index.html.3 100%[=============================================================================>] 612 --.-KB/s in 0s
2022-01-07 00:36:08 (113 MB/s) - ‘index.html.3’ saved [612/612]
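Because the Dockerfile above symlinks access.log and error.log to the container's stdout and stderr, the request from the wget test should also show up in `docker logs`. A quick sketch to confirm (assumes the `nginx` container from step 4 is running):

```shell
# Confirm the container is up
docker ps --filter name=nginx
# The access-log entry for the wget request should appear here,
# thanks to the /dev/stdout symlink in the Dockerfile
docker logs nginx | tail -n 5
```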
Limiting a Container's CPU and Memory
Limit container CPU
--cpus
Limit CPU to 0.1 cores
docker run -it -d --name nginx1 -p 80:80 --cpus 0.1 hhy_nginx:v1.2
Limit container memory
-m, --memory bytes    Memory limit
Limit memory to 256 MB
docker run -it -d --name nginx2 -p 8001:80 -m 256m hhy_nginx:v1.2
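To check that the limits were actually applied, inspect the containers started above (nginx1 and nginx2). Docker stores the CPU limit in nanoCPUs and the memory limit in bytes, so 0.1 cores is 100000000 and 256m is 268435456:

```shell
# CPU limit in nanoCPUs (0.1 cores -> 100000000)
docker inspect nginx1 --format '{{.HostConfig.NanoCpus}}'
# Memory limit in bytes (256m -> 268435456)
docker inspect nginx2 --format '{{.HostConfig.Memory}}'
# Live usage view against the limits
docker stats --no-stream nginx1 nginx2
```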
Functions of the Kubernetes Master and Node Components
kube-apiserver exposes HTTP REST interfaces for create/read/update/delete and watch operations on all Kubernetes resource objects, including pods, services, and replication controllers. The API server serves REST operations and provides the front end to the cluster's shared state, through which all other components interact.
kube-controller-manager is the cluster's internal management and control center. It manages Nodes, Pod replicas, service endpoints (Endpoints), namespaces (Namespaces), service accounts (ServiceAccounts), and resource quotas (ResourceQuotas). When a Node goes down unexpectedly, the controller manager detects it promptly and runs automated remediation to keep the cluster in its desired state.
kube-scheduler is the Kubernetes scheduler; it assigns pods to nodes.
kubelet is the agent component running on every worker node. It watches the pods assigned to its node and performs the following functions:
- Report the node's status to the master
- Receive instructions and create Docker containers for pods
- Prepare the volumes a pod needs
- Return the pods' running status
- Run container health checks on the node
kube-proxy maintains network rules on each host and performs connection forwarding, implementing access to Kubernetes Services. It runs on every node, watches the API server for changes to Service objects, and programs iptables or IPVS rules to forward traffic accordingly.
etcd is the key-value store Kubernetes uses by default; it holds all of the cluster's data.
kubectl is the command-line client tool for managing a Kubernetes cluster.
Network components: Calico, Flannel; DNS service
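On a cluster built with kubeadm (as in the deployment section below), the control-plane components described above run as pods in the kube-system namespace, so they can be inspected directly:

```shell
# kube-apiserver, kube-controller-manager, kube-scheduler, etcd,
# and kube-proxy all appear here as pods on a kubeadm cluster
kubectl -n kube-system get pods -o wide
# Component health summary (deprecated since v1.19, but still informative)
kubectl get componentstatuses
```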
Deploying the Kubernetes Cluster (high availability to be completed)
| Role | Spec | IP | Public IP | Hostname | VIP |
|---|---|---|---|---|---|
| k8s master1, etcd, ansible | 2c4g | 172.16.14.101 | 112.124.29.43 | master1 | |
| k8s node1, etcd | 2c4g | 172.16.14.103 | 47.97.83.117 | node1 | |
| k8s node2, etcd | 2c4g | 172.16.14.104 | 121.40.130.36 | node2 | |
| haproxy1, harbor | 1c2g | 172.16.14.100 | 47.98.143.1 | harbor | |
| SLB load balancer | | 172.16.36.156 | | | |
Environment preparation
1. Provision a load balancer on Alibaba Cloud
2. Install ansible on the control node
yum -y install ansible
3. Set up passwordless SSH login
Run the following commands on the ansible control node 172.16.14.100:
ssh-keygen
ssh-copy-id 172.16.14.101
ssh-copy-id 172.16.14.102
ssh-copy-id 172.16.14.103
ssh-copy-id 172.16.14.104
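The `ansible all` command in the next step assumes the target hosts are listed in the default inventory. A minimal `/etc/ansible/hosts` might look like the following sketch (the group names are illustrative, not from the original setup):

```shell
# Append a minimal inventory; group names are illustrative
cat >> /etc/ansible/hosts << 'EOF'
[k8s_master]
172.16.14.101

[k8s_node]
172.16.14.103
172.16.14.104
EOF
```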
4. Copy the Docker offline installation package to /opt on each host, then install the Docker environment on the master and node hosts via ansible
ansible all -m shell -a "cd /opt && tar -xvf docker-19.03.15-binary-install.tar.gz && ./docker-install.sh"
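A quick sketch to confirm the offline install succeeded on every host before moving on:

```shell
# Server version should print on each host if the install worked
ansible all -m shell -a "docker version --format '{{.Server.Version}}'"
# The docker service should report "active"
ansible all -m shell -a "systemctl is-active docker"
```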
Installing Kubernetes with kubeadm
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install kubelet, kubeadm, and kubectl on all master and node hosts, and enable them at boot
sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
sudo systemctl enable --now kubelet
Change Docker's default cgroup driver to systemd, otherwise kubeadm init will fail
vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
Restart docker
systemctl daemon-reload
systemctl restart docker
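Before running kubeadm init, it is worth verifying that the driver change took effect:

```shell
# Should print: systemd
docker info --format '{{.CgroupDriver}}'
```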
Initialize the master cluster
kubeadm init \
--apiserver-advertise-address=172.16.14.101 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
On success, the master initialization output looks like this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.14.101:6443 --token 77lm1g.gckvxq4ond9r4ojf \
--discovery-token-ca-cert-hash sha256:ec7412004ad0989e716d86e126a93a78714c06150d9ae0e36c6323d6b4946af8
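The bootstrap token printed by kubeadm init expires after 24 hours by default. If it has expired by the time a node needs to join, a fresh join command can be generated on the master:

```shell
# Creates a new token and prints a ready-to-use kubeadm join command
kubeadm token create --print-join-command
```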
Deploy the flannel network component on the master node (the --pod-network-cidr=10.244.0.0/16 passed to kubeadm init above matches flannel's default network)
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
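flannel runs as a DaemonSet, so one pod per node should reach Running before the nodes report Ready. The namespace depends on the manifest version (kube-flannel in recent manifests, kube-system in older ones):

```shell
# One flannel pod per node should be Running
kubectl -n kube-flannel get pods -o wide
# Nodes move from NotReady to Ready once the CNI is up
kubectl get nodes
```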
Run the join command on node1 and node2 to add them to the cluster
```shell
kubeadm join 172.16.14.101:6443 --token 77lm1g.gckvxq4ond9r4ojf \
    --discovery-token-ca-cert-hash sha256:ec7412004ad0989e716d86e126a93a78714c06150d9ae0e36c6323d6b4946af8
```
On the master, check the working status of the nodes
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,master 109m v1.23.1
node1 Ready <none> 15s v1.23.1
node2 Ready <none> 19s v1.23.1
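As a final smoke test, a stock nginx can be deployed and exposed to confirm the cluster schedules and serves traffic (the deployment name `web` here is illustrative):

```shell
# Deploy a stock nginx and expose it via a NodePort
kubectl create deployment web --image=nginx:1.20.2
kubectl expose deployment web --port=80 --type=NodePort
kubectl get pods,svc -l app=web
# Then curl any node's IP on the assigned NodePort to verify
```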