Do not install Docker beforehand; installing Kubernetes pulls in Docker on its own, and a pre-installed Docker can cause a version mismatch.

  1. Install etcd

etcd is a highly available key-value store, similar to ZooKeeper.

yum install etcd -y

Start etcd:

systemctl start etcd
systemctl enable etcd

Check etcd's health:

etcdctl -C http://localhost:2379 cluster-health
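When etcd is healthy, cluster-health prints one line per member plus a summary line. A minimal sketch that checks for the summary (the sample output below is illustrative, not captured from a real cluster; in practice pipe the real `etcdctl -C http://localhost:2379 cluster-health` output into the grep):

```shell
# Check for the "cluster is healthy" summary line in cluster-health output.
# `sample` stands in for the real etcdctl output.
sample='member 8e9e05c52164694d is healthy: got healthy result from http://localhost:2379
cluster is healthy'
printf '%s\n' "$sample" | grep -q '^cluster is healthy$' && echo "etcd OK"
```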
  2. Install Kubernetes
yum install kubernetes -y

Edit /etc/kubernetes/apiserver and remove ServiceAccount from the KUBE_ADMISSION_CONTROL line.

# Start the Master components
systemctl start kube-apiserver
systemctl enable kube-apiserver
systemctl start kube-controller-manager
systemctl enable kube-controller-manager
systemctl start kube-scheduler
systemctl enable kube-scheduler

# Start the Node components
systemctl start kubelet
systemctl enable kubelet
systemctl start kube-proxy
systemctl enable kube-proxy

Check the cluster status:

kubectl get no
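With a single-node setup, the node should show as Ready. A quick sketch for counting Ready nodes from that output (the sample table is illustrative; in practice pipe the real `kubectl get no` output into the awk):

```shell
# Count nodes whose STATUS column is Ready; `sample` stands in for
# the real `kubectl get no` output.
sample='NAME        STATUS    AGE
127.0.0.1   Ready     1d'
printf '%s\n' "$sample" | awk 'NR > 1 && $2 == "Ready" { n++ } END { print n " node(s) Ready" }'
```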
  3. Docker

Kubernetes installs Docker automatically; verify that the installation succeeded:

docker version

If you see Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?, run:

systemctl daemon-reload
systemctl restart docker.service
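That error usually means the daemon is not listening on its Unix socket. A small sketch that checks for the socket before restarting (the default path is Docker's standard socket; the /nonexistent argument below just exercises the failure branch):

```shell
# Return success if the Docker daemon socket exists at the given path
# (defaults to the standard /var/run/docker.sock).
docker_socket_present() {
  [ -S "${1:-/var/run/docker.sock}" ]
}

if docker_socket_present /nonexistent/docker.sock; then
  echo "daemon socket found"
else
  echo "daemon socket missing - restart the docker service"
fi
```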
  4. Install flannel

flannel provides unified management of the pod network across the cluster.

yum install flannel -y

Edit /etc/sysconfig/flanneld and add the following options:

--logtostderr=false --log_dir=/var/log/k8s/flannel/ --etcd-prefix=/atomic.io/network --etcd-endpoints=http://localhost:2379 --iface=<interface-name>

ip addr shows the name of the interface currently in use.
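To avoid typing the wrong name, the interface carrying the default route can also be extracted from `ip route` output. This sketch parses a sample route line (the line itself is illustrative; in practice use `line=$(ip route show default)`):

```shell
# Pull the word after "dev" out of an `ip route show default` line.
line='default via 192.168.1.1 dev eth0 proto dhcp metric 100'
iface=$(printf '%s\n' "$line" | awk '{ for (i = 1; i < NF; i++) if ($i == "dev") { print $(i+1); exit } }')
echo "$iface"
```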

Configure the flanneld key in etcd:

flannel stores its configuration in etcd to keep multiple flannel instances consistent, so set the following key:

etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'

The key /atomic.io/network/config must correspond to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld; if it does not, flanneld will fail to start.
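That correspondence can be sanity-checked mechanically; a sketch comparing the config key against the prefix (both values copied from above):

```shell
# flanneld reads its config from ${FLANNEL_ETCD_PREFIX}/config, so the
# key written with `etcdctl mk` must be the prefix plus "/config".
FLANNEL_ETCD_PREFIX="/atomic.io/network"
CONFIG_KEY="/atomic.io/network/config"

case "$CONFIG_KEY" in
  "$FLANNEL_ETCD_PREFIX"/config) echo "prefix matches" ;;
  *) echo "prefix mismatch - flanneld will fail to start" ;;
esac
```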

  5. Start the services
# On the Master
systemctl enable flanneld
systemctl start flanneld
service docker restart
systemctl restart kube-apiserver
systemctl restart kube-controller-manager
systemctl restart kube-scheduler

# On each Node
systemctl enable flanneld
systemctl start flanneld
service docker restart
systemctl restart kubelet
systemctl restart kube-proxy
  6. Deploy a Spring Boot project on Kubernetes

#1. Package the jar into an image, which requires a Dockerfile:

# Java 8 base image
FROM java:8
# Create a volume mount point at /tmp
VOLUME /tmp
# Copy the jar into the image
ADD demo-0.0.1-SNAPSHOT.jar /demo.jar
# Expose port 8080
EXPOSE 8080
# Command to run when the container starts
ENTRYPOINT ["java","-jar","/demo.jar"]
docker build -t demo .
docker pull demo

The image needs to be pulled once; otherwise the pod will keep failing to start later.
#2. Create the ReplicationController file demo-rc.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: demo
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080

Create the pod:

kubectl create -f demo-rc.yaml 

Check the pod:

kubectl get po
kubectl describe po PODNAME

If you see a warning that redhat-cat.crt does not exist, run:

yum install *rhsm* -y 
touch /etc/rhsm/ca/redhat-uep.pem
After the commands above, restart the kubelet:

systemctl restart kubelet

Then delete and recreate the rc:

kubectl delete rc demo
kubectl create -f demo-rc.yaml 

If a pod gets stuck in Terminating, force-delete it:

kubectl delete pod PODNAME --force --grace-period=0

#3. Create the service file demo-svc.yaml:

apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    # Port exposed outside the node (must be in the range 30000-32767)
    nodePort: 30001
  selector:
    app: demo
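As the comment above notes, nodePort must fall inside the Kubernetes NodePort range (30000-32767 by default); a quick shell check for the value used here:

```shell
# Validate that a nodePort lies in the default NodePort range.
is_valid_nodeport() {
  [ "$1" -ge 30000 ] && [ "$1" -le 32767 ]
}

if is_valid_nodeport 30001; then
  echo "30001 is a valid nodePort"
else
  echo "30001 is out of range"
fi
```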

Create the service:

kubectl create -f demo-svc.yaml 
kubectl get svc

If the service cannot be reached from outside, stop the firewall and allow forwarding:
systemctl stop firewalld
iptables -P FORWARD ACCEPT

Source: https://blog.csdn.net/ysk_xh_521/article/details/81668631?depth_1-utm_source=distribute.pc_relevant_right.none-task-blog-BlogCommendFromBaidu-1&utm_source=distribute.pc_relevant_right.none-task-blog-BlogCommendFromBaidu-1
