Kubernetes (k8s) Cluster Setup
1. Node Planning
The lab was done on local machines, so the nodes in the plan map to the following addresses:
k8s-master1 corresponds to 192.168.8.126
k8s-node1 corresponds to 192.168.8.123
k8s-node2 corresponds to 192.168.8.198
2. Server Initialization
Disable the firewall (run on every node)
systemctl stop firewalld
systemctl disable firewalld
Disable swap (run on every node)
swapoff -a    # temporary; swap comes back after a reboot
vim /etc/fstab    # comment out the swap entry (the last line) to make the change permanent
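Both steps can be scripted; a minimal sketch, assuming the swap entry is the only fstab line containing " swap ":
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab    # comment out the swap mount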
Set each node's hostname (run on every node)
hostnamectl set-hostname <hostname>
Disable SELinux (run on every node)
setenforce 0    # temporary change
vim /etc/selinux/config
Set SELINUX=disabled for the permanent change
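The permanent change can also be made non-interactively; a sketch assuming the stock config file layout:
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # takes effect after reboot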
Configure time synchronization
Pick one node as the time server and make the rest its clients:
k8s-master1 is the time server
the other nodes are the clients
1) Configure k8s-master1
yum install chrony -y
vim /etc/chrony.conf
Modify three settings:
server 127.127.1.0 iburst
allow 192.168.0.0/16
local stratum 10
systemctl restart chronyd
systemctl enable chronyd
2) Configure k8s-node1 and k8s-node2
yum install chrony -y
vim /etc/chrony.conf
server 192.168.x.x iburst    # the time server's address (k8s-master1)
systemctl restart chronyd
systemctl enable chronyd
chronyc sources
In the output, ^? means the source has not synchronized yet; ^* marks the source the clock is synchronized to.
3. Deploying etcd
An etcd cluster needs three members.
Because servers are limited, we install one etcd instance each on master, node1, and node2.
Issue certificates for etcd
Upload the software package to /root/k8sFiles and unpack it.
1) Create a certificate authority (CA)
2) Fill in the CSR form, listing the IPs of the etcd nodes
3) Request a certificate from the CA
cd /root/k8sFiles
tar xvf TLS.tar.gz
cd TLS
./cfssl.sh # copies the cfssl executables into a bin directory
cd etcd
vim server-csr.json # edit the etcd node information: the hosts field must list the etcd host addresses
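For reference, the edited file ends up along these lines; a sketch using this article's node IPs, with key and names blocks mirroring the admin-csr.json shown later in this article (verify against the file shipped in the package):
{
  "CN": "etcd",
  "hosts": [
    "192.168.8.126",
    "192.168.8.123",
    "192.168.8.198"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}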
cat generate_etcd_cert.sh
[root@k8s-master1 etcd]# cat generate_etcd_cert.sh
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
[root@k8s-master1 etcd]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca - # build our own CA
[root@k8s-master1 etcd]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server # issue the etcd certificate
Install etcd
etcd needs three members;
we install one on each of master, node1, and node2.
Note: unpacking produces a systemd unit file and a directory
cd /root/k8sFiles
[root@k8s-master1 k8sFiles]# tar xvf etcd.tar.gz
etcd/
etcd/bin/
etcd/bin/etcd
etcd/bin/etcdctl
etcd/cfg/
etcd/cfg/etcd.conf
etcd/ssl/
etcd/ssl/ca.pem
etcd/ssl/server.pem
etcd/ssl/server-key.pem
etcd.service
[root@k8s-master1 k8sFiles]# mv etcd /opt/    # step implied by the paths used below: put the unpacked directory where the unit file expects it
[root@k8s-master1 k8sFiles]# mv etcd.service /usr/lib/systemd/system/
[root@k8s-master1 k8sFiles]# vim /opt/etcd/cfg/etcd.conf # edit the etcd configuration file
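The fields to adjust are the member name and the addresses. A sketch of what such a packaged etcd.conf usually contains, using this article's IPs (the exact keys and member names come from the package, so check the shipped file):
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.8.126:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.8.126:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.8.126:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.8.126:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.8.126:2380,etcd-2=https://192.168.8.123:2380,etcd-3=https://192.168.8.198:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
On node1 and node2 the same file gets its own ETCD_NAME (etcd-2, etcd-3) and its own IP in the four URL settings.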
[root@k8s-master1 k8sFiles]# \cp -f /root/k8sFiles/TLS/etcd/{ca,server,server-key}.pem /opt/etcd/ssl # copy the certificates into etcd/ssl
[root@k8s-master1 k8sFiles]# scp /usr/lib/systemd/system/etcd.service root@k8s-node1:/usr/lib/systemd/system # copy the master's files to node1 and node2
[root@k8s-master1 k8sFiles]# scp /usr/lib/systemd/system/etcd.service root@k8s-node2:/usr/lib/systemd/system
[root@k8s-master1 k8sFiles]# scp -r /opt/etcd/ root@k8s-node1:/opt
[root@k8s-master1 k8sFiles]# scp -r /opt/etcd/ root@k8s-node2:/opt
Modify the configuration file on node1 and node2 (each node sets its own ETCD_NAME and IP addresses)
node1:
[root@k8s-node1 ~]# vim /opt/etcd/cfg/etcd.conf
[root@k8s-node1 ~]# systemctl start etcd
[root@k8s-node1 ~]# systemctl enable etcd
node2:
[root@k8s-node2 ~]# vim /opt/etcd/cfg/etcd.conf
[root@k8s-node2 ~]# systemctl start etcd
[root@k8s-node2 ~]# systemctl enable etcd
master node:
[root@k8s-master1 bin]# systemctl start etcd
[root@k8s-master1 bin]# systemctl enable etcd
Health check
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.8.126:2379,https://192.168.8.123:2379,https://192.168.8.198:2379" cluster-health
4. Deploying the Master Services
Issue the certificates for the k8s master yourself (same cfssl procedure as for etcd, using the files under /root/k8sFiles/TLS/k8s; details omitted)
[root@k8s-master1 k8sFiles]# tar -xvf k8s-master.tar.gz
[root@k8s-master1 k8sFiles]# mv kube-apiserver.service kube-controller-manager.service kube-scheduler.service /usr/lib/systemd/system
[root@k8s-master1 k8sFiles]# mv kubernetes/ /opt/
[root@k8s-master1 k8sFiles]# cp /root/k8sFiles/TLS/k8s/{ca*pem,server.pem,server-key.pem} /opt/kubernetes/ssl -rvf
Edit the configuration file
[root@k8s-master1 k8sFiles]# vim /opt/kubernetes/cfg/kube-apiserver.conf
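What normally needs editing here are the addresses. A sketch of the relevant flags inside the options string, assuming the layout these packaged configs usually use (verify against the shipped file):
--etcd-servers=https://192.168.8.126:2379,https://192.168.8.123:2379,https://192.168.8.198:2379 \
--bind-address=192.168.8.126 \
--advertise-address=192.168.8.126 \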
Start the master services
[root@k8s-master1 k8sFiles]# systemctl start kube-apiserver
[root@k8s-master1 k8sFiles]# systemctl start kube-scheduler
[root@k8s-master1 k8sFiles]# systemctl start kube-controller-manager
[root@k8s-master1 k8sFiles]# systemctl enable kube-apiserver
[root@k8s-master1 k8sFiles]# systemctl enable kube-scheduler
[root@k8s-master1 k8sFiles]# systemctl enable kube-controller-manager
Verify:
ps aux |grep kube
[root@k8s-master1 k8sFiles]# cp /opt/kubernetes/bin/kubectl /bin/
Configure TLS bootstrapping so that kubelet certificates are issued automatically
[root@k8s-master1 k8sFiles]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper \
> --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
5. Deploying the Worker Nodes
Install Docker:
Copy k8s-node.tar.gz from the master to each node and unpack it; node1 is used as the example here
[root@k8s-node1 ~]# tar xvf k8s-node.tar.gz
cni-plugins-linux-amd64-v0.8.2.tgz
daemon.json
docker-18.09.6.tgz
docker.service
kubelet.service
kube-proxy.service
kubernetes/
kubernetes/bin/
kubernetes/bin/kubelet
kubernetes/bin/kube-proxy
kubernetes/cfg/
kubernetes/cfg/kubelet-config.yml
kubernetes/cfg/bootstrap.kubeconfig
kubernetes/cfg/kube-proxy.kubeconfig
kubernetes/cfg/kube-proxy.conf
kubernetes/cfg/kubelet.conf
kubernetes/cfg/kube-proxy-config.yml
kubernetes/ssl/
kubernetes/logs/
[root@k8s-node1 ~]# mv docker.service /usr/lib/systemd/system
[root@k8s-node1 ~]# mkdir /etc/docker
[root@k8s-node1 ~]# cp daemon.json /etc/docker/
[root@k8s-node1 ~]# tar xf docker-18.09.6.tgz
[root@k8s-node1 ~]# mv docker/* /bin/
[root@k8s-node1 ~]# systemctl start docker.service
[root@k8s-node1 ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
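The daemon.json copied above configures the Docker daemon; in this kind of setup it typically just sets a registry mirror, along these lines (the mirror URL is an example, not necessarily what the package contains):
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}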
Install kubelet and kube-proxy
[root@k8s-node1 ~]# mv kubelet.service kube-proxy.service /usr/lib/systemd/system
[root@k8s-node1 ~]# mv kubernetes/ /opt
Edit the configuration files
[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/kube-proxy-config.yml
change hostnameOverride: k8s-node1
this sets the name of the current host
[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/kubelet.conf
change --hostname-override=k8s-node1 \
this sets the name of the current host
[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
change server: https://192.168.8.126:6443
this is the master's address
[root@k8s-node1 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
change server: https://192.168.8.126:6443
this is the master's address
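The two server lines can also be patched in one go; a sketch:
sed -i 's#server: https://.*:6443#server: https://192.168.8.126:6443#' /opt/kubernetes/cfg/bootstrap.kubeconfig /opt/kubernetes/cfg/kube-proxy.kubeconfig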
Copy the certificates from the master to the worker nodes (shown for node2; do the same for node1)
[root@k8s-master1 YAML]# scp /root/k8sFiles/TLS/k8s/ca.pem /root/k8sFiles/TLS/k8s/kube-proxy.pem /root/k8sFiles/TLS/k8s/kube-proxy-key.pem root@k8s-node2:/opt/kubernetes/ssl/
Start the services
[root@k8s-node1 ~]# systemctl start kubelet.service
[root@k8s-node1 ~]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-node1 ~]# systemctl start kube-proxy.service
[root@k8s-node1 ~]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
Check the services
[root@k8s-node1 ~]# tail -f /opt/kubernetes/logs/kubelet.INFO
I0906 04:55:25.760872 6333 feature_gate.go:216] feature gates: &{map[]}
I0906 04:55:25.760917 6333 feature_gate.go:216] feature gates: &{map[]}
I0906 04:55:26.335240 6333 mount_linux.go:168] Detected OS with systemd
I0906 04:55:26.345006 6333 server.go:410] Version: v1.16.0
I0906 04:55:26.345102 6333 feature_gate.go:216] feature gates: &{map[]}
I0906 04:55:26.345348 6333 feature_gate.go:216] feature gates: &{map[]}
I0906 04:55:26.345637 6333 plugins.go:100] No cloud provider specified.
I0906 04:55:26.345719 6333 server.go:526] No cloud provider specified: "" from the config file: ""
I0906 04:55:26.345861 6333 bootstrap.go:119] Using bootstrap kubeconfig to generate TLS client cert, key and kubeconfig file
I0906 04:55:26.351090 6333 bootstrap.go:150] No valid private key and/or certificate found, reusing existing private key or creating a new one
On the master, issue the worker's certificate; node1's certificate request is visible:
[root@k8s-master1 k8s]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-VcNvIajyry9hbBwvBahcV5G4t7PFUKZEQI84dO1W_4Y 3m10s kubelet-bootstrap Pending
Issue the certificate
[root@k8s-master1 k8s]# kubectl certificate approve node-csr-VcNvIajyry9hbBwvBahcV5G4t7PFUKZEQI84dO1W_4Y
certificatesigningrequest.certificates.k8s.io/node-csr-VcNvIajyry9hbBwvBahcV5G4t7PFUKZEQI84dO1W_4Y approved
Check again; the request is now in the Approved,Issued state
[root@k8s-master1 k8s]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-VcNvIajyry9hbBwvBahcV5G4t7PFUKZEQI84dO1W_4Y 6m29s kubelet-bootstrap Approved,Issued
Once the worker's certificate is issued, the node information becomes visible
[root@k8s-master1 k8s]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-node1 NotReady <none> 5m41s v1.16.0
Note: perform the same steps on node2
6. Installing the Network Plugin
Check that kubelet is configured to use CNI
[root@k8s-node1 ~]# grep "cni" /opt/kubernetes/cfg/kubelet.conf
--network-plugin=cni \
Create the CNI directories (on each worker node)
[root@k8s-node1 ~]# mkdir -pv /opt/cni/bin /etc/cni/net.d
mkdir: created directory ‘/opt/cni’
mkdir: created directory ‘/opt/cni/bin’
mkdir: created directory ‘/etc/cni’
mkdir: created directory ‘/etc/cni/net.d’
Unpack the CNI plugins
[root@k8s-node2 ~]# tar xvf k8s-node.tar.gz
[root@k8s-node1 ~]# tar xvf cni-plugins-linux-amd64-v0.8.2.tgz -C /opt/cni/bin
On the master, apply the YAML manifest; this installs and starts the network plugin on the worker nodes: the image is pulled to each worker and the container runs there.
For example:
kubectl apply -f kube-flannel.yaml
effect: pulls the image and starts the containers
kubectl delete -f kube-flannel.yaml
effect: deletes the resources and stops the containers (the image itself stays cached on the node)
Note: configure a Chinese registry mirror in Docker on the worker nodes; for reference see
https://www.cnblogs.com/fish3yu/p/12616291.html
[root@k8s-master1 k8s]# cd /root/k8sFiles/
[root@k8s-master1 k8sFiles]# ls
etcd etcd.service etcd.tar.gz HA.zip k8s-master.tar.gz k8s-node.tar.gz kubernetes TLS TLS.tar.gz YAML
[root@k8s-master1 k8sFiles]# cd YAML/
[root@k8s-master1 YAML]# ls
apiserver-to-kubelet-rbac.yaml bs.yaml coredns.yaml dashboard-adminuser.yaml dashboard.yaml kube-flannel.yaml
[root@k8s-master1 YAML]# kubectl apply -f kube-flannel.yaml
Check the status
[root@k8s-master1 YAML]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
kube-flannel-ds-amd64-59xw8 0/1 Init:0/1 0 3m50s
kube-flannel-ds-amd64-h79xv 1/1 Running 0 3m50s
[root@k8s-master1 YAML]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready <none> 81m v1.16.0
k8s-node2 NotReady <none> 59m v1.16.0
Authorize the API server to access the kubelet
[root@k8s-master1 YAML]# kubectl apply -f apiserver-to-kubelet-rbac.yaml
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
Inspect the problem node
[root@k8s-master1 YAML]# kubectl describe node k8s-node2
7. Launching an nginx Container
Pull nginx:1.7.9 on each worker node
[root@k8s-node2 ~]# docker pull nginx:1.7.9
On the master, start the container; the containers are created and managed through a Deployment
[root@k8s-master1 ~]# kubectl create deployment myweb --image=nginx:1.7.9
Check
[root@k8s-master1 ~]# kubectl get deployment
Expose the web port
[root@k8s-master1 ~]# kubectl expose deployment myweb --port=80 --type=NodePort
Check the port mapping
[root@k8s-master1 ~]# kubectl get svc
nginx can now be reached at any node's IP address plus the mapped port
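For example, assuming kubectl get svc reported a hypothetical NodePort of 30080:
curl http://192.168.8.123:30080    # any node IP works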
8. Setting Up a Web UI
List namespaces: kubectl get namespaces
There are two options:
official: kubernetes dashboard
third-party: kuboard
Installing the kubernetes dashboard (not shown here)
Installing kuboard
kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml
kubectl apply -f https://addons.kuboard.cn/metrics-server/0.3.7/metrics-server.yaml
Check that kuboard is running
On the master:
[root@k8s-master1 YAML]# kubectl get pods -n kube-system
Check the port kuboard occupies; it is visible on both nodes
[root@k8s-node1 ~]# netstat -tlupn |grep 32567
tcp6 0 0 :::32567 :::* LISTEN 895/kube-proxy
Access the UI:
Get the token
# If you installed Kubernetes following the docs at www.kuboard.cn, run this command on the first master node
echo $(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d)
See the kuboard docs: https://www.kuboard.cn/install/install-dashboard.html#%E5%85%BC%E5%AE%B9%E6%80%A7
9. Installing CoreDNS
Install
cd /root/k8sFiles/YAML/
kubectl apply -f coredns.yaml
Check
[root@k8s-master1 YAML]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d8cfdd59d-7tmlj 1/1 Running 0 18s
kube-flannel-ds-amd64-4qlw2 1/1 Running 2 6d14h
kube-flannel-ds-amd64-68gh4 1/1 Running 3 6d14h
kuboard-5ffbc8466d-dtjlr 1/1 Running 0 76m
metrics-server-78664db96b-5d96s 1/1 Running 0 73m
10. Managing k8s Remotely
By default, k8s can only be managed from the master node.
1) Copy the management command to the nodes (run on the master)
[root@k8s-master1 YAML]# scp /bin/kubectl root@k8s-node1:/bin
[root@k8s-master1 YAML]# scp /bin/kubectl root@k8s-node2:/bin
2) Generate an administrator certificate
[root@k8s-master1 YAML]# cd /root/k8sFiles/TLS/k8s
[root@k8s-master1 k8s]# vim admin-csr.json
Write the following:
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
[root@k8s-master1 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
3) Create the kubeconfig file; --server is the master's IP
Set the cluster parameters
[root@k8s-master1 k8s]# kubectl config set-cluster kubernetes \
> --server=https://192.168.8.126:6443 \
> --certificate-authority=ca.pem \
> --embed-certs=true \
> --kubeconfig=config
Set the client authentication parameters
[root@k8s-master1 k8s]# kubectl config set-credentials cluster-admin \
--certificate-authority=ca.pem \
--embed-certs=true \
--client-key=admin-key.pem \
--client-certificate=admin.pem \
--kubeconfig=config
Set the context parameters
[root@k8s-master1 k8s]# kubectl config set-context default \
> --cluster=kubernetes \
> --user=cluster-admin \
> --kubeconfig=config
Set the default context
[root@k8s-master1 k8s]# kubectl config use-context default --kubeconfig=config
This generates a config file. It will be copied to the nodes in a moment; a node uses this config file to send requests to the master.
[root@k8s-master1 k8s]# ls
admin.csr admin.pem ca-csr.json config kube-proxy-csr.json server.csr server.pem
admin-csr.json ca-config.json ca-key.pem generate_k8s_cert.sh kube-proxy-key.pem server-csr.json
admin-key.pem ca.csr ca.pem kube-proxy.csr kube-proxy.pem server-key.pem
4) Send the generated config file to the nodes
[root@k8s-master1 k8s]# scp config root@k8s-node1:/root
root@k8s-node1's password:
config
[root@k8s-master1 k8s]# scp config root@k8s-node2:/root
root@k8s-node2's password:
config
5) On the two nodes, run kubectl using the config file
[root@k8s-node1 ~]# kubectl get nodes --kubeconfig=config
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready <none> 6d18h v1.16.0
k8s-node2 Ready <none> 6d17h v1.16.0
Running without the flag:
[root@k8s-node1 ~]# mkdir .kube
mkdir: cannot create directory ‘.kube’: File exists
[root@k8s-node1 ~]# mv /root/config /root/.kube/
[root@k8s-node1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-node1 Ready <none> 6d18h v1.16.0
k8s-node2 Ready <none> 6d17h v1.16.0
11. Some Concepts
pod: the containers in a pod share the network and storage namespaces, i.e. they have the same IP, the same port space, and access to the same files. A pod usually groups a set of closely related containers into one service unit, and a group of n pods load-balances the same service; that group is managed by one deployment.
Pods have two characteristics you must know about.
Networking: every Pod is assigned a unique IP address, and every container in the Pod shares the network namespace, including the IP address and network ports. Containers in the same Pod can communicate with each other over localhost. When a container in a Pod needs to communicate with entities outside the Pod, it goes through shared network resources such as ports.
Storage: a Pod can be assigned a set of shared storage volumes that all containers in the Pod can access, letting those containers share data. Volumes also allow data in a Pod to persist, in case one of its containers needs to be restarted.
service: since pod addresses change, a service gives pods a single stable entry point
and provides load balancing across the pods
deployment: creates the specified number of pods and monitors pod health and count
Each pod has an internal IP. A service fronts a group of pods to provide one service and has its own internal IP for talking to the pods; external clients reach the service through a port mapped onto the server's IP.
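As an illustration of the shared network namespace, a minimal sketch of a two-container pod (hypothetical names; not part of this article's deployments):
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo
spec:
  containers:
  - name: web
    image: nginx:1.8
  - name: sidecar
    image: busybox
    # this container can reach nginx at localhost:80, because both containers share the pod's network namespace
    command: ["sh", "-c", "sleep 3600"]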
12. Creating a Deployment
1) Create an nginx pod with one replica, managed by a deployment
Method 1: the command line
[root@k8s-master ~]# kubectl run nginx-dep1 --image=nginx:1.8 --replicas=1
Method 2:
kuboard
Method 3:
a YAML file
[root@k8s-master1 YAML]# vim ngx-dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep3
  labels:
    app: nginx
    type: webservice
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ngx
  template:
    metadata:
      labels:
        app: ngx
        type: webService
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
[root@k8s-master1 YAML]# kubectl apply -f ngx-dep.yaml
Check
[root@k8s-master1 YAML]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-dep1 1/1 1 1 2m16s
ngx-dep3 1/1 1 1 80s
[root@k8s-master1 YAML]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-dep1-6dd5d75f8b-7dsrz 1/1 Running 0 2m54s
ngx-dep3-86c6cf474b-lps7f 1/1 Running 0 118s
See which node it was scheduled onto
[root@k8s-master1 YAML]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-dep1-6dd5d75f8b-7dsrz 1/1 Running 0 3m48s 10.244.1.4 k8s-node2 <none> <none>
ngx-dep3-86c6cf474b-lps7f 1/1 Running 0 2m52s 10.244.1.5 k8s-node2 <none> <none>
13. Troubleshooting Pod Startup
kubectl exec -it <pod-name> -- /bin/bash
kubectl logs <pod-name>
kubectl describe <resource-type> <name>
kubectl get <resource-type>
Resource types:
node
pod
service
deployment
Useful flags: -n <namespace> selects a namespace; -A shows pods in all namespaces
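For example, against the pod created earlier (the name comes from this article's own output):
kubectl logs ngx-dep3-86c6cf474b-lps7f    # container stdout/stderr
kubectl describe pod ngx-dep3-86c6cf474b-lps7f    # events: scheduling, image pulls, restarts
kubectl exec -it ngx-dep3-86c6cf474b-lps7f -- /bin/bash    # open a shell inside the container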
14. Publishing an Application
Each pod has one IP, and it changes over time.
1) Publish the service through a Service object; create the YAML
[root@k8s-master1 YAML]# vim ngx_svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: ngxsvc
  labels:
    app: ngx
spec:
  selector:
    app: ngx
  ports:
  - name: nginx-ports
    protocol: TCP
    # port the service listens on inside the cluster
    port: 80
    # port the service exposes on every node
    nodePort: 32002
    # port on the pod that traffic is forwarded to
    targetPort: 80
  type: NodePort
2) Apply it
[root@k8s-master1 YAML]# kubectl apply -f ngx_svc.yaml
3) Check
[root@k8s-master1 YAML]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 7d1h
ngxsvc NodePort 10.0.0.248 <none> 80:32002/TCP 58s
Check on a node
[root@k8s-node1 ~]# ss -tlupn |grep 32002
tcp LISTEN 0 128 :::32002 :::* users:(("kube-proxy",pid=895,fd=16))
Access it in a browser: any node's IP address works, even though the pod is deployed on only one node
15. Scaling a Service
Scale according to client request traffic.
Check the number of ngx-dep3 pods; there is only one
[root@k8s-master1 YAML]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-dep1-6dd5d75f8b-7dsrz 1/1 Running 0 2m54s
ngx-dep3-86c6cf474b-lps7f 1/1 Running 0 118s
Edit the deployment YAML
[root@k8s-master1 YAML]# vim ngx-dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep3
  labels:
    app: nginx
    type: webservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ngx
  template:
    metadata:
      labels:
        app: ngx
        type: webService
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
Apply it
[root@k8s-master1 YAML]# kubectl apply -f ngx-dep.yaml
Check the ngx-dep3 pod count again
[root@k8s-master1 YAML]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-dep1-6dd5d75f8b-7dsrz 1/1 Running 0 73m
ngx-dep3-86c6cf474b-b7pwt 1/1 Running 0 3m6s
ngx-dep3-86c6cf474b-k76tf 1/1 Running 0 3m5s
ngx-dep3-86c6cf474b-lps7f 1/1 Running 0 72m
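The same result is available without editing the file; kubectl scale is the standard shortcut:
kubectl scale deployment ngx-dep3 --replicas=3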
16. Rolling Updates
Change the nginx version in the deployment YAML file and re-apply it
[root@k8s-master1 YAML]# vim ngx-dep.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ngx-dep3
  labels:
    app: nginx
    type: webservice
spec:
  replicas: 3
  selector:
    matchLabels:
      app: ngx
  template:
    metadata:
      labels:
        app: ngx
        type: webService
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
Apply it
[root@k8s-master1 YAML]# kubectl apply -f ngx-dep.yaml
Check
[root@k8s-master1 YAML]# kubectl describe pod ngx-dep3-6fd578b8fd-9snz9 |grep nginx
nginx:
Image: nginx:1.7.9
Image ID: docker-pullable://nginx@sha256:e3456c851a152494c3e4ff5fcc26f240206abac0c9d794affb40e0714846c451
Normal Pulled 67s kubelet, k8s-node2 Container image "nginx:1.7.9" already present on machine
Normal Created 66s kubelet, k8s-node2 Created container nginx
Normal Started 65s kubelet, k8s-node2 Started container nginx
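The same rollout can also be driven and observed from the command line (standard kubectl subcommands):
kubectl set image deployment/ngx-dep3 nginx=nginx:1.7.9    # triggers the rolling update
kubectl rollout status deployment/ngx-dep3    # watch the rollout progress
kubectl rollout undo deployment/ngx-dep3    # roll back if the new version misbehaves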
17. Continuous Integration (CI)
18. Deploying a Java Application on k8s
https://blog.csdn.net/yemuxiaweiliang/article/details/107300209