Kubernetes Binary Installation
kube-proxy: load balancing; forwards Service traffic to the backend Pods.
kubelet: Kubernetes is a distributed cluster management system; every node runs a worker that manages the lifecycle of containers, and that worker program is the kubelet.
controller-manager: the Controller Manager is the management and control center inside the cluster.
scheduler: the scheduler, which assigns Pods to nodes.
Responsibilities of the Replication Controller
Ensure the cluster runs exactly N Pod replicas, where N is the replica count defined in the RC.
Scale the system up or down by adjusting the spec.replicas value in the RC.
Perform rolling upgrades by changing the Pod template in the RC.
2.2. Replication Controller use cases

Use case | Description | Command |
---|---|---|
Rescheduling | When a node fails or a Pod is terminated unexpectedly, Pods are rescheduled so the cluster keeps running the specified number of replicas. | |
Elastic scaling | Adjust the replication controller's spec.replicas manually or through an autoscaling agent to scale in or out. | kubectl scale |
Rolling update | Create a new RC file and apply it with kubectl or the API; new replicas are added while old ones are removed, and the old RC is deleted once its replica count reaches 0. | kubectl rolling-update |
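For reference, a hedged example of the two commands from the table (the RC name nginx and the file nginx-rc-v2.yaml are illustrative, not part of this installation):
# scale an existing RC to 5 replicas
kubectl scale rc nginx --replicas=5
# rolling update: nginx-rc-v2.yaml must define a new RC with a different name and selector
kubectl rolling-update nginx -f nginx-rc-v2.yaml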
2. Installing Kubernetes
Environment preparation (set the IP address, hostname, and hosts resolution on each machine)
Host | IP | Memory | Software |
---|---|---|---|
k8s-master | 10.0.0.11 | 1G | etcd, api-server, controller-manager, scheduler
k8s-node1 | 10.0.0.12 | 2G | etcd, kubelet, kube-proxy, docker, flannel
k8s-node2 | 10.0.0.13 | 2G | etcd, kubelet, kube-proxy, docker, flannel
k8s-node3 | 10.0.0.14 | 1G | kubelet, kube-proxy, docker, flannel
hosts resolution:
10.0.0.11 k8s-master
10.0.0.12 k8s-node1
10.0.0.13 k8s-node2
10.0.0.14 k8s-node3
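A minimal sketch of applying these settings on every node (the hostname shown is for the master; use the matching name on each machine):
hostnamectl set-hostname k8s-master
cat >> /etc/hosts <<EOF
10.0.0.11 k8s-master
10.0.0.12 k8s-node1
10.0.0.13 k8s-node2
10.0.0.14 k8s-node3
EOF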
Passwordless SSH login
[root@k8s-master ~]# ssh-keygen -t rsa
[root@k8s-master ~]# ls .ssh/
id_rsa id_rsa.pub
[root@k8s-master ~]# scp -rp .ssh root@10.0.0.11:/root
[root@k8s-master ~]# scp -rp .ssh root@10.0.0.12:/root
[root@k8s-master ~]# scp -rp .ssh root@10.0.0.13:/root
[root@k8s-master ~]# scp -rp .ssh root@10.0.0.14:/root
[root@k8s-master ~]# ssh root@10.0.0.12
[root@k8s-master ~]# ssh root@10.0.0.13
[root@k8s-master ~]# ssh root@10.0.0.14
[root@k8s-master ~]# scp /etc/hosts root@10.0.0.12:/etc/hosts
hosts 100% 240 4.6KB/s 00:00
[root@k8s-master ~]# scp /etc/hosts root@10.0.0.13:/etc/hosts
hosts 100% 240 51.4KB/s 00:00
[root@k8s-master ~]# scp /etc/hosts root@10.0.0.14:/etc/hosts
hosts 100% 240 49.2KB/s 00:00
2.1 Issuing certificates
Prepare the certificate tooling.
On node3:
[root@k8s-node3 ~]# mkdir /opt/softs
[root@k8s-node3 ~]# cd /opt/softs
[root@k8s-node3 softs]# ls
cfssl cfssl-certinfo cfssl-json
[root@k8s-node3 softs]# chmod +x /opt/softs/*
[root@k8s-node3 softs]# ln -s /opt/softs/* /usr/bin/
[root@k8s-node3 softs]# mkdir /opt/certs
[root@k8s-node3 softs]# cd /opt/certs
Edit the CA certificate config file
vi /opt/certs/ca-config.json
{
    "signing": {
        "default": {
            "expiry": "175200h"
        },
        "profiles": {
            "server": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            },
            "client": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "client auth"
                ]
            },
            "peer": {
                "expiry": "175200h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
Edit the CA certificate signing request (CSR) config file
vi /opt/certs/ca-csr.json
{
    "CN": "kubernetes-ca",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ],
    "ca": {
        "expiry": "175200h"
    }
}
Generate the CA certificate and private key
[root@k8s-node3 certs]# cfssl gencert -initca ca-csr.json | cfssl-json -bare ca -
2020/09/27 17:20:56 [INFO] generating a new CA key and certificate from CSR
2020/09/27 17:20:56 [INFO] generate received request
2020/09/27 17:20:56 [INFO] received CSR
2020/09/27 17:20:56 [INFO] generating key: rsa-2048
2020/09/27 17:20:56 [INFO] encoded CSR
2020/09/27 17:20:56 [INFO] signed certificate with serial number 409112456326145160001566370622647869686523100724
[root@k8s-node3 certs]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
ca.csr is the certificate signing request file; ca.pem and ca-key.pem are the CA certificate and its private key.
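To inspect the generated CA certificate, the cfssl-certinfo binary downloaded above (or openssl) can be used, for example:
cfssl-certinfo -cert ca.pem
openssl x509 -in ca.pem -noout -text | head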
2.2 Deploying the etcd cluster

Hostname | IP | Role |
---|---|---|
k8s-master | 10.0.0.11 | etcd leader
k8s-node1 | 10.0.0.12 | etcd follower
k8s-node2 | 10.0.0.13 | etcd follower

Issue the certificate for peer communication between the etcd nodes.
[root@k8s-node3 certs]# vi /opt/certs/etcd-peer-csr.json
{
    "CN": "etcd-peer",
    "hosts": [
        "10.0.0.11",
        "10.0.0.12",
        "10.0.0.13"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-peer-csr.json | cfssl-json -bare etcd-peer
2020/09/27 17:29:49 [INFO] generate received request
2020/09/27 17:29:49 [INFO] received CSR
2020/09/27 17:29:49 [INFO] generating key: rsa-2048
2020/09/27 17:29:49 [INFO] encoded CSR
2020/09/27 17:29:49 [INFO] signed certificate with serial number 15140302313813859454537131325115129339480067698
2020/09/27 17:29:49 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@k8s-node3 certs]# ls etcd-peer*
etcd-peer.csr  etcd-peer-csr.json  etcd-peer-key.pem  etcd-peer.pem
# Install the etcd service
On k8s-master, k8s-node1, and k8s-node2:
[root@k8s-master ~]# yum install etcd -y
[root@k8s-node1 ~]# yum install etcd -y
[root@k8s-node2 ~]# yum install etcd -y
# Distribute the certificates
[root@k8s-node3 certs]# scp -rp *.pem root@10.0.0.11:/etc/etcd/
[root@k8s-node3 certs]# scp -rp *.pem root@10.0.0.12:/etc/etcd/
[root@k8s-node3 certs]# scp -rp *.pem root@10.0.0.13:/etc/etcd/
# On the master node
[root@k8s-master ~]# chown -R etcd:etcd /etc/etcd/*.pem
[root@k8s-master etc]# vim /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/"
ETCD_LISTEN_PEER_URLS="https://10.0.0.11:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"
ETCD_NAME="node1"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.11:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.11:2379,http://127.0.0.1:2379"
ETCD_INITIAL_CLUSTER="node1=https://10.0.0.11:2380,node2=https://10.0.0.12:2380,node3=https://10.0.0.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_CLIENT_CERT_AUTH="true"
ETCD_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_AUTO_TLS="true"
ETCD_PEER_CERT_FILE="/etc/etcd/etcd-peer.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/etcd-peer-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/etcd/ca.pem"
ETCD_PEER_AUTO_TLS="true"
[root@k8s-master etc]# scp -rp /etc/etcd/etcd.conf root@10.0.0.12:/etc/etcd/etcd.conf
[root@k8s-master etc]# scp -rp /etc/etcd/etcd.conf root@10.0.0.13:/etc/etcd/etcd.conf
# On node1 and node2, only the following values change
node1:
ETCD_LISTEN_PEER_URLS="https://10.0.0.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.12:2379,http://127.0.0.1:2379"
ETCD_NAME="node2"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.12:2379,http://127.0.0.1:2379"
node2:
ETCD_LISTEN_PEER_URLS="https://10.0.0.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://10.0.0.13:2379,http://127.0.0.1:2379"
ETCD_NAME="node3"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.0.0.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://10.0.0.13:2379,http://127.0.0.1:2379"
# Start etcd on all three nodes at the same time
systemctl start etcd
systemctl enable etcd
# Verify
[root@k8s-master ~]# etcdctl member list
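The overall cluster health can also be checked; the same etcdctl command appears again in the troubleshooting notes at the end of this article:
[root@k8s-master ~]# etcdctl cluster-health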
2.3 Installing the master node
Install the kube-apiserver service.
Upload kubernetes-server-linux-amd64-v1.15.4.tar.gz to node3, then extract it:
[root@k8s-node3 softs]# ls
cfssl cfssl-certinfo cfssl-json kubernetes-server-linux-amd64-v1.15.4.tar.gz
[root@k8s-node3 softs]# tar xf kubernetes-server-linux-amd64-v1.15.4.tar.gz
[root@k8s-node3 softs]# ls
cfssl cfssl-certinfo cfssl-json kubernetes kubernetes-server-linux-amd64-v1.15.4.tar.gz
[root@k8s-node3 softs]# cd /opt/softs/kubernetes/server/bin/
[root@k8s-node3 bin]# scp -rp kube-apiserver kube-controller-manager kube-scheduler kubectl root@10.0.0.11:/usr/sbin/
Issue the client certificate
[root@k8s-node3 bin]# cd /opt/certs/
[root@k8s-node3 certs]# vi /opt/certs/client-csr.json
{
    "CN": "k8s-node",
    "hosts": [
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
Generate the certificate
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client client-csr.json | cfssl-json -bare client
[root@k8s-node3 certs]# ls client*
client.csr client-key.pem
client-csr.json client.pem
Issue the kube-apiserver certificate
[root@k8s-node3 certs]# vi /opt/certs/apiserver-csr.json
{
    "CN": "apiserver",
    "hosts": [
        "127.0.0.1",
        "10.254.0.1",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local",
        "10.0.0.11",
        "10.0.0.12",
        "10.0.0.13"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
# Note: 10.254.0.1 is the first IP of the clusterIP range and is the internal address Pods use to reach the api-server; getting this wrong causes hard-to-debug problems.
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server apiserver-csr.json | cfssl-json -bare apiserver
[root@k8s-node3 certs]# ls apiserver*
apiserver.csr  apiserver-key.pem  apiserver-csr.json  apiserver.pem
Configure the kube-apiserver service
On the master node:
[root@k8s-master ~]# mkdir /etc/kubernetes && cd /etc/kubernetes
[root@k8s-master kubernetes]# scp -rp root@10.0.0.14:/opt/certs/ca*pem .
[root@k8s-master kubernetes]# scp -rp root@10.0.0.14:/opt/certs/apiserver*pem .
[root@k8s-master kubernetes]# scp -rp root@10.0.0.14:/opt/certs/client*pem .
[root@k8s-master kubernetes]# ls
apiserver-key.pem apiserver.pem ca-key.pem ca.pem client-key.pem client.pem
RBAC: Role-Based Access Control
# api-server audit log policy
[root@k8s-master kubernetes]# vi audit.yaml
apiVersion: audit.k8s.io/v1beta1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"

vi /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
[Service]
ExecStart=/usr/sbin/kube-apiserver \
  --audit-log-path /var/log/kubernetes/audit-log \
  --audit-policy-file /etc/kubernetes/audit.yaml \
  --authorization-mode RBAC \
  --client-ca-file /etc/kubernetes/ca.pem \
  --requestheader-client-ca-file /etc/kubernetes/ca.pem \
  --enable-admission-plugins NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota \
  --etcd-cafile /etc/kubernetes/ca.pem \
  --etcd-certfile /etc/kubernetes/client.pem \
  --etcd-keyfile /etc/kubernetes/client-key.pem \
  --etcd-servers https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379 \
  --service-account-key-file /etc/kubernetes/ca-key.pem \
  --service-cluster-ip-range 10.254.0.0/16 \
  --service-node-port-range 30000-59999 \
  --kubelet-client-certificate /etc/kubernetes/client.pem \
  --kubelet-client-key /etc/kubernetes/client-key.pem \
  --log-dir /var/log/kubernetes/ \
  --logtostderr=false \
  --tls-cert-file /etc/kubernetes/apiserver.pem \
  --tls-private-key-file /etc/kubernetes/apiserver-key.pem \
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
[root@k8s-master kubernetes]# mkdir /var/log/kubernetes
[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl start kube-apiserver.service
[root@k8s-master kubernetes]# systemctl enable kube-apiserver.service
[root@k8s-master kubernetes]# kubectl get cs    # check the component status
Install the kube-controller-manager service
[root@k8s-master kubernetes]# vi /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
ExecStart=/usr/sbin/kube-controller-manager \
  --cluster-cidr 172.18.0.0/16 \
  --log-dir /var/log/kubernetes/ \
  --master http://127.0.0.1:8080 \
  --service-account-private-key-file /etc/kubernetes/ca-key.pem \
  --service-cluster-ip-range 10.254.0.0/16 \
  --root-ca-file /etc/kubernetes/ca.pem \
  --logtostderr=false \
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl start kube-controller-manager.service
[root@k8s-master kubernetes]# systemctl enable kube-controller-manager.service
Install the kube-scheduler service
[root@k8s-master kubernetes]# vi /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=kube-apiserver.service
[Service]
ExecStart=/usr/sbin/kube-scheduler \
  --log-dir /var/log/kubernetes/ \
  --master http://127.0.0.1:8080 \
  --logtostderr=false \
  --v 2
Restart=on-failure
[Install]
WantedBy=multi-user.target
[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl start kube-scheduler.service
[root@k8s-master kubernetes]# systemctl enable kube-scheduler.service
Verify the master node
[root@k8s-master kubernetes]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
2.4 Installing the node components
Install the kubelet service.
Issue the certificate on node3:
[root@k8s-node3 bin]# cd /opt/certs/
[root@k8s-node3 certs]# vi kubelet-csr.json
{
    "CN": "kubelet-node",
    "hosts": [
        "127.0.0.1",
        "10.0.0.11",
        "10.0.0.12",
        "10.0.0.13",
        "10.0.0.14",
        "10.0.0.15"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "beijing",
            "L": "beijing",
            "O": "od",
            "OU": "ops"
        }
    ]
}
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=server kubelet-csr.json | cfssl-json -bare kubelet
[root@k8s-node3 certs]# ls kubelet*
kubelet.csr  kubelet-csr.json  kubelet-key.pem  kubelet.pem
# Generate the kubeconfig file needed to start the kubelet
[root@k8s-node3 certs]# ln -s /opt/softs/kubernetes/server/bin/kubectl /usr/sbin/
# Set the cluster parameters
[root@k8s-node3 certs]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/certs/ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.11:6443 \
  --kubeconfig=kubelet.kubeconfig
Cluster "myk8s" set.
# Set the client credentials
[root@k8s-node3 certs]# kubectl config set-credentials k8s-node --client-certificate=/opt/certs/client.pem --client-key=/opt/certs/client-key.pem --embed-certs=true --kubeconfig=kubelet.kubeconfig
User "k8s-node" set.
# Set the context parameters
[root@k8s-node3 certs]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=k8s-node \
  --kubeconfig=kubelet.kubeconfig
Context "myk8s-context" created.
# Switch to the new context
[root@k8s-node3 certs]# kubectl config use-context myk8s-context --kubeconfig=kubelet.kubeconfig
Switched to context "myk8s-context".
# Check the generated kubeconfig file
[root@k8s-node3 certs]# ls kubelet.kubeconfig
kubelet.kubeconfig
On the master node:
[root@k8s-master ~]# vi k8s-node.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: k8s-node
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: k8s-node
[root@k8s-master ~]# kubectl create -f k8s-node.yaml
clusterrolebinding.rbac.authorization.k8s.io/k8s-node created
On node1:
# Install docker-ce (steps omitted)
vim /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
systemctl restart docker.service
systemctl enable docker.service
[root@k8s-node1 ~]# mkdir /etc/kubernetes
[root@k8s-node1 ~]# cd /etc/kubernetes
[root@k8s-node1 kubernetes]# scp -rp root@10.0.0.14:/opt/certs/kubelet.kubeconfig .
[root@k8s-node1 kubernetes]# scp -rp root@10.0.0.14:/opt/certs/ca*pem .
[root@k8s-node1 kubernetes]# scp -rp root@10.0.0.14:/opt/certs/kubelet*pem .
[root@k8s-node1 kubernetes]# scp -rp root@10.0.0.14:/opt/softs/kubernetes/server/bin/kubelet /usr/bin/
[root@k8s-node1 kubernetes]# mkdir /var/log/kubernetes
[root@k8s-node1 kubernetes]# vi /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
ExecStart=/usr/bin/kubelet \
  --anonymous-auth=false \
  --cgroup-driver systemd \
  --cluster-dns 10.254.230.254 \
  --cluster-domain cluster.local \
  --runtime-cgroups=/systemd/system.slice \
  --kubelet-cgroups=/systemd/system.slice \
  --fail-swap-on=false \
  --client-ca-file /etc/kubernetes/ca.pem \
  --tls-cert-file /etc/kubernetes/kubelet.pem \
  --tls-private-key-file /etc/kubernetes/kubelet-key.pem \
  --hostname-override 10.0.0.12 \
  --image-gc-high-threshold 20 \
  --image-gc-low-threshold 10 \
  --kubeconfig /etc/kubernetes/kubelet.kubeconfig \
  --log-dir /var/log/kubernetes/ \
  --pod-infra-container-image t29617342/pause-amd64:3.0 \
  --logtostderr=false \
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@k8s-node1 kubernetes]# systemctl daemon-reload
[root@k8s-node1 kubernetes]# systemctl start kubelet.service
[root@k8s-node1 kubernetes]# systemctl enable kubelet.service
Run the same commands on node2, changing only the IP address:
--hostname-override 10.0.0.13 \
Verify on the master node
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.0.0.12 Ready <none> 15m v1.15.4
10.0.0.13 Ready <none> 16s v1.15.4
Install the kube-proxy service
Issue the certificate on node3:
[root@k8s-node3 ~]# cd /opt/certs/
[root@k8s-node3 certs]# vi /opt/certs/kube-proxy-csr.json
{
"CN": "system:kube-proxy",
"key": {
"algo": "rsa",
"size": 2048
},
"names": [
{
"C": "CN",
"ST": "beijing",
"L": "beijing",
"O": "od",
"OU": "ops"
}
]
}
[root@k8s-node3 certs]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=client kube-proxy-csr.json | cfssl-json -bare kube-proxy-client
[root@k8s-node3 certs]# ls kube-proxy-c*
# Generate the kubeconfig needed to start kube-proxy
[root@k8s-node3 certs]# kubectl config set-cluster myk8s \
  --certificate-authority=/opt/certs/ca.pem \
  --embed-certs=true \
  --server=https://10.0.0.11:6443 \
  --kubeconfig=kube-proxy.kubeconfig
Cluster "myk8s" set.
[root@k8s-node3 certs]# kubectl config set-credentials kube-proxy \
  --client-certificate=/opt/certs/kube-proxy-client.pem \
  --client-key=/opt/certs/kube-proxy-client-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@k8s-node3 certs]# kubectl config set-context myk8s-context \
  --cluster=myk8s \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
Context "myk8s-context" created.
[root@k8s-node3 certs]# kubectl config use-context myk8s-context --kubeconfig=kube-proxy.kubeconfig
Switched to context "myk8s-context".
[root@k8s-node3 certs]# ls kube-proxy.kubeconfig
kube-proxy.kubeconfig
[root@k8s-node3 certs]# scp -rp kube-proxy.kubeconfig root@10.0.0.12:/etc/kubernetes/
[root@k8s-node3 certs]# scp -rp kube-proxy.kubeconfig root@10.0.0.13:/etc/kubernetes/
[root@k8s-node3 bin]# scp -rp kube-proxy root@10.0.0.12:/usr/bin/
[root@k8s-node3 bin]# scp -rp kube-proxy root@10.0.0.13:/usr/bin/
Configure kube-proxy on node1
[root@k8s-node1 ~]# vi /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \
  --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \
  --cluster-cidr 172.18.0.0/16 \
  --hostname-override 10.0.0.12 \
  --logtostderr=false \
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl start kube-proxy.service
[root@k8s-node1 ~]# systemctl enable kube-proxy.service
Run the same commands on node2, changing only the IP address:
--hostname-override 10.0.0.13 \
2.5 Configuring the flannel network
Install flannel on all nodes:
yum install flannel -y
mkdir /opt/certs/
Distribute the certificates from node3
[root@k8s-node3 ~]# cd /opt/certs/
[root@k8s-node3 certs]# scp -rp ca.pem client*pem root@10.0.0.11:/opt/certs/
[root@k8s-node3 certs]# scp -rp ca.pem client*pem root@10.0.0.12:/opt/certs/
[root@k8s-node3 certs]# scp -rp ca.pem client*pem root@10.0.0.13:/opt/certs/
On the master node
Create the flannel key in etcd
# This key defines the Pod IP address range
etcdctl mk /atomic.io/network/config '{ "Network": "172.18.0.0/16","Backend": {"Type": "vxlan"} }'
# Note: this may fail with
#   Error: x509: certificate signed by unknown authority
# Retry a few times until it succeeds.
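To confirm the key was written, it can simply be read back:
etcdctl get /atomic.io/network/config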
Configure and start flannel
vi /etc/sysconfig/flanneld
Line 4:  FLANNEL_ETCD_ENDPOINTS="https://10.0.0.11:2379,https://10.0.0.12:2379,https://10.0.0.13:2379"
Line 8 (unchanged): FLANNEL_ETCD_PREFIX="/atomic.io/network"
Line 11: FLANNEL_OPTIONS="-etcd-cafile=/opt/certs/ca.pem -etcd-certfile=/opt/certs/client.pem -etcd-keyfile=/opt/certs/client-key.pem"

systemctl start flanneld.service
systemctl enable flanneld.service

# Verify
[root@k8s-master ~]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.18.43.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::30d9:50ff:fe47:599e  prefixlen 64  scopeid 0x20<link>
        ether 32:d9:50:47:59:9e  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 8  overruns 0  carrier 0  collisions 0
On node1 and node2
[root@k8s-node1 ~]# vim /usr/lib/systemd/system/docker.service
# Change
#   ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
# to
#   ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
# and add the line
#   ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart docker
# Verify: the docker0 network should now be in the 172.18 range
[root@k8s-node1 ~]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.18.41.1  netmask 255.255.255.0  broadcast 172.18.41.255
        ether 02:42:07:3e:8a:09  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
Verify the Kubernetes cluster installation
[root@k8s-master ~]# kubectl run nginx --image=nginx:1.13 --replicas=2
# Wait a while, then check the Pod status
[root@k8s-master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE   READINESS GATES
nginx-6459cd46fd-8lln4   1/1     Running   0          3m27s   172.18.41.2   10.0.0.12   <none>           <none>
nginx-6459cd46fd-xxt24   1/1     Running   0          3m27s   172.18.96.2   10.0.0.13   <none>           <none>
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --target-port=80 --type=NodePort
service/nginx exposed
[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.254.0.1      <none>        443/TCP        6h46m
nginx        NodePort    10.254.160.83   <none>        80:41760/TCP   3s
# Open http://10.0.0.12:41760 in a browser; if the page loads, the cluster works.
Verification
[root@k8s-node1 kubernetes]# docker load -i docker_alpine3.9.tar.gz
[root@k8s-node1 kubernetes]# docker run -it alpine:3.9
/ # ip add
[root@k8s-node2 kubernetes]# docker load -i docker_nginx1.13.tar.gz
[root@k8s-master ~]# curl -I 10.0.0.12:44473
[root@k8s-master ~]# curl -I 10.0.0.13:44473
3: Common Kubernetes resources
3.1 The Pod resource
A Pod consists of at least two containers: the infrastructure (pause) container plus the business container.
Dynamic Pod: the Pod's YAML is obtained from etcd (via the api-server).
Static Pod: the kubelet reads YAML files from a local directory and starts the Pod itself.
Run on node1:
mkdir /etc/kubernetes/manifest
vim /usr/lib/systemd/system/kubelet.service
# Add one line to the startup parameters
--pod-manifest-path /etc/kubernetes/manifest \
systemctl daemon-reload
systemctl restart kubelet.service
cd /etc/kubernetes/manifest/
vi k8s_pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: static-pod
spec:
containers:
- name: nginx
image: nginx:1.13
ports:
- containerPort: 80
# Verify
[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-6459cd46fd-hg2kq 1/1 Running 1 2d16h
nginx-6459cd46fd-ng9v6 1/1 Running 1 2d16h
oldboy-5478b985bc-6f8gz 1/1 Running 1 2d16h
static-pod-10.0.0.12 1/1 Running 0 21s
3.2 The Secret resource
Secrets hold passwords, keys, and certificates; the stored values are base64 encoded.
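As a quick illustration of the base64 encoding (the secret name db-pass and the password value are only examples):
echo -n 'a123456' | base64           # prints YTEyMzQ1Ng==
kubectl create secret generic db-pass --from-literal=password=a123456
kubectl get secret db-pass -o yaml   # the data field contains the base64-encoded value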
Why Secrets exist
Anyone who has used Kubernetes knows that it offers two mechanisms for dynamic configuration management: one is ConfigMap, and the other is Secret, which is covered here.
When deploying applications on Kubernetes we often need to pass dynamic configuration to the application, such as a database address, username, or password. Without these mechanisms, we would be limited to the following approaches:
- Bake the values into the image, which is inflexible and insecure for sensitive information.
- Pass them in through ENV environment variables in the deployment file, but changing an ENV then requires restarting all containers, which is also inflexible.
- Have the application fetch them from a database or a dedicated configuration center at startup. That works, but it is more complex to implement, and if the storage location changes, every application must be updated.
Why is a Secret safer?
Mainly because Kubernetes takes extra precautions with Secret objects.
- Transport security: in most Kubernetes distributions, communication between users and the API server, and between the API server and the kubelet, is protected by SSL/TLS, so with HTTPS enabled Secrets are protected in transit.
- Storage security: a Secret is sent to and stored on a node only when a Pod that mounts it is scheduled there. It is not written to disk; it is kept in tmpfs, and it is removed once the Pod that depends on it is deleted. Because the data lives in a tmpfs volume it exists only in memory and never reaches the node's disk, which prevents sensitive information from being recovered from the disk.
- Access security: a node may run multiple Pods, each with one or more Secrets, but a Secret is visible only to containers in the Pods that requested to mount it; one Pod cannot access another Pod's Secret.
Method 1:
kubectl create secret docker-registry harbor-secret --namespace=default --docker-username=admin --docker-password=a123456 --docker-server=blog.oldqiang.com

vi k8s_sa_harbor.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: docker-image
  namespace: default
imagePullSecrets:
- name: harbor-secret

vi k8s_pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: static-pod
spec:
  serviceAccount: docker-image
  containers:
    - name: nginx
      image: blog.oldqiang.com/oldboy/nginx:1.13
      ports:
        - containerPort: 80
Method 2:
kubectl create secret docker-registry regcred --docker-server=blog.oldqiang.com --docker-username=admin --docker-password=a123456 --docker-email=296917342@qq.com
# Verify
[root@k8s-master ~]# kubectl get secrets
NAME TYPE DATA AGE
default-token-vgc4l kubernetes.io/service-account-token 3 2d19h
regcred kubernetes.io/dockerconfigjson 1 114s
[root@k8s-master ~]# cat k8s_pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: static-pod
spec:
nodeName: 10.0.0.12
imagePullSecrets:
- name: regcred
containers:
- name: nginx
image: blog.oldqiang.com/oldboy/nginx:1.13
ports:
- containerPort: 80
kubectl get secrets
kubectl get secrets -n kube-tooken
kubectl get secrets regcred -o yaml
3.3 The ConfigMap resource
ConfigMaps store plain-text data and are used as configuration files (they can also be consumed by init containers).
The ConfigMap object provides configuration data to applications running in containers so their behavior can be customized; sensitive configuration such as keys and certificates is usually handled by Secret objects instead. Both store the configuration inside the object and then expose it to Pod resources, for example as a mounted volume, which decouples configuration from the image. A ConfigMap stores its data as key-value pairs that can be consumed by Pod objects or used to configure system components such as controllers. However an application uses the data, you can create ConfigMaps with the same name but different content in different environments, giving Pods that serve the same function different configuration and keeping application and configuration flexibly combined.
vi /opt/81.conf
    server {
        listen       81;
        server_name  localhost;
        root         /html;
        index        index.html index.htm;
        location / {
        }
    }
kubectl create configmap 81.conf --from-file=/opt/81.conf
# Verify
kubectl get cm

vi k8s_deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: nginx-config
          configMap:
            name: 81.conf
            items:
              - key: 81.conf
                path: 81.conf
      containers:
      - name: nginx
        image: nginx:1.13
        volumeMounts:
          - name: nginx-config
            mountPath: /etc/nginx/conf.d
        ports:
        - containerPort: 80
          name: port1
        - containerPort: 81
          name: port2
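One way to check that the ConfigMap was created and mounted (the Pod name is illustrative; use a real name from kubectl get pod):
kubectl get cm 81.conf -o yaml
kubectl exec -it <nginx-pod-name> -- cat /etc/nginx/conf.d/81.conf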
4: Common Kubernetes services
4.1 Deploying the DNS service
rbac
-> Role-Based Access Control
-> In RBAC, permissions are associated with roles, and users obtain those permissions by being made members of the appropriate roles.
-> Roles are either namespaced (Role) or cluster-wide (ClusterRole).
-> Users: sa (ServiceAccount)
-> Role bindings: ClusterRoleBinding and RoleBinding
The DNS service performs name resolution: a Service name resolves to its ClusterIP.
Role: namespaced role; ClusterRole: cluster-wide role.
User: sa (ServiceAccount).
vi coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      nodeName: 10.0.0.13
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        - name: tmp
          mountPath: /tmp
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: tmp
          emptyDir: {}
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.230.254
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

# Test
yum install bind-utils.x86_64 -y
dig @10.254.230.254 kubernetes.default.svc.cluster.local +short
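Once CoreDNS is running, any Service name should resolve the same way; for example, for the nginx Service created earlier (assuming it still exists):
dig @10.254.230.254 nginx.default.svc.cluster.local +short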
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
kubectl delete pod --all
kubectl get all
kubectl delete pod --all
kubectl get nodes
kubectl delete nodes 10.0.0.13
kubectl get nodes
kubectl get all
kubectl describe pod/mysql-t2j5s
kubectl taint node 10.0.0.12 disk-
kubectl get all
kubectl delete -f .
kubectl create -f .
kubectl get all -n kube-system
kubectl get all
kubectl exec -ti myweb-smj7p /bin/bash
The three kinds of IPs in a Kubernetes cluster (Node IP, Pod IP, Cluster IP); the commands below show where each appears.
- Node IP: the IP address of a Node, i.e. the address of its physical NIC.
- Pod IP: the IP address of a Pod, i.e. the Docker container's address; this is a virtual IP.
- Cluster IP: the IP address of a Service; this is also a virtual IP.
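kubectl get nodes -o wide    # Node IP (INTERNAL-IP column)
kubectl get pod -o wide      # Pod IP (IP column)
kubectl get svc              # Cluster IP (CLUSTER-IP column)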
4.2 Deploying the dashboard service
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
vi kubernetes-dashboard.yaml
# Change the image address
    image: registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1
# Change the Service type to NodePort
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 30001
      targetPort: 8443

kubectl create -f kubernetes-dashboard.yaml
# Open https://10.0.0.12:30001 in Firefox

vim dashboard_rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
  name: admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin
  labels:
    k8s-app: kubernetes-dashboard
    addonmanager.kubernetes.io/mode: Reconcile
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
[root@k8s-master dashboard]# kubectl get secrets -n kube-system
NAME TYPE DATA AGE
coredns-token-dtpzv kubernetes.io/service-account-token 3 3h20m
default-token-xpm9f kubernetes.io/service-account-token 3 43h
kubernetes-admin-token-6lz98 kubernetes.io/service-account-token 3 10m
kubernetes-dashboard-admin-token-p7wjc kubernetes.io/service-account-token 3 27m
kubernetes-dashboard-certs Opaque 0 66m
kubernetes-dashboard-key-holder Opaque 2 66m
kubernetes-dashboard-token-lvn7f kubernetes.io/service-account-token 3 66m
[root@k8s-master dashboard]# kubectl describe secrets -n kube-system kubernetes-admin-token-6lz98
kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system
kubectl get pod -n kube-system -o wide
kubectl delete pod -n kube-system kubernetes-dashboard-5dc4c54b55-qpzpf
DASH_TOCKEN='<token printed by the describe command above>'
kubectl config set-cluster kubernetes --server=10.0.0.11:6443 --kubeconfig=/root/dashbord-admin.conf
kubectl config set-credentials admin --token=$DASH_TOCKEN --kubeconfig=/root/dashbord-admin.conf
kubectl config set-context admin --cluster=kubernetes --user=admin --kubeconfig=/root/dashbord-admin.conf
kubectl config use-context admin --kubeconfig=/root/dashbord-admin.conf
[root@k8s-master key]# ls /root/dashbord-admin.conf
/root/dashbord-admin.conf
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWFkbWluLXRva2VuLTZsejk4Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmVybmV0ZXMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlYzQ4NDA3Yy0yMTM4LTQyZWYtOTI0My1kYzMwZDBlMzA0MjciLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZXJuZXRlcy1hZG1pbiJ9.hp9H-lWByhuOMLeaO3Won2Pf4lCpNHDHWeZVVJHpxk-UKQdkSTWXkpoT65aFhRYiE09OoRKxKE6Ba6Jpz8wwQA1vwZzU_YUzTDZOhd1TZWGkPIlL21vDlLvRYhMvU90lpoAiLCmsjxDy0NAMOjUgLb74_K12i9Sqg7dG-Z6XbC-Ay-PKmlCBxXIa-c1YWfQu0CR1hsACszuK5_ZiW7CjGcofrqukdgSEz3adbN9XhH-fIN2Jcbgo4nUOj0JVWlc-Zz5n-ebk0IkIZur5R2iaLlMjwXHwvTHof-TOymvOi5iHu16frbd5NKRg2elXlSKmuvK4IcW26MP5Z_EQUqmGTA
http://10.0.0.12:8080/dashboard/
5: Network access in Kubernetes
[root@k8s-master key]# kubectl get endpoints
NAME         ENDPOINTS          AGE
kubernetes   10.0.0.11:6443     2d1h
mysql        172.18.57.2:3306   7h7m
myweb        172.18.57.3:8080   7h7m
nginx                           26h
[root@k8s-master key]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.254.0.1      <none>        443/TCP          2d1h
mysql        ClusterIP   10.254.1.45     <none>        3306/TCP         7h9m
myweb        NodePort    10.254.31.67    <none>        8080:30008/TCP   7h9m
nginx        NodePort    10.254.151.93   <none>        80:44473/TCP     26h
[root@k8s-master tomcat_demo]# kubectl delete -f mysql-rc.yml
replicationcontroller "mysql" deleted
[root@k8s-master tomcat_demo]# kubectl delete -f mysql-svc.yml
service "mysql" deleted
Accessing the kubernetes Service at ClusterIP 10.254.0.1 is ultimately mapped to the endpoint 10.0.0.11:6443; when a Service has several endpoints this mapping also provides load balancing.
5.1 Service-to-external mapping in Kubernetes
How are an Endpoints object and a Service associated? By sharing the same name (in the same namespace).
# Prepare the database
[root@k8s-node1 ~]#yum install mariadb-server -y
[root@k8s-node1 ~]#systemctl start mariadb
mysql_secure_installation    # secure initialization
[root@k8s-node1 ~]# mysql -uroot -p123456
mysql> grant all on *.* to root@'%' identified by '123456';
[root@k8s-master tomcat_demo]# vim mysql-yingshe.yaml
apiVersion: v1
kind: Endpoints
metadata:
name: mysql
namespace: default
subsets:
- addresses:
- ip: 10.0.0.12
ports:
- name: mysql
port: 3306
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
name: mysql
namespace: default
spec:
ports:
- name: mysql
port: 3306
targetPort: 3306
type: ClusterIP
[root@k8s-master tomcat_demo]# kubectl apply -f mysql-yingshe.yaml    # apply the update
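The Service and the manually created Endpoints can then be checked (both objects are named mysql):
kubectl get svc mysql
kubectl get endpoints mysql
kubectl describe svc mysql    # the Endpoints field should show 10.0.0.12:3306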
http://10.0.0.12:30008/demo/
[root@k8s-master yingshe]# cat mysql_svc.yaml
apiVersion: v1
kind: Service
metadata:
name: mysql
spec:
ports:
- name: mysql
port: 3306
protocol: TCP
targetPort: 3306
type: ClusterIP
# Revisit the tomcat /demo page in the browser
# Verify
[root@k8s-node2 ~]# mysql -e 'show databases;'
+--------------------+
| Database           |
+--------------------+
| information_schema |
| HPE_APP            |
| mysql              |
| performance_schema |
+--------------------+
MariaDB [(none)]> use HPE_APP;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [HPE_APP]> show tables;
+-------------------+
| Tables_in_HPE_APP |
+-------------------+
| T_USERS           |
+-------------------+
1 row in set (0.01 sec)
5.2 kube-proxy in IPVS mode
[root@k8s-node1 ~]# yum install conntrack-tools -y
[root@k8s-node1 ~]# yum install ipvsadm.x86_64 -y
[root@k8s-node1 ~]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
ExecStart=/usr/bin/kube-proxy \
  --bind-address 10.0.0.12 \
  --kubeconfig /etc/kubernetes/kube-proxy.kubeconfig \
  --cluster-cidr 172.18.0.0/16 \
  --hostname-override 10.0.0.12 \
  --proxy-mode ipvs \
  --logtostderr=false \
  --v=2
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart kube-proxy.service
[root@k8s-node1 ~]# ipvsadm -L -n
[root@k8s-node1 ~]# yum provides conntrack    # find which package provides the command
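If kube-proxy falls back to iptables mode, a common cause is missing IPVS kernel modules; a hedged quick check and fix (the module names are the usual ones on CentOS 7):
lsmod | grep ip_vs
for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4; do modprobe $m; done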
5.3 Ingress
Ingress: layer-7 load balancing (HTTP/HTTPS, typically implemented with nginx).
Service: layer-4 load balancing (comparable to LVS).
ingress-controller: the Ingress controller, which itself runs as Pods.
Common implementations: haproxy, nginx, traefik.
What is Ingress
Ingress exposes HTTP and HTTPS routes from outside the cluster to Services inside the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
Below is a simple Ingress example that sends all traffic to a single Service.
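A minimal sketch of such an Ingress (the host name is illustrative; it assumes the nginx Service on port 80 created earlier and an ingress controller already running in the cluster):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: nginx.oldqiang.com        # illustrative host name
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx        # the Service created during cluster verification
          servicePort: 80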
An Ingress can be configured to give Services externally reachable URLs, load-balance traffic, terminate SSL/TLS, and offer name-based virtual hosting. An Ingress controller is usually responsible for fulfilling the Ingress, typically with a load balancer, though it may also configure an edge router or other frontends to help handle the traffic.
[root@k8s-master k8s_yaml]# vim k8s_test.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      hostNetwork: true
      containers:
      - name: my-nginx
        image: nginx:1.13
        ports:
        - containerPort: 80
          hostPort: 80
[root@k8s-master k8s_yaml]# kubectl create -f k8s_test.yaml
daemonset.extensions/nginx-ds created
[root@k8s-master k8s_yaml]# kubectl delete -f k8s_test.yaml
daemonset.extensions "nginx-ds" deleted
Deploy ingress
6: Autoscaling in Kubernetes
Use heapster to implement autoscaling.
Modify kube-controller-manager and add:
--horizontal-pod-autoscaler-use-rest-clients=false
[root@k8s-master heapster]# ls
heapster.yml
[root@k8s-master k8s_yaml]# kubectl get all -A
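With heapster (or another metrics source) running, an HPA can be created for an existing Deployment; a hedged example against the nginx Deployment used earlier (the thresholds are illustrative):
kubectl autoscale deployment nginx --min=1 --max=5 --cpu-percent=50
kubectl get hpa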
7: Dynamic storage
cat nfs-client.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.0.0.13
            - name: NFS_PATH
              value: /data
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.0.0.13
            path: /data

vi nfs-client-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io

vi nfs-client-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs

Modify the PVC configuration:
metadata:
  namespace: tomcat
  name: pvc-01
  annotations:
    volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
Creating a PVC makes the system create the matching PV automatically.
A PV is a cluster-wide (global) resource; a PVC is a namespaced (local) resource.
Dynamic storage: you only declare that you need a PVC, and the system automatically creates a PV and binds it (see the PVC sketch below).
The prerequisite for dynamic storage is working static storage (an NFS share).
Install the NFS utilities on all nodes:
yum -y install nfs-utils
[root@k8s-master ~]# mkdir /data
man exports          # see the exports(5) man page
/no_root_squash      # search for no_root_squash inside the man page
[root@k8s-master ~]# vim /etc/exports
/data 10.0.0.0/24(rw,sync,no_root_squash,no_all_squash)
[root@k8s-master ~]# systemctl start rpcbind
[root@k8s-master ~]# systemctl enable rpcbind
[root@k8s-master ~]# systemctl start nfs
[root@k8s-master ~]# systemctl enable nfs
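The export can be verified from any node (assuming the NFS server is the master at 10.0.0.11, as configured above):
showmount -e 10.0.0.11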
8: Adding compute nodes
Compute-node services: docker, kubelet, kube-proxy, flannel
9: Taints and tolerations
Taints and tolerations mainly assist the scheduling policy.
A taint is an attribute of a node, applied in the same way as a label.
[root@k8s-master ~]# kubectl get nodes --show-labels
NAME        STATUS   ROLES    AGE   VERSION   LABELS
10.0.0.12   Ready    <none>   17h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.12,kubernetes.io/os=linux
10.0.0.13   Ready    <none>   16h   v1.15.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.0.13,kubernetes.io/os=linux
[root@k8s-master ~]# kubectl label nodes 10.0.0.12 node-role.kubernetes.io/node=    # add a label
node/10.0.0.12 labeled
[root@k8s-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
10.0.0.12 Ready node 17h v1.15.4
10.0.0.13 Ready 16h v1.15.4
[root@k8s-master ~]# kubectl label nodes 10.0.0.12 node-role.kubernetes.io/node-    # remove the label
node/10.0.0.12 labeled
Taints: applied to nodes.
A Deployment must specify its replica count.
# Taint effects:
NoSchedule: do not schedule new Pods onto this node; existing Pods are not affected.
PreferNoSchedule: soft rule; try to avoid scheduling onto this node.
NoExecute: evict existing Pods and do not schedule new ones; used when taking a node offline.
# Example: adding a taint
kubectl taint node 10.0.0.12 disk=sshd:NoSchedule
# Check
[root@k8s-master ~]# kubectl describe nodes 10.0.0.12|grep -i taint
Taints: node-role.kubernetes.io=master:NoExecute
Remove the taint
kubectl taint node 10.0.0.12 disk-
NoExecute    # forcibly evicts everything from the node
kubectl taint node 10.0.0.12 disk=sshd:NoExecute
kubectl get pod -o wide
Tolerations
Tolerations are added in the Pod's YAML.
# Add under the Pod spec
tolerations:
- key: "node-role.kubernetes.io"
operator: "Exists"
value: "master"
effect: "NoExecute"
Exercise: taints and tolerations
[root@k8s-master k8s_yaml]# vi k8s_deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 3
template:
metadata:
labels:
app: nginx
spec:
tolerations:
- key: "node-role.kubernetes.io"
operator: "Exists"
value: "master"
effect: "NoExecute"
containers:
- name: nginx
image: 10.0.0.11:5000/nginx:1.13
ports:
- containerPort: 80
[root@k8s-master k8s_yaml]# kubectl taint node 10.0.0.12 disk-
node/10.0.0.12 untainted
[root@k8s-master k8s_yaml]# kubectl taint node 10.0.0.12 disk=ssd:NoSchedule
node/10.0.0.12 tainted
[root@k8s-master k8s_yaml]# kubectl create -f k8s_deploy.yaml
kubectl get nodes -o wide
Fixing an etcd member that will not start
etcdctl cluster-health
etcdctl member list
etcdctl member
etcdctl member remove 55fcbe0adaa45350
etcdctl member list
etcdctl member add
etcdctl member add node3 "https://10.0.0.13:2380"
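After member add, etcd expects the re-added member to join the existing cluster rather than bootstrap a new one; a sketch of the usual follow-up on the affected node (data directory as configured earlier):
systemctl stop etcd
rm -rf /var/lib/etcd/*
# in /etc/etcd/etcd.conf on that node set ETCD_INITIAL_CLUSTER_STATE="existing"
systemctl start etcd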
Force-delete a Pod immediately:
kubectl delete -n kube-system pod traefik-ingress-controller-c9dt2 --force --grace-period 0