k8s Simple Deployment: Hands-on Notes
1. Set the hostnames
##############################################
hostnamectl set-hostname k8s-master; bash
hostnamectl set-hostname k8s-node1; bash
hostnamectl set-hostname k8s-node2; bash

2. Add hostname resolution on each of the three servers
###################################
cat >> /etc/hosts << EOF
192.168.0.124 k8s-master
192.168.0.164 k8s-node1
192.168.0.165 k8s-node2
EOF

3. Run the following on each of the three servers to set up passwordless SSH login
#########################
yum install -y expect    # the loop below relies on expect
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa
for i in 192.168.0.124 192.168.0.164 192.168.0.165 k8s-master k8s-node1 k8s-node2;do
expect -c "
spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@$i
expect {
    \"*yes/no*\" {send \"yes\r\"; exp_continue}
    \"*password*\" {send \"123456\r\"; exp_continue}
    \"*Password*\" {send \"123456\r\";}
}
"
done
#############################################################

4. Deploy the etcd cluster. cfssl is used to generate the self-signed certificates, so download the cfssl tools first.

(1) Install the cfssl tools
mkdir -p /opt/{cfssl,etcd-ssl} && cd /opt/cfssl && \
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 && \
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 && \
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 && \
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

(2) Create the following three files:
# Create ca-config.json
cd /opt/etcd-ssl
cat>ca-config.json<<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

# Create ca-csr.json
cat >ca-csr.json<<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF

# Create server-csr.json, then change the hosts entries to the etcd cluster's own IP addresses
cat>server-csr.json<<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.135.128",
    "192.168.135.129",
    "192.168.135.130"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

# Replace the placeholder IPs with the real ones
sed -i 's/192.168.135.128/192.168.0.124/g' server-csr.json
sed -i 's/192.168.135.129/192.168.0.164/g' server-csr.json
sed -i 's/192.168.135.130/192.168.0.165/g' server-csr.json
cat server-csr.json

Generate the certificates:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca - && \
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

[root@k8s-master etcd-ssl]# ls *pem
ca-key.pem  ca.pem  server-key.pem  server.pem

Install etcd. Binary package download page: https://github.com/coreos/etcd/releases/tag/v3.2.12
The steps below are identical on all three planned etcd nodes; the only difference is that the IPs in each etcd config file must be the current node's own.

Unpack the binary package:
mkdir /opt/etcd/{bin,cfg,ssl,tools} -p
cd /opt/etcd/tools
wget https://github.com/etcd-io/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz
tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/

Create the etcd configuration file:
cat >/opt/etcd/cfg/etcd<<EOF
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.0.196:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.0.196:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.0.196:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.0.196:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.196:2380,etcd02=https://192.168.0.144:2380,etcd03=https://192.168.0.156:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
# On the other nodes, the IPs in the [Member] and [Clustering] *_URLS entries must be that node's own local IP.
# ETCD_NAME                          node name
# ETCD_DATA_DIR                      data directory
# ETCD_LISTEN_PEER_URLS              cluster (peer) listen address
# ETCD_LISTEN_CLIENT_URLS            client listen address
# ETCD_INITIAL_ADVERTISE_PEER_URLS   advertised peer address
# ETCD_ADVERTISE_CLIENT_URLS         advertised client address
# ETCD_INITIAL_CLUSTER               cluster member addresses
# ETCD_INITIAL_CLUSTER_TOKEN         cluster token
# ETCD_INITIAL_CLUSTER_STATE         state when joining: "new" for a new cluster, "existing" to join an existing one
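Before starting etcd it is worth making sure the peer and client ports are reachable between the three nodes; the cluster health check later in this step fails with network timeouts when the firewall is still on. A minimal pre-check sketch (assuming CentOS with firewalld, which the yum/systemctl usage in this walkthrough implies; opening only 2379/2380 is the gentler alternative to disabling the firewall entirely):
# Run on every node: either disable firewalld (what the health-check note later suggests)...
systemctl stop firewalld && systemctl disable firewalld
# ...or keep it running and just open the etcd client/peer ports:
# firewall-cmd --permanent --add-port=2379-2380/tcp && firewall-cmd --reload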
Manage etcd with systemd:
cat >/usr/lib/systemd/system/etcd.service<<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Copy the certificates generated earlier to the locations referenced in the config, and distribute everything to the other nodes:
cd /opt/etcd-ssl
cp ca*pem server*pem /opt/etcd/ssl
yum install -y rsync
rsync -avzP /opt/* k8s-node1:/opt/
rsync -avzP /opt/* k8s-node2:/opt/
rsync -avzP /usr/lib/systemd/system/etcd.service k8s-node1:/usr/lib/systemd/system/
rsync -avzP /usr/lib/systemd/system/etcd.service k8s-node2:/usr/lib/systemd/system/
# Note: after etcd.service has been pushed over, check that the file format is still intact.

ETCD_INITIAL_CLUSTER has to be corrected by hand (here it is re-inserted via sed); the other values can be fixed with the commands below.

Run on the first node:
sed -i '/ETCD_INITIAL_CLUSTER=/d' /opt/etcd/cfg/etcd
sed -i '/ETCD_ADVERTISE_CLIENT_URLS=/a\ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.124:2380,etcd02=https://192.168.0.164:2380,etcd03=https://192.168.0.165:2380"' /opt/etcd/cfg/etcd
sed -i '/URLS/{s/192.168.0.196/192.168.0.124/g}' /opt/etcd/cfg/etcd

Run on the second node:
sed -i '/ETCD_INITIAL_CLUSTER=/d' /opt/etcd/cfg/etcd
sed -i '/ETCD_ADVERTISE_CLIENT_URLS=/a\ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.124:2380,etcd02=https://192.168.0.164:2380,etcd03=https://192.168.0.165:2380"' /opt/etcd/cfg/etcd
sed -i '/URLS/{s/192.168.0.196/192.168.0.164/g}' /opt/etcd/cfg/etcd
sed -i '/NAME/{s/etcd01/etcd02/g}' /opt/etcd/cfg/etcd

Run on the third node:
sed -i '/ETCD_INITIAL_CLUSTER=/d' /opt/etcd/cfg/etcd
sed -i '/ETCD_ADVERTISE_CLIENT_URLS=/a\ETCD_INITIAL_CLUSTER="etcd01=https://192.168.0.124:2380,etcd02=https://192.168.0.164:2380,etcd03=https://192.168.0.165:2380"' /opt/etcd/cfg/etcd
sed -i '/URLS/{s/192.168.0.196/192.168.0.165/g}' /opt/etcd/cfg/etcd
sed -i '/NAME/{s/etcd01/etcd03/g}' /opt/etcd/cfg/etcd

After editing, cat the file on each node to confirm it is correct:
cat /opt/etcd/cfg/etcd

Start the etcd service on every node:
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
systemctl status etcd

Once all nodes are up, check the etcd cluster health:
[root@k8s-master etcd-ssl]# /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --endpoints="https://192.168.0.124:2379,https://192.168.0.164:2379,https://192.168.0.165:2379" cluster-health

Output like the following means the cluster was created successfully. If it fails with network timeouts or similar, check whether the firewall is enabled; turn it off and test again.
member 644c5469087216c8 is healthy: got healthy result from https://192.168.0.124:2379
member 7f51f4cdf2e7f45d is healthy: got healthy result from https://192.168.0.165:2379
member 9e279b14d0a43431 is healthy: got healthy result from https://192.168.0.164:2379
cluster is healthy
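If the combined health check fails, querying each client endpoint separately helps narrow down which member is unreachable; a small sketch reusing the certificate paths above:
for ep in https://192.168.0.124:2379 https://192.168.0.164:2379 https://192.168.0.165:2379; do
  echo "== ${ep} =="
  /opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem \
    --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
    --endpoints="${ep}" cluster-health
done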
5. Install Docker on the Node nodes
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install docker-ce -y
curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
systemctl start docker
systemctl enable docker

6. Deploy the Flannel network. The master does not strictly need it; install it on all node nodes.
Flannel stores its own subnet information in etcd, so it must be able to reach the etcd cluster. Write the predefined subnet into etcd (run this from the directory holding the etcd certificates, e.g. /opt/etcd-ssl, since the certificate paths below are relative):
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.0.124:2379,https://192.168.0.164:2379,https://192.168.0.165:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

Download the binary package:
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
mkdir -pv /opt/kubernetes/bin
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin

Configure Flannel:
mkdir -pv /opt/kubernetes/cfg/
cat>/opt/kubernetes/cfg/flanneld<<EOF
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.0.124:2379,https://192.168.0.164:2379,https://192.168.0.165:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

Manage Flannel with systemd:
cat> /usr/lib/systemd/system/flanneld.service<<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Configure Docker to start on the Flannel-assigned subnet:
cat>/usr/lib/systemd/system/docker.service<<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

# Note: flanneld needs the etcd client certificates, so every node must have them.
# We already pushed /opt with rsync earlier, so the following is not needed here:
# mkdir -pv /opt/etcd/ssl/
# scp /opt/etcd/ssl/* k8s-node2:/opt/etcd/ssl/

Restart flannel and docker:
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl status flanneld
systemctl restart docker

Check that it took effect:
[root@k8s-master etcd-ssl]# ps -ef |grep docker
root       8040      1  0 10:47 ?        00:00:00 /usr/bin/dockerd --bip=172.17.47.1/24 --ip-masq=false --mtu=1450
root       8196   5383  0 10:48 pts/0    00:00:00 grep --color=auto docker
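The subnet flanneld leased on a node can also be read from the environment file generated by mk-docker-opts.sh (the -d path in the flanneld unit above); a quick check:
cat /run/flannel/subnet.env
# With the units above this should contain a DOCKER_NETWORK_OPTIONS line whose --bip
# matches the node's flannel subnet, e.g. --bip=172.17.47.1/24 on the master here.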
Flannel is deployed so that containers on different hosts share one network and can reach each other.
Test: look at the flannel network IPs on the master and the two nodes and check with ping that they are mutually reachable. If they are, Flannel works; if not, check the logs: journalctl -u flanneld

Check the IPs:
[root@k8s-master etcd-ssl]# ssh k8s-master 'hostname -I'
192.168.0.124 172.17.47.1 172.17.47.0
[root@k8s-master etcd-ssl]# ssh k8s-node1 'hostname -I'
192.168.0.164 172.17.100.1 172.17.100.0
[root@k8s-master etcd-ssl]# ssh k8s-node2 'hostname -I'
192.168.0.165 172.17.22.1 172.17.22.0

Ping connectivity test:
[root@k8s-master etcd-ssl]# ping 172.17.47.1
PING 172.17.47.1 (172.17.47.1) 56(84) bytes of data.
[root@k8s-master etcd-ssl]# ping 172.17.100.1
64 bytes from 172.17.100.1: icmp_seq=1 ttl=64 time=0.204 ms
[root@k8s-master etcd-ssl]# ping 172.17.22.1
64 bytes from 172.17.22.1: icmp_seq=2 ttl=64 time=0.131 ms

# The etcd cluster plus Docker and the Flannel network on the Node nodes are now deployed.

Deploy the remaining master components
Before deploying Kubernetes itself, make absolutely sure etcd, flannel and docker are working; fix any problems first.

Generate the certificates. Create the CA:
mkdir -p /opt/kuber-ssl && cd /opt/kuber-ssl
cat >ca-config.json<<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat >ca-csr.json<<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
echo $?

Create the file used to generate the apiserver certificate:
# 10.0.0.1 is the gateway of the Service (virtual) network that DNS uses later; leave it as it is.
cat >server-csr.json<<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.236.128",
    "192.168.236.129",
    "192.168.236.130",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

sed -i 's/192.168.236.128/192.168.0.124/g' server-csr.json
sed -i 's/192.168.236.129/192.168.0.164/g' server-csr.json
sed -i 's/192.168.236.130/192.168.0.165/g' server-csr.json

# Generate the apiserver certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
echo $?

Create the file used to generate the kube-proxy certificate:
cat>kube-proxy-csr.json<<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

Generate the kube-proxy certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
echo $?

The following certificate files should now exist:
ls *pem
ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
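Before moving on it can be worth confirming that every IP the apiserver will be reached on is present in the certificate's SANs; a small sketch using the cfssl-certinfo tool installed in step 4 (openssl shown as an alternative):
# Print the SANs of the freshly generated apiserver certificate; 10.0.0.1, 127.0.0.1
# and all three cluster node IPs should be listed.
cfssl-certinfo -cert server.pem | grep -A 12 '"sans"'
# Or with openssl:
openssl x509 -in server.pem -noout -text | grep -A 1 'Subject Alternative Name'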
7. Deploy the kube-apiserver component
Binary package download page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.11.md
Downloading kubernetes-server-linux-amd64.tar.gz alone is enough; it contains all required components.
#wget https://dl.k8s.io/v1.11.10/kubernetes-server-linux-amd64.tar.gz    # needs a proxy (blocked in mainland China)
mkdir -pv /opt/kubernetes/{bin,cfg,ssl,tools} && cd /opt/kubernetes/tools/
wget http://resource.bestyunyan.club//server/tgz/kubernetes-server-linux-amd64.tar.gz
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin

Copy the certificates from the machine where they were generated to the master(s):
cd /opt/kuber-ssl && cp server.pem server-key.pem ca.pem ca-key.pem /opt/kubernetes/ssl/
# scp server.pem server-key.pem ca.pem ca-key.pem k8s-master1:/opt/kubernetes/ssl/
# scp server.pem server-key.pem ca.pem ca-key.pem k8s-master2:/opt/kubernetes/ssl/

Create the token file (used later for TLS bootstrapping):
cat >/opt/kubernetes/cfg/token.csv<<EOF
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
# Column 1: a random string (generate your own if you like); column 2: user name; column 3: UID; column 4: user group

Create the apiserver config file:
cat> /opt/kubernetes/cfg/kube-apiserver <<EOF
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.0.124:2379,https://192.168.0.164:2379,https://192.168.0.165:2379 \
--bind-address=192.168.0.124 \
--secure-port=6443 \
--advertise-address=192.168.0.124 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

Parameter notes:
* --logtostderr                  enable logging to stderr
* --v                            log level
* --etcd-servers                 etcd cluster addresses
* --bind-address                 listen address
* --secure-port                  https secure port
* --advertise-address            advertised cluster address
* --allow-privileged             allow privileged containers
* --service-cluster-ip-range     Service virtual IP range
* --enable-admission-plugins     admission control plugins
* --authorization-mode           authorization modes; enables RBAC plus Node self-management
* --enable-bootstrap-token-auth  enables the TLS bootstrap feature (used later)
* --token-auth-file              token file
* --service-node-port-range      default port range for NodePort Services

Manage the apiserver with systemd:
cat> /usr/lib/systemd/system/kube-apiserver.service<<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Start the apiserver:
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
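A quick way to confirm the apiserver actually came up before deploying the next components (a sketch; 6443 is the secure port configured above, 8080 is the default local insecure port that the scheduler and controller-manager below connect to):
ss -tlnp | grep -E '6443|8080'
# Both ports should show a LISTEN entry owned by kube-apiserver.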
8. Deploy the kube-scheduler component
Create the scheduler config file:
cat> /opt/kubernetes/cfg/kube-scheduler <<EOF
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
EOF

Parameter notes:
* --master        connect to the local apiserver
* --leader-elect  automatic leader election when several instances of this component run (HA)

Manage the scheduler with systemd:
cat>/usr/lib/systemd/system/kube-scheduler.service<<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Start it:
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler

9. Deploy the kube-controller-manager component
Create the controller-manager config file:
cat >/opt/kubernetes/cfg/kube-controller-manager<<EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
EOF

Manage the controller-manager with systemd:
cat> /usr/lib/systemd/system/kube-controller-manager.service<<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Start it:
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

Check the status of all services (each should print "(running)"):
systemctl status etcd |awk '/Active/{print $3}'
systemctl status kube-apiserver|awk '/Active/{print $3}'
systemctl status kube-scheduler |awk '/Active/{print $3}'
systemctl status kube-controller-manager|awk '/Active/{print $3}'

/opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}

----------------------The following operations are done on the master node:---------------------------
Bind the kubelet-bootstrap user to the system cluster role:
/opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

Create the kubeconfig files:
Run the following commands in the directory where the kubernetes certificates were generated:
cd /opt/kuber-ssl

# Specify the apiserver address (use the internal load-balancer address if there is one)
KUBE_APISERVER="https://192.168.0.124:6443"
BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc

# Set cluster parameters
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
/opt/kubernetes/bin/kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
/opt/kubernetes/bin/kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# Create the kube-proxy kubeconfig file
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
  --certificate-authority=./ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig

/opt/kubernetes/bin/kubectl config set-credentials kube-proxy \
  --client-certificate=./kube-proxy.pem \
  --client-key=./kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

/opt/kubernetes/bin/kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

ls
bootstrap.kubeconfig  kube-proxy.kubeconfig

Copy these two files to /opt/kubernetes/cfg on the Node nodes:
scp *.kubeconfig k8s-node1:/opt/kubernetes/cfg/
scp *.kubeconfig k8s-node2:/opt/kubernetes/cfg/

----------------------The following operations are done on the node nodes:---------------------------
10. Deploy the kubelet component. On the master, rsync the binaries to the node nodes first.
Copy kubelet and kube-proxy from the binary package downloaded earlier into /opt/kubernetes/bin:
rsync -avzP /opt/kubernetes/tools/kubernetes/server/bin/* k8s-node1:/opt/kubernetes/bin/
rsync -avzP /opt/kubernetes/tools/kubernetes/server/bin/* k8s-node2:/opt/kubernetes/bin/

Create the kubelet config file (on k8s-node2, replace 192.168.0.164 with 192.168.0.165 here and in kubelet.config below):
cat >/opt/kubernetes/cfg/kubelet<<EOF
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.164 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

Parameter notes:
* --hostname-override           the name this node shows up as in the cluster
* --bootstrap-kubeconfig        the bootstrap.kubeconfig generated earlier; on first start the kubelet reads it (the user and master address inside) and asks the master to issue its certificate
* --kubeconfig                  location of the kubeconfig that will be generated automatically; once the certificate is issued the kubelet writes its own kubelet.kubeconfig, whose contents are largely the same as bootstrap.kubeconfig
* --cert-dir                    where issued certificates are stored
* --pod-infra-container-image   the image that manages the Pod network (pause)

The /opt/kubernetes/cfg/kubelet.config file referenced above:
cat >/opt/kubernetes/cfg/kubelet.config<<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.0.164
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
  webhook:
    enabled: false
EOF

Manage the kubelet with systemd:
cat> /usr/lib/systemd/system/kubelet.service<<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

Start it:
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

Approve the Node's request to join the cluster on the master:
After starting, the node has not joined the cluster yet; it has to be approved manually.
On the master, list the Nodes requesting certificate signing:
/opt/kubernetes/bin/kubectl get csr
Before the certificate is issued the CSR is in Pending state; after approval it shows Approved,Issued.
/opt/kubernetes/bin/kubectl certificate approve XXXXID    # replace XXXXID with the CSR name from "kubectl get csr"
/opt/kubernetes/bin/kubectl get node

Check:
/opt/kubernetes/bin/kubectl get node
NAME              STATUS    ROLES     AGE       VERSION
192.168.236.129   Ready     <none>    1h        v1.11.6
192.168.236.130   Ready     <none>    2m        v1.11.6

11. Deploy the kube-proxy component
Create the kube-proxy config file (use the node's own IP for --hostname-override; do not change --cluster-cidr, keep 10.0.0.0/24):
cat >/opt/kubernetes/cfg/kube-proxy<<EOF
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.0.164 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

Manage kube-proxy with systemd:
cat >/usr/lib/systemd/system/kube-proxy.service<<EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

Start it:
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
systemctl status kube-proxy

Check the cluster status:
/opt/kubernetes/bin/kubectl get node
NAME              STATUS    ROLES     AGE       VERSION
192.168.236.129   Ready     <none>    2h        v1.11.6
192.168.236.130   Ready     <none>    46m       v1.11.6

/opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}

12. Run a test example
Create an Nginx web deployment to check that the cluster works:
/opt/kubernetes/bin/kubectl run nginx --image=nginx --replicas=3
/opt/kubernetes/bin/kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort

Look at the Pods and the Service:
/opt/kubernetes/bin/kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
nginx-64f497f8fd-fjgt2   1/1       Running   3          28d
nginx-64f497f8fd-gmstq   1/1       Running   3          28d
nginx-64f497f8fd-q6wk9   1/1       Running   3          28d

Show detailed information about a pod:
/opt/kubernetes/bin/kubectl describe pod nginx-64f497f8fd-fjgt2

/opt/kubernetes/bin/kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        27m
nginx        NodePort    10.0.0.9     <none>        88:37817/TCP   39s

The externally exposed port is 37817. Accessing any node's IP plus that port works; k8s load-balances across the pods on the nodes automatically.

Test:
On node1:
docker ps
CONTAINER ID   IMAGE
209301338ab9   nginx
hostname -I
192.168.0.164 172.17.100.1 172.17.100.0
echo "我是nginx-164-node1" >index.html
docker cp ./index.html 20:/usr/share/nginx/html/

On node2:
[root@k8s-node2 etcd-ssl]# hostname -I
192.168.0.165 172.17.22.1 172.17.22.0
[root@k8s-node2 etcd-ssl]# docker ps
CONTAINER ID   IMAGE
7495e46b89ff   nginx
a2484aa751d5   nginx
echo "我是nginx-165-node2-1">index.html
docker cp ./index.html 74:/usr/share/nginx/html/
echo "我是nginx-165-node2-2">index.html
docker cp ./index.html a2:/usr/share/nginx/html/

Result:
[root@k8s-master kuber-ssl]# for i in `seq 10`;do curl 192.168.0.164:37817 >>./test.txt;sleep 1;done 2>/dev/null && cat test.txt
我是nginx-164-node1
我是nginx-165-node2-2
我是nginx-164-node1
我是nginx-164-node1
我是nginx-165-node2-2
我是nginx-164-node1
我是nginx-165-node2-2
我是nginx-165-node2-1
我是nginx-164-node1
我是nginx-165-node2-1
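For reference, the per-pod test pages can also be written from the master with kubectl exec instead of docker cp on each node; a sketch (the pod name is only an example, take real names from kubectl get pods):
# Write the test page inside one of the nginx pods (pod name is illustrative).
/opt/kubernetes/bin/kubectl exec nginx-64f497f8fd-fjgt2 -- /bin/sh -c 'echo "我是nginx-164-node1" > /usr/share/nginx/html/index.html'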
13. Install and deploy the k8s UI Dashboard
The default image lives on Google's registry, so a domestic mirror is used here; it is best to pull the image on every node beforehand:
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3

Download the yaml file on the master (an already-modified version):
mkdir -p /opt/yaml && cd /opt/yaml
wget http://resource.bestyunyan.club//server/yaml/kubernetes-dashboard.yaml
# The places that needed changing (the image source and the NodePort Service) are already adjusted in the full version below.

Full version:
cat > kubernetes-dashboard.yaml << EOF
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Configuration to deploy release version of the Dashboard UI compatible with
# Kubernetes 1.8.
#
# Example usage: kubectl create -f <this_file>

# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard
EOF
# The Service is exposed via NodePort with the port set to 30000.

# Create the RBAC authorization yaml
cat > dashboard-admin.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF

Create the dashboard and the RBAC binding:
kubectl create -f kubernetes-dashboard.yaml
kubectl create -f dashboard-admin.yaml

Check the pods:
[root@k8s-master key]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                   READY     STATUS    RESTARTS   AGE       IP            NODE            NOMINATED NODE
default       nginx-64f497f8fd-6ksl9                 1/1       Running   0          2h        172.17.1.2    192.168.0.164   <none>
default       nginx-64f497f8fd-87scv                 1/1       Running   0          2h        172.17.46.2   192.168.0.165   <none>
default       nginx-64f497f8fd-r2pj6                 1/1       Running   0          2h        172.17.46.3   192.168.0.165   <none>
kube-system   kubernetes-dashboard-b644d546b-ftpb9   1/1       Running   0          19m       172.17.1.3    192.168.0.164   <none>

Once deployed, the dashboard can be reached directly at https://NODE_IP:<configured port>, but logging in to look around requires either a kubeconfig or an access token. An access token is used here.

Generate the token and copy it:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token
# Note: grep for the ServiceAccount that was bound to cluster-admin. With the dashboard-admin.yaml above that
# ServiceAccount is kubernetes-dashboard (so grep kubernetes-dashboard); the sample output below came from a
# setup that used a separately created admin-user ServiceAccount.
Name:         admin-user-token-6gk2h
Type:         kubernetes.io/service-account-token
token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTZnazJoIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI2M2JlYzIzYS03YzY5LTExZTktODc4MS0wMDBjMjk0NjFjYjEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.OFm-xaTL4eiRDGP44PUVVEViSNCeDlswboATLfZ3YUW7VACaHqFcZRnr6t-2Wp_jCgeJ6HldBE52KS43LSFISKlV4YfJ62KPKV-D4l9BLM4uXDal3dFn7Xc9cK7fa1S7zbkWCqVs97Q51YWTtf0tOpPCcfIkcBTrnyswmiyP6EUA9qt9vM4qnrqUuLQSeuEqUAzjrPnAYzWt5z_zjinjDv0S3yXiqnHP0mbjkwQFeA8C_4m6jrWm2jxTPDlms1QPQ5WrP3hyWGHKKyDN_CORGoUwG8CW37QD46WI637TB8iyq5-rbGJRuUC17DJ_F5uGFp0ntDABO_1yCPEX1HuTpQ

In the browser, choose the token login option and paste the token in.
If the browser warns that "your connection is not private" or similar, see https://www.jianshu.com/p/40c0405811ee for how to handle it.
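If the dashboard page does not come up at all, first confirm the Service and Pod are in place (a quick check; the label and namespace come from the yaml above):
kubectl -n kube-system get svc kubernetes-dashboard        # should show TYPE NodePort and 443:30000/TCP
kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard -o wide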