k8s Study Notes (9) --- Installing and Deploying Kubernetes (from Binary Packages)
Installing and Deploying Kubernetes (from Binary Packages)
I. Deployment Overview
This guide installs and deploys Kubernetes from binary packages; the required components are etcd, apiserver, scheduler, controller-manager, kubelet, kube-proxy, and flannel.
- OS: CentOS 7.5
- 1 machine: 172.27.19.143 (acts as both master and node)
- etcd directory layout
/opt
└── etcd
    ├── bin   (etcd executables and commands, e.g. etcd, etcdctl)
    ├── cfg   (etcd configuration files, e.g. etcd.conf)
    ├── ssl   (etcd certificate files, e.g. *.pem)
    ├── logs  (etcd service logs)
    └── data  (etcd related data files)
- kubernetes directory layout
/opt
└── kubernetes
    ├── bin   (kubernetes executables and commands, e.g. kube-apiserver, kubelet, kubectl, etc.)
    ├── cfg   (kubernetes configuration files)
    ├── ssl   (kubernetes certificate files, e.g. *.pem)
    ├── logs  (kubernetes service logs)
    └── data  (kubernetes related data files)
- Preparation
1. Disable the firewall, disable SELinux, and turn off swap (swapoff -a);
2. Download the cfssl tools.
- Deployment workflow
I. Deploy the etcd cluster
├── 1. Generate etcd certificates
├── 1.1 Create the CA config ca-config.json
├── 1.2 Create the CA certificate signing request config ca-csr.json
├── 1.3 Create the etcd certificate signing request config server-csr.json
├── 1.4 Run cfssl to generate the certificates
├── 1.5 Move the generated certificates to /opt/etcd/ssl
├── 2. Download/unpack the etcd binary package
├── 3. Create the etcd configuration file
├── 4. Manage etcd with systemd: create the etcd service file
├── 5. Start etcd and enable it at boot
├── 6. Verify the etcd deployment
II. Install Docker
III. Deploy the Flannel network
├── 1. Flannel stores its own subnet information in etcd, so write the predefined subnet into etcd
├── 2. Download/unpack the flannel binary package
├── 3. Configure Flannel
├── 4. Manage Flannel with systemd: create the flanneld service
├── 5. Configure Docker to start with the Flannel-assigned subnet: recreate docker.service
├── 6. Restart flannel and docker
├── 7. Verify that flannel and docker took effect
IV. Install and deploy Kubernetes (deploy apiserver, scheduler, and controller-manager on the master node)
├── 1. Generate certificates
├── 1.1 Create the CA config ca-config.json
├── 1.2 Create the CA certificate signing request config ca-csr.json
├── 1.3 Create the kubernetes certificate signing request config server-csr.json
├── 1.4 Create the kube-proxy certificate signing request config kube-proxy-csr.json
├── 1.5 Run cfssl to generate the certificates
├── 1.6 Move the generated certificates to /opt/kubernetes/ssl
├── 2. Download/unpack the kubernetes binary package
├── 3. Deploy the apiserver on the master node
├── 3.1 Create the token file token.csv
├── 3.2 Create the apiserver configuration file kube-apiserver.conf
├── 3.3 Manage the apiserver with systemd: create the kube-apiserver service
├── 3.4 Start the kube-apiserver service
├── 3.5 Verify that the kube-apiserver service started successfully
├── 4. Deploy the scheduler on the master node
├── 4.1 Create the scheduler configuration file kube-scheduler.conf
├── 4.2 Manage the scheduler with systemd: create the kube-scheduler service
├── 4.3 Start the kube-scheduler service
├── 4.4 Verify that the kube-scheduler service started successfully
├── 5. Deploy the controller-manager on the master node
├── 5.1 Create the controller-manager configuration file kube-controller-manager.conf
├── 5.2 Manage the controller-manager with systemd: create the kube-controller-manager service
├── 5.3 Start the kube-controller-manager service
├── 5.4 Verify that the kube-controller-manager service started successfully
├── 6. After all components have started, check the cluster component status with kubectl
V. Bind the system cluster role on the master node
├── 1. Bind the kubelet-bootstrap user to the system cluster role
├── 2. Create the kubeconfig file
VI. Add the master node as a node (deploy the kubelet and kube-proxy components)
├── 1. Deploy the kubelet component on the master node
├── 1.1 Create the kubelet configuration file kubelet.conf
├── 1.2 Manage kubelet with systemd: create the kubelet service
├── 1.3 Start the kubelet service
├── 1.4 Verify that the kubelet service started successfully
├── 1.5 Approve the node joining the cluster on the master
├── 1.6 Add a role label to the master node to distinguish it from plain nodes (usually not used for scheduling pods)
├── 2. Deploy kube-proxy on the master node
├── 2.1 Create the kube-proxy kubeconfig file
├── 2.2 Create the kube-proxy configuration file kube-proxy.conf
├── 2.3 Manage kube-proxy with systemd: create the kube-proxy service
├── 2.4 Start the kube-proxy service
├── 2.5 Verify that the kube-proxy service started successfully
Note: adding a new node to the cluster works the same way — deploy kubelet and kube-proxy on that node. To access the cluster from the node, also install kubectl there.
VII. Create an Nginx web deployment to check that the cluster works
II. Installing and Deploying Kubernetes from Binary Packages
- Beforehand: disable the firewall, disable SELinux, and turn off swap
# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux (temporarily)
setenforce 0
# Disable SELinux (permanently)
sed -i 's/enforcing/disabled/' /etc/selinux/config
# Turn off swap
swapoff -a
# Sync the system time
ntpdate time.windows.com
- Beforehand: download the cfssl tools
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
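A quick sanity check (optional) that the tools landed on the PATH before moving on:
# cfssl prints its version; which confirms the other two are installed
cfssl version
which cfssljson cfssl-certinfo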
For more on cfssl and certificates, see:
https://blog.51cto.com/liuzhengwei521/2120535?utm_source=oschina-app
https://www.jianshu.com/p/944f2003c829
https://www.bbsmax.com/A/RnJWLj8R5q/
1. Deploy the etcd cluster
- Create the directories for etcd files and data
mkdir /opt/etcd/{bin,cfg,ssl,data,logs} -p
1.1 Generate etcd certificates
- Create the CA config ca-config.json
# Create the CA (Certificate Authority) config file
cat > /opt/etcd/data/ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "etcd": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
Knowledge points:
expiry: the certificate lifetime; if omitted in a profile, the value under default is used;
ca-config.json: can define multiple profiles, each with its own lifetime, usage scenario, and other parameters; a specific profile is selected later when signing certificates. This example has only one profile, etcd.
signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;
server auth: a client can use this CA to verify certificates presented by servers;
client auth: a server can use this CA to verify certificates presented by clients;
Mind the punctuation: the last field in each JSON object must not be followed by a comma.
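To make the multi-profile point concrete, here is a sketch (not used in this deployment; the file path and profile names are illustrative) of a ca-config.json defining separate server-only and client-only profiles, selected later via cfssl gencert -profile=...:
# Sketch only: two profiles with different usages and lifetimes
cat > /tmp/ca-config-multi.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "server": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth"]
      },
      "client": {
        "expiry": "43800h",
        "usages": ["signing", "key encipherment", "client auth"]
      }
    }
  }
}
EOF
# A server certificate would then be signed with -profile=server, a client one with -profile=client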
- Create the CA certificate signing request config ca-csr.json
# CA certificate signing request file
cat > /opt/etcd/data/ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing"
    }
  ]
}
EOF
Knowledge points:
CN: Common Name; kube-apiserver extracts this field from a certificate as the request's user name (User Name);
key: the algorithm used to generate the certificate;
names: other attributes, where C, ST, and L are the country, state/province, and city; O is the Organization, which kube-apiserver extracts from a certificate as the group (Group) the requesting user belongs to, used for RBAC binding;
- Create the etcd certificate signing request config server-csr.json
# Note: hosts must be changed to the host IPs of your etcd cluster
cat > /opt/etcd/data/server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "172.27.19.143"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
Knowledge point:
hosts: the host names (domains) or IPs allowed to use the certificate issued from this CSR; empty or "" means any host may use it;
- Run cfssl to generate the certificates
# First enter /opt/etcd/data, where the etcd certificate config files live
cd /opt/etcd/data
# Initialize the CA, generating ca-key.pem (private key), ca.pem (certificate), and ca.csr (certificate signing request)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# Sign the server certificate with the CA just created
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=etcd server-csr.json | cfssljson -bare server
# List the generated certificates (ca-key.pem, ca.pem, server-key.pem, server.pem)
ls *.pem
Knowledge point: cfssljson only parses cfssl's JSON output into files; the main role of -bare is to set the prefix used to name the generated certificate files.
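To double-check what was issued (optional), cfssl-certinfo dumps a certificate's fields, including its validity period and the hosts (SANs) baked into it:
# Inspect the CA and server certificates
cfssl-certinfo -cert ca.pem
cfssl-certinfo -cert server.pem
# openssl shows the same information if you prefer it
openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'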
- Move the generated certificates to /opt/etcd/ssl
mv ca*pem server*pem /opt/etcd/ssl
1.2 Download/unpack the etcd binary package
# Return to the home directory; download the package there
cd ~
# Download the etcd binary package
wget https://github.com/etcd-io/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz
# Unpack the etcd binary package
tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
# Move etcd and etcdctl to /opt/etcd/bin
mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
1.3 Create the etcd configuration file
Note: the ETCD_DATA_DIR path /var/lib/etcd/default.etcd needs to be created in advance, otherwise etcd may fail to start.
cat > /opt/etcd/cfg/etcd.conf <<EOF
#[Member]
ETCD_NAME="etcd"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.27.19.143:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.27.19.143:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.27.19.143:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.27.19.143:2379"
ETCD_INITIAL_CLUSTER="etcd=https://172.27.19.143:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
* ETCD_NAME: node name
* ETCD_DATA_DIR: data directory
* ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) communication
* ETCD_LISTEN_CLIENT_URLS: listen address for client access
* ETCD_INITIAL_ADVERTISE_PEER_URLS: peer address advertised to the cluster
* ETCD_ADVERTISE_CLIENT_URLS: client address advertised to the cluster
* ETCD_INITIAL_CLUSTER: addresses of the cluster members
* ETCD_INITIAL_CLUSTER_TOKEN: cluster token
* ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; "new" for a new cluster, "existing" to join an existing one
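This note deploys a single member, so ETCD_INITIAL_CLUSTER lists one node. For a real multi-member cluster, each node gets its own etcd.conf with a unique ETCD_NAME, its own IPs in the URL fields, and the full member list in ETCD_INITIAL_CLUSTER. A sketch for a hypothetical 3-node setup (the .144/.145 addresses are made up for illustration):
# On etcd01 (172.27.19.143); etcd02/etcd03 differ only in ETCD_NAME and their own listen/advertise IPs
ETCD_NAME="etcd01"
ETCD_INITIAL_CLUSTER="etcd01=https://172.27.19.143:2380,etcd02=https://172.27.19.144:2380,etcd03=https://172.27.19.145:2380"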
1.4 Manage etcd with systemd: create the etcd service file
cat > /usr/lib/systemd/system/etcd.service <<EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF
1.5 Start the etcd service and enable it at boot
systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
1.6 Verify the etcd deployment
/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://172.27.19.143:2379" \
cluster-health
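Besides cluster-health, a simple read/write round trip over TLS confirms the certificates work end to end (etcdctl in etcd v3.2 speaks the v2 API by default, hence set/get; /testkey is just a scratch value):
/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://172.27.19.143:2379" \
set /testkey "hello"
/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://172.27.19.143:2379" \
get /testkey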
2. Install Docker
- Method 1: install docker directly from the default yum repos
yum install docker -y
systemctl start docker
systemctl enable docker
- Method 2: configure a yum repo, then install docker
# Configure a mirror repo (pick one of the two)
# Aliyun mirror
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Official Docker repo
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# Install docker
# List all installable docker-ce versions
yum list docker-ce --showduplicates | sort -r
# Install a specific docker version
yum install docker-ce-18.06.1.ce-3.el7 -y
# Start docker and enable it at boot
systemctl daemon-reload
systemctl enable docker
systemctl start docker
# Check that the docker service started successfully
systemctl status docker
- Method 3: install from local rpm packages
# Download the rpm packages from the page below (for 17.x versions, also download docker-ce-selinux)
wget https://download.docker.com/linux/centos/7/x86_64/stable/Packages/
# Create the docker data-root directory and a daemon config pointing at an Aliyun registry mirror
mkdir -p /data/docker-root
mkdir -p /etc/docker
touch /etc/docker/daemon.json
chmod 700 /etc/docker/daemon.json
cat > /etc/docker/daemon.json << EOF
{
"graph":"/data/docker-root",
"registry-mirrors": ["https://7bezldxe.mirror.aliyuncs.com"]
}
EOF
# Install docker
yum localinstall ./docker* -y
# Start docker and enable it at boot
systemctl enable docker
systemctl start docker
systemctl status docker
3. Deploy the Flannel network
3.1 Write the predefined subnet into etcd
# Flannel stores its own subnet information in etcd, so make sure etcd is reachable, then write the predefined subnet:
/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://172.27.19.143:2379" \
set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
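To confirm the write landed (optional), read the key back:
/opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://172.27.19.143:2379" \
get /coreos.com/network/config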
3.2 Download/unpack the flannel binary package
# Return to the home directory; download the package there
cd ~
# Download the flannel binary package
wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
# Unpack the flannel binary package (flanneld and mk-docker-opts.sh land in the current directory)
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
# Move flanneld and mk-docker-opts.sh to /opt/kubernetes/bin (create the directory first, since section 4 has not run yet)
mkdir -p /opt/kubernetes/bin
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin
3.3 Configure Flannel
cat > /opt/kubernetes/cfg/flanneld.conf <<EOF
FLANNEL_OPTIONS="-etcd-endpoints=https://172.27.19.143:2379 \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
3.4 Manage Flannel with systemd: create the flanneld service
cat > /usr/lib/systemd/system/flanneld.service <<EOF
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld.conf
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
3.5 Configure Docker to start with the Flannel-assigned subnet: recreate docker.service
cat > /usr/lib/systemd/system/docker.service <<EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd \$DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
EOF
3.6 Restart flannel and docker
# Start flannel
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
# Check whether flannel started successfully
systemctl status flanneld
# Restart docker
systemctl restart docker
systemctl status docker
3.7 Verify that flannel and docker took effect
# Make sure docker0 and flannel.1 are on the same subnet (the --bip option should appear in the dockerd command line)
ps -ef | grep docker
# Test cross-node connectivity: from this node, ping the docker0 IP of another node
ping <docker0 IP of another node>
# If the ping succeeds, Flannel is deployed successfully. If not, check the logs: journalctl -u flanneld
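Two more optional checks: mk-docker-opts.sh should have written /run/flannel/subnet.env (the file docker.service reads DOCKER_NETWORK_OPTIONS from), and ip addr shows whether docker0 sits inside the flannel subnet:
# The options handed to Docker
cat /run/flannel/subnet.env
# Compare the subnets of the two interfaces
ip -4 addr show flannel.1
ip -4 addr show docker0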
4. Install and deploy Kubernetes
- Create the directories for kubernetes files and data
mkdir /opt/kubernetes/{bin,cfg,ssl,data,logs} -p
4.1 Generate certificates
- Create the CA config ca-config.json
# Create the CA (Certificate Authority) config file
cat > /opt/kubernetes/data/ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
- Create the CA certificate signing request config ca-csr.json
# CA certificate signing request file
cat > /opt/kubernetes/data/ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
- Create the kubernetes certificate signing request config server-csr.json
# Generate the kubernetes certificate request.
# Notes on hosts (JSON allows no comments, so keep the list itself clean):
#   10.0.0.1      - gateway of the virtual service network used by DNS later; leave as-is
#   127.0.0.1     - local localhost; leave as-is
#   172.27.19.143 - change this to match the IP of the machine being added
cat > /opt/kubernetes/data/server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "172.27.19.143",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
Knowledge point:
hosts: the host names (domains) or IPs allowed to use the certificate issued from this CSR; empty or "" means any host may use it;
- Create the kube-proxy certificate signing request config kube-proxy-csr.json
# Generate the kube-proxy certificate request
cat > /opt/kubernetes/data/kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
- Run cfssl to generate the certificates
# First enter /opt/kubernetes/data, where the kubernetes certificate config files live
cd /opt/kubernetes/data
# Initialize the CA, generating ca-key.pem (private key), ca.pem (certificate), and ca.csr (certificate signing request)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# Sign the kubernetes server certificate with the CA
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
# Sign the kube-proxy client certificate with the CA
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
# List the generated certificates
ls *pem
- Move the generated certificates to /opt/kubernetes/ssl
mv *.pem /opt/kubernetes/ssl
4.2 Download/unpack the kubernetes binary package
# Note: the package may not download directly from Linux; download it on Windows first and upload it to Linux
# wget https://dl.k8s.io/v1.11.10/kubernetes-server-linux-amd64.tar.gz
# Download page: https://github.com/kubernetes/kubernetes (contains the required k8s components: kube-apiserver, kubelet, kube-scheduler, kube-controller-manager, and so on)
# 1. On Windows, open https://github.com/kubernetes/kubernetes and open the CHANGELOG file for your version (e.g. CHANGELOG-1.16.md for 1.16) to find the download links;
# 2. Under Server Binaries, download kubernetes-server-linux-amd64.tar.gz;
# 3. After downloading on Windows, upload it to Linux with the lrzsz tool (run rz);
# 4. Unpacking creates a kubernetes directory; copy the executables under kubernetes/server/bin/ to /opt/kubernetes/bin.
# Return to the home directory, where the package was uploaded
cd ~
# Unpack the kubernetes binary package
tar zxvf kubernetes-server-linux-amd64.tar.gz
# Copy the executables under kubernetes/server/bin/ to /opt/kubernetes/bin
cp kubernetes/server/bin/{kube-apiserver,kube-scheduler,kube-controller-manager,kubectl,kube-proxy,kubelet} /opt/kubernetes/bin
4.3 Deploy the apiserver on the master node
- Create the token file token.csv
# Columns: 1) a random string (you can generate your own); 2) user name; 3) UID; 4) user group
cat > /opt/kubernetes/cfg/token.csv <<EOF
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
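If you would rather not reuse the token above, a fresh random token can be generated like this (any 32-character hex string works; remember to use the same value later as BOOTSTRAP_TOKEN in bootstrap.kubeconfig):
# Generate a random token for token.csv
head -c 16 /dev/urandom | od -An -t x | tr -d ' '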
- Create the apiserver configuration file kube-apiserver.conf
cat > /opt/kubernetes/cfg/kube-apiserver.conf <<EOF
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://172.27.19.143:2379 \
--bind-address=172.27.19.143 \
--secure-port=6443 \
--advertise-address=172.27.19.143 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
- Manage the apiserver with systemd: create the kube-apiserver service
cat > /usr/lib/systemd/system/kube-apiserver.service <<EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
- Start the kube-apiserver service
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
- Verify that the kube-apiserver service started successfully
systemctl status kube-apiserver
4.4 Deploy the scheduler on the master node
- Create the scheduler configuration file kube-scheduler.conf
cat > /opt/kubernetes/cfg/kube-scheduler.conf <<EOF
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
EOF
- Manage the scheduler with systemd: create the kube-scheduler service
cat > /usr/lib/systemd/system/kube-scheduler.service <<EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
- Start the kube-scheduler service
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
- Verify that the kube-scheduler service started successfully
systemctl status kube-scheduler
4.5 Deploy the controller-manager on the master node
- Create the controller-manager configuration file kube-controller-manager.conf
cat > /opt/kubernetes/cfg/kube-controller-manager.conf <<EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
EOF
- Manage the controller-manager with systemd: create the kube-controller-manager service
cat > /usr/lib/systemd/system/kube-controller-manager.service <<EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
- Start the kube-controller-manager service
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
- Verify that the kube-controller-manager service started successfully
systemctl status kube-controller-manager
4.6 Check the current cluster component status
# After symlinking kubectl into /usr/bin/, the kubectl command can be used directly
ln -s /opt/kubernetes/bin/kubectl /usr/bin/
# Check the status of the scheduler, etcd, and controller-manager components
kubectl get cs -o yaml
5. Bind the system cluster role on the master node
# First enter the /opt/kubernetes/ssl directory
cd /opt/kubernetes/ssl
5.1 Bind the kubelet-bootstrap user to the system cluster role
/opt/kubernetes/bin/kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
5.2 Create the kubeconfig file
# Set the apiserver address (if the apiserver sits behind a load balancer, use the load balancer address)
KUBE_APISERVER="https://172.27.19.143:6443"
BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc
# Set the cluster parameters
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set the client authentication parameters
/opt/kubernetes/bin/kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# Set the context parameters
/opt/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Switch to the default context
/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
# Move bootstrap.kubeconfig to /opt/kubernetes/cfg
mv *.kubeconfig /opt/kubernetes/cfg
ls /opt/kubernetes/cfg/*.kubeconfig
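To review what was assembled (optional), kubectl can print the kubeconfig back; the embedded certificate data is omitted from the output:
kubectl config view --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig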
6. Add the master node as a node
A node can join the cluster with just the kubelet component configured, but since nodes exist mainly to run scheduled Pods, the kube-proxy component is also needed whenever their services must be reachable through a ClusterIP or NodePort.
6.1 Deploy the kubelet component on the master node
- Create the kubelet configuration file kubelet.conf
cat > /opt/kubernetes/cfg/kubelet.conf <<EOF
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=172.27.19.143 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.yaml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
# The /opt/kubernetes/cfg/kubelet.yaml config file referenced above:
cat > /opt/kubernetes/cfg/kubelet.yaml <<EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 172.27.19.143
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
  webhook:
    enabled: false
EOF
- Manage kubelet with systemd: create the kubelet service
cat > /usr/lib/systemd/system/kubelet.service <<EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
EOF
- Start the kubelet service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
- Verify that the kubelet service started successfully
systemctl status kubelet
- Approve the node joining the cluster on the master
# After kubelet starts, the node has not yet joined the cluster; it must be approved manually.
# On the master node, list the pending certificate signing requests from nodes:
/opt/kubernetes/bin/kubectl get csr
# Approve a request (replace XXXXID with the name shown by get csr):
/opt/kubernetes/bin/kubectl certificate approve XXXXID
# Confirm the node has joined:
/opt/kubernetes/bin/kubectl get node
- Add a role label to the master node to distinguish it from plain nodes (usually not used for scheduling pods)
# Modify the node's role label
kubectl label node 172.27.19.143 node-role.kubernetes.io/master=172.27.19.143
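The role should now show up (optional check):
# The ROLES column should now read master for 172.27.19.143
kubectl get node
kubectl get node --show-labels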
6.2 Deploy kube-proxy on the master node
- Create the kube-proxy kubeconfig file
# Run the following in the /opt/kubernetes/ssl directory (KUBE_APISERVER is the variable set in section 5.2)
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
/opt/kubernetes/bin/kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
/opt/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
/opt/kubernetes/bin/kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
mv *.kubeconfig /opt/kubernetes/cfg
ls /opt/kubernetes/cfg/*.kubeconfig
- Create the kube-proxy configuration file kube-proxy.conf
cat > /opt/kubernetes/cfg/kube-proxy.conf <<EOF
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=172.27.19.143 \
--cluster-cidr=10.0.0.0/24 \
--config=/opt/kubernetes/cfg/kube-proxy-config.yaml"
EOF
# The kube-proxy-config.yaml file referenced above (the file name must match the --config path):
cat > /opt/kubernetes/cfg/kube-proxy-config.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 172.27.19.143
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
clusterCIDR: 10.0.0.0/24
healthzBindAddress: 172.27.19.143:10256
hostnameOverride: 172.27.19.143
kind: KubeProxyConfiguration
metricsBindAddress: 172.27.19.143:10249
mode: "ipvs"
EOF
- Manage kube-proxy with systemd: create the kube-proxy service
cat > /usr/lib/systemd/system/kube-proxy.service <<EOF
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
- Start the kube-proxy service
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
- Verify that the kube-proxy service started successfully
systemctl status kube-proxy
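Since the configuration above sets mode: "ipvs", the rules kube-proxy programs can be inspected with ipvsadm (this assumes the ipvsadm/ipset packages are installed and the ip_vs kernel modules are available; without them kube-proxy falls back to iptables mode):
# Install the ipvs userspace tools if they are missing
yum install ipvsadm ipset -y
# List the virtual servers and backends programmed by kube-proxy
ipvsadm -Ln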
7. Create an Nginx web deployment to check that the cluster works
- Create an Nginx web deployment
# Run an nginx deployment
/opt/kubernetes/bin/kubectl run nginx --image=docker.io/nginx:latest --replicas=1 --image-pull-policy=IfNotPresent
# Expose the nginx deployment as a service
/opt/kubernetes/bin/kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
- Check the Pod and Service
[root@VM_19_143_centos ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-7c5cf9bcfc-t992w   1/1     Running   0          21m
[root@VM_19_143_centos ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        3d16h
nginx        NodePort    10.0.0.156   <none>        88:35419/TCP   8m4s
- Access the nginx service
[root@VM_19_143_centos ~]# curl 172.27.19.143:35419
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Note: if, on the pod's host 172.27.19.143, curl 127.0.0.1:35419 or curl localhost:35419 fails while the node IP works, some configuration is probably missing (NodePort access via loopback is generally not supported when kube-proxy runs in ipvs mode).
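If the service does not respond at all, these optional checks narrow down where it breaks (run=nginx is the label kubectl run attaches to the pods it creates in this version):
# Does the service have healthy endpoints behind it?
kubectl get endpoints nginx
# Inspect the service definition and recent events
kubectl describe svc nginx
# Check the pod's own logs
kubectl logs -l run=nginx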