Summary:

I set this up in a test environment: built a k8s cluster with RBAC support, then added monitoring with prometheus + grafana + node-exporter.

Base environment:

Role                 IP            Components
remote kubectl, nfs  10.10.88.18   kubectl, nfs
master               10.10.88.17   kube-apiserver, kube-controller-manager, kube-scheduler, etcd
node                 10.10.88.19   kubelet, kube-proxy, docker, flannel
node                 10.10.88.20   kubelet, kube-proxy, docker, flannel
node                 10.10.88.21   kubelet, kube-proxy, docker, flannel
node                 10.10.88.22   kubelet, kube-proxy, docker, flannel

Base components:

Service        Version
Linux          CentOS 7.5, kernel 3.10.0-862.6.3.el7.x86_64
Kubernetes     1.9.3
Docker         1.12.6
Etcd           3.2.12

Basic settings:

Disable SELinux and disable swap.
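A minimal sketch of those two steps (the sed patterns assume a stock CentOS 7 /etc/selinux/config and /etc/fstab):

setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # persist across reboots
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab                                   # keep swap disabled after reboot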

I. Cluster Planning

1.1. Server planning

There are six machines in total: one master, four nodes, and one machine set aside as a remote kubectl (and NFS) host.

1.2. Cluster architecture

The master runs etcd, kube-scheduler, kube-controller-manager and kube-apiserver.

The nodes run docker, flanneld, kubelet and kube-proxy. flanneld on each node watches etcd by connecting to port 2379; kubelet reaches kube-apiserver on port 6443 through its kubeconfig file.

The master and each node have a 50 GB disk, used to create the direct-lvm pool (docker uses the devicemapper storage driver) and to mount the /opt directory (the Kubernetes home directory).
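A minimal sketch of preparing that disk (the device names are assumptions: /dev/sda3 set aside for /opt, /dev/sda4 left unformatted for the direct-lvm pool created later):

mkfs.xfs /dev/sda3
mkdir -p /opt
echo "/dev/sda3  /opt  xfs  defaults  0 0" >> /etc/fstab
mount -a
lsblk    # verify the layout before continuing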

 

The in-cluster network plan is as follows:

pod IP range: 172.20.0.0/16

cluster-ip (service) range: 172.24.0.0/16

flanneld runs in vxlan mode with DirectRouting enabled

 

1.3. Software versions

Component     Version     Component     Version
apiserver     1.9.3       docker        1.12.6
etcd          3.2.12      flanneld      0.9.1

1.4. Directory layout

Directory path              Purpose
/opt/kubernetes             Kubernetes main configuration directory
/opt/kubernetes/bin         kubernetes and etcd binaries
/opt/kubernetes/etcd        etcd data directory
/opt/kubernetes/cfg         kubernetes and etcd configuration files
/opt/kubernetes/ssl         kubernetes and etcd certificates
/opt/kubernetes/ssl/config  JSON files used to generate the certificates
/opt/kubernetes/image       image files for the master node components

II. Cluster Base Setup

1. Create the required directories

1.1. Master node

mkdir -p /opt/kubernetes/{cfg,etcd,ssl,bin,image}

mkdir /opt/kubernetes/ssl/config

echo "PATH=$PATH:/opt/kubernetes/bin" >> /etc/profile

source /etc/profile

1.2. Nodes

mkdir -p /opt/kubernetes/{cfg,ssl,bin}

2. Create the direct-lvm pool

Because docker currently uses the devicemapper storage driver, a direct-lvm volume for docker has to be created on the master and on every node.

 

data_dev="/dev/sda4"

pvcreate -y ${data_dev}

vgcreate docker ${data_dev}

lvcreate --wipesignatures y -n thinpool docker -l 95%VG

lvcreate --wipesignatures y -n thinpoolmeta docker -l 1%VG

lvconvert -y --zero n -c 512K --thinpool docker/thinpool --poolmetadata docker/thinpoolmeta

 

cat > /etc/lvm/profile/docker-thinpool.profile << EOF
activation {
  thin_pool_autoextend_threshold=80
  thin_pool_autoextend_percent=20
}
EOF

 

lvchange --metadataprofile docker-thinpool docker/thinpool

lvs -o+seg_monitor

 

3. Install docker

chattr -i /etc/group

yum install -y yum-utils device-mapper-persistent-data lvm2

 

vim /etc/yum.repos.d/docker.repo

[dockerrepo]

name=Docker Repository

baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/

enabled=1

gpgcheck=0

gpgkey=https://yum.dockerproject.org/gpg

 

yum makecache fast

yum install -y docker-engine-1.12.6-1.el7.centos.x86_64

chattr +i /etc/group

mkdir /etc/docker

 

vim /etc/docker/daemon.json

{

      "log-driver": "json-file",

      "log-opts": {

              "max-size": "100m",

              "max-file": "5"

      },

 

      "default-ulimit": ["nofile=102400:102400"],

      "ipv6": false,

      "debug": true,

      "log-level": "debug",

 

      "storage-driver": "devicemapper",

      "storage-opts": [

        "dm.thinpooldev=/dev/mapper/docker-thinpool",

        "dm.use_deferred_removal=true",

        "dm.use_deferred_deletion=true"

      ],

 

      "selinux-enabled": false,

 

      "registry-mirrors": ["registry.xxxxxx.cn"]     #这个可以写自己已经搭建好的镜像仓库,或者不填

}

 

echo "net.bridge.bridge-nf-call-ip6tables = 1" >> /etc/sysctl.conf

echo "net.bridge.bridge-nf-call-iptables = 1" >> /etc/sysctl.conf

sysctl -p

systemctl start docker

systemctl enable docker
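Once docker is running, it is worth confirming that it really picked up the devicemapper thin pool (a quick check, not part of the original steps):

docker info | grep -i -A 5 'storage driver'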

 

4. Install iptables and set up rules

iptables rules need to be configured on the master and on every node so that the services can talk to each other.

yum remove -y firewalld

yum install -y iptables iptables-services

 

vim /etc/sysconfig/iptables

*filter

:INPUT ACCEPT [0:0]

:FORWARD ACCEPT [0:0]

:OUTPUT ACCEPT [0:0]

:RH-Firewall-1-INPUT - [0:0]

-A INPUT -j RH-Firewall-1-INPUT

-A FORWARD -j RH-Firewall-1-INPUT

-A RH-Firewall-1-INPUT -i lo -j ACCEPT

-A RH-Firewall-1-INPUT -p icmp --icmp-type any -j ACCEPT

-A RH-Firewall-1-INPUT -s 10.10.0.0/16  -j ACCEPT

-A RH-Firewall-1-INPUT -s 172.24.0.0/16  -j ACCEPT

-A RH-Firewall-1-INPUT -s 172.20.0.0/16  -j ACCEPT

-A RH-Firewall-1-INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT

-A RH-Firewall-1-INPUT -j REJECT --reject-with icmp-host-prohibited

COMMIT

 

systemctl start iptables

systemctl enable iptables

 

III. Install Certificates

1. Install the certificate tools

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -P /usr/local/src

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -P /usr/local/src

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -P /usr/local/src

chmod +x /usr/local/src/cfssl_linux-amd64 /usr/local/src/cfssljson_linux-amd64

mv /usr/local/src/cfssl_linux-amd64 /usr/local/bin/cfssl

mv /usr/local/src/cfssljson_linux-amd64 /usr/local/bin/cfssljson

mv /usr/local/src/cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

 

2.1. Generate the CA certificate

 

cat > /opt/kubernetes/ssl/config/ca-config.json <<EOF

{

  "signing": {

    "default": {

      "expiry": "87600h"

    },

    "profiles": {

      "kubernetes": {

         "expiry": "87600h",

         "usages": [

            "signing",

            "key encipherment",

            "server auth",

            "client auth"

        ]

      }

    }

  }

}

EOF

 

cat > /opt/kubernetes/ssl/config/ca-csr.json <<EOF

{

    "CN": "kubernetes",

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "names": [

        {

            "C": "CN",

            "L": "Shenzhen",

            "ST": "Guangzhou",

            "O": "公司名称",

            "OU": "System"

        }

    ]

}

EOF

 

cfssl gencert -initca /opt/kubernetes/ssl/config/ca-csr.json | cfssljson -bare ca -
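cfssl writes ca.pem, ca-key.pem and ca.csr into the current working directory, while the later commands look for the CA under /opt/kubernetes/ssl, so move it there (assuming the command above was run from /opt/kubernetes/ssl/config):

mv ca.pem ca-key.pem /opt/kubernetes/ssl/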

 

2.2. Generate the server certificate

 

Replace ${host_ip} with the IP of the server that will use the certificate and ${hostname} with that server's hostname. The generated .pem files land in the current working directory, so run the commands from /opt/kubernetes/ssl/config (or adjust the mv paths below). Note that later configuration files (kube-apiserver, flanneld) refer to this certificate as server.pem / server-key.pem, so copy or rename the files accordingly where needed.

cat > /opt/kubernetes/ssl/config/server-csr.json <<EOF

{

    "CN": "kubernetes",

    "hosts": [

      "127.0.0.1",

      "${host_ip}",

      "172.24.0.1",

      "172.18.1.252",

      "kubernetes",

      "kubernetes.default",

      "kubernetes.default.svc",

      "kubernetes.default.svc.cluster",

      "kubernetes.default.svc.cluster.local"

    ],

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "names": [

        {

            "C": "CN",

            "L": "Shenzhen",

            "ST": "Guangzhou",

            "O": "公司名称",

            "OU": "System"

        }

    ]

}

EOF

cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/config/ca-config.json -profile=kubernetes /opt/kubernetes/ssl/config/server-csr.json | cfssljson -bare ${hostname}

 

mv /opt/kubernetes/ssl/config/${hostname}-key.pem /opt/kubernetes/ssl

mv /opt/kubernetes/ssl/config/${hostname}.pem /opt/kubernetes/ssl

 

2.3. Generate the admin certificate and key

 

cat > /opt/kubernetes/ssl/config/admin-csr.json <<EOF

{

  "CN": "admin",

  "hosts": [],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "L": "Shenzhen",

      "ST": "Guangzhou",

      "O": "公司名称",

      "OU": "System"

    }

  ]

}

EOF

 

cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/config/ca-config.json -profile=kubernetes /opt/kubernetes/ssl/config/admin-csr.json | cfssljson -bare admin

 

mv /opt/kubernetes/ssl/config/{admin-key.pem,admin.pem} /opt/kubernetes/ssl/

 

2.4. Generate the kube-proxy certificate and key

 

cat > /opt/kubernetes/ssl/config/kube-proxy-csr.json <<EOF

{

  "CN": "system:kube-proxy",

  "hosts": [],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "L": "Shenzhen",

      "ST": "Guangzhou",

      "O": "公司名称",

      "OU": "System"

    }

  ]

}

EOF

 

cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem -ca-key=/opt/kubernetes/ssl/ca-key.pem -config=/opt/kubernetes/ssl/config/ca-config.json -profile=kubernetes /opt/kubernetes/ssl/config/kube-proxy-csr.json | cfssljson -bare kube-proxy

 

mv /opt/kubernetes/ssl/config/{kube-proxy.pem,kube-proxy-key.pem} /opt/kubernetes/ssl/

 

IV. Deploy the Master

1. Install etcd (adjust the master IP as appropriate)

wget http://xxx:xxx@xxx.sandai.net/tmp/k8s/1.9.3/etcd/etcd-v3.2.12-linux-amd64.tar.gz -P /usr/local/src

tar -zxvf /usr/local/src/etcd-v3.2.12-linux-amd64.tar.gz

mv /usr/local/src/etcd-v3.2.12-linux-amd64/etcd /opt/kubernetes/bin/

mv /usr/local/src/etcd-v3.2.12-linux-amd64/etcdctl /opt/kubernetes/bin/

 

vim /opt/kubernetes/cfg/etcd

#[Member]

ETCD_NAME=test151vm17

ETCD_DATA_DIR="/opt/kubernetes/etcd/data"

ETCD_LISTEN_PEER_URLS="https://10.10.88.17:2380"

ETCD_LISTEN_CLIENT_URLS="https://10.10.88.17:2379"

 

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.10.88.17:2380"

ETCD_ADVERTISE_CLIENT_URLS="https://10.10.88.17:2379"

ETCD_INITIAL_CLUSTER="test151vm17=https://10.10.88.17:2380"

ETCD_INITIAL_CLUSTER_TOKEN="test151-etcd"

ETCD_INITIAL_CLUSTER_STATE="new"

 

vim /usr/lib/systemd/system/etcd.service

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

 

[Service]

Type=notify

EnvironmentFile=-/opt/kubernetes/cfg/etcd

ExecStart=/opt/kubernetes/bin/etcd \

--name=${ETCD_NAME} \

--data-dir=${ETCD_DATA_DIR} \

--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \

--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \

--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \

--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \

--initial-cluster=${ETCD_INITIAL_CLUSTER} \

--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \

--initial-cluster-state=new \

--cert-file=/opt/kubernetes/ssl/test151vm17.pem \

--key-file=/opt/kubernetes/ssl/test151vm17-key.pem \

--peer-cert-file=/opt/kubernetes/ssl/test151vm17.pem \

--peer-key-file=/opt/kubernetes/ssl/test151vm17-key.pem \

--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \

--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem

Restart=on-failure

LimitNOFILE=65536

 

[Install]

WantedBy=multi-user.target

 

systemctl daemon-reload

systemctl start etcd.service

systemctl enable etcd.service
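Before continuing, it is worth checking that etcd is healthy. A minimal check, reusing the same certificate paths as the unit file above:

/opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/test151vm17.pem --key-file=/opt/kubernetes/ssl/test151vm17-key.pem --endpoints="https://10.10.88.17:2379" cluster-health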

 

2. Install kubectl

wget http://xxx:xxxxi@xxxsandai.net/tmp/k8s/1.9.3/kubectl/kubectl -P /opt/kubernetes/bin/

chmod +x /opt/kubernetes/bin/kubectl
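On the master itself kubectl can talk to the insecure port (127.0.0.1:8080, configured below) without any extra setup. For the remote kubectl host (10.10.88.18), a kubeconfig can be built from the admin certificate generated in section 2.3; a minimal sketch, assuming ca.pem, admin.pem and admin-key.pem have been copied to that host under /opt/kubernetes/ssl:

kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=https://10.10.88.17:6443 --kubeconfig=/root/.kube/config
kubectl config set-credentials admin --client-certificate=/opt/kubernetes/ssl/admin.pem --client-key=/opt/kubernetes/ssl/admin-key.pem --embed-certs=true --kubeconfig=/root/.kube/config
kubectl config set-context default --cluster=kubernetes --user=admin --kubeconfig=/root/.kube/config
kubectl config use-context default --kubeconfig=/root/.kube/config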

 

 

3. Create the kubeconfig files

Replace ${apiserver_vip} with the LVS VIP in front of kube-apiserver; in this single-master setup the master IP (10.10.88.17) can be used directly.

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

cat > /opt/kubernetes/ssl/config/token.csv <<EOF

${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"

EOF

export KUBE_APISERVER="https://${apiserver_vip}:6443"

mv /opt/kubernetes/ssl/config/token.csv /opt/kubernetes/ssl

 

kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

mv bootstrap.kubeconfig /opt/kubernetes/ssl/config

 

kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem --embed-certs=true --server=${KUBE_APISERVER} --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

mv kube-proxy.kubeconfig /opt/kubernetes/ssl/config

 

4. Download the binary package and deploy the components

wget https://dl.k8s.io/v1.9.3/kubernetes-server-linux-amd64.tar.gz

After extracting the package, pick out kube-apiserver, kube-controller-manager, kubectl and kube-scheduler and place them on the master node.

mv kube-apiserver kube-controller-manager kube-scheduler /opt/kubernetes/bin/
chmod +x /opt/kubernetes/bin/{kube-apiserver,kube-controller-manager,kube-scheduler}

5. Configure kube-apiserver

5.1. Set the master and etcd IP addresses (multiple etcd endpoints can be separated with commas)

MASTER_ADDRESS="10.10.88.17"

ETCD_SERVERS="https://10.10.88.17:2379"

5.2. Generate the kube-apiserver configuration file

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--insecure-bind-address=127.0.0.1 \\
--bind-address=${MASTER_ADDRESS} \\
--insecure-port=8080 \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=172.24.0.0/16 \\
--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/server.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

5.3. Generate the kube-apiserver systemd unit

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

5.4. Copy the token file into the cfg directory under the Kubernetes install directory

cp /opt/kubernetes/ssl/token.csv /opt/kubernetes/cfg/

5.5. Start the kube-apiserver service

systemctl daemon-reload

systemctl start kube-apiserver.service

systemctl status kube-apiserver.service

systemctl enable kube-apiserver.service

6. Configure kube-controller-manager

6.1. Create the kube-controller-manager configuration file

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect=true \\
--address=127.0.0.1 \\
--service-cluster-ip-range=172.24.0.0/16 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem"
EOF

 

6.2. Create the kube-controller-manager systemd unit

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

 

systemctl daemon-reload

systemctl start kube-controller-manager.service

systemctl status kube-controller-manager.service

systemctl enable kube-controller-manager.service

7. Configure kube-scheduler

7.1. Create the kube-scheduler configuration file

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=127.0.0.1:8080 \\
--leader-elect"
EOF

7.2. Create the kube-scheduler systemd unit

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

 

systemctl daemon-reload

systemctl start kube-scheduler.service

systemctl status kube-scheduler.service

systemctl enable kube-scheduler.service
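With apiserver, controller-manager and scheduler all running, a quick sanity check from the master (not part of the original steps) should show every component as Healthy:

kubectl get componentstatuses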

8. Role bindings

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

kubectl create clusterrolebinding kubelet-nodes --clusterrole=system:node --group=system:nodes

kubectl create clusterrolebinding system:anonymous --clusterrole=system:node-bootstrapper --user=system:anonymous

 

9. Pod network allocation

This step writes the pod network range into etcd for flannel and only needs to be run on one etcd node. Replace ${pod-ip-range} with the planned pod range (172.20.0.0/16 here), ${hostname} with the host name, and ${host-ip} with the IP of that etcd node.

 

etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/${hostname}.pem --key-file=/opt/kubernetes/ssl/${hostname}-key.pem --endpoints="https://${host-ip}:2379" set /coreos.com/network/config '{"Network":"${pod-ip-range}","Backend":{"Type":"vxlan","VNI":1,"DirectRouting":true}}'

To check the result, use the following command:

 

etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/${hostname}.pem --key-file=/opt/kubernetes/ssl/${hostname}-key.pem --endpoints="https://${host-ip}:2379" get /coreos.com/network/config

V. Deploy the Nodes

A node depends on several certificate files to run properly: the kubeconfig files, ca.pem, and that node's server certificate.

Before starting, make sure ca.pem and the node's server certificate (referenced below as server.pem / server-key.pem) are under /opt/kubernetes/ssl on the node, and that bootstrap.kubeconfig and kube-proxy.kubeconfig are under /opt/kubernetes/cfg (the paths used by the kubelet and kube-proxy configurations below). A separate server certificate must be generated for each node.
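A sketch of distributing these files from the master (10.10.88.19 stands for the target node; the node's certificate is assumed to have been generated and renamed to server.pem / server-key.pem):

scp /opt/kubernetes/ssl/ca.pem /opt/kubernetes/ssl/server.pem /opt/kubernetes/ssl/server-key.pem root@10.10.88.19:/opt/kubernetes/ssl/
scp /opt/kubernetes/ssl/config/bootstrap.kubeconfig /opt/kubernetes/ssl/config/kube-proxy.kubeconfig root@10.10.88.19:/opt/kubernetes/cfg/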

1. Install kubelet

Copy the kubelet binary from the server package downloaded on the master earlier into /opt/kubernetes/bin/ on the node, for example as sketched below.
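A sketch of that copy, assuming the package was extracted under /usr/local/src/kubernetes/server/bin on the master and the node is 10.10.88.19 (kube-proxy in section 2 below can be copied the same way):

scp /usr/local/src/kubernetes/server/bin/kubelet root@10.10.88.19:/opt/kubernetes/bin/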

chmod +x /opt/kubernetes/bin/kubelet

1.1. Create the kubelet configuration file

vim /opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \

--v=4 \

--address=10.10.88.19 \

--hostname-override=10.10.88.19 \

--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \

--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \

--cert-dir=/opt/kubernetes/ssl \

--allow-privileged=true \

--cluster-domain=test151 \

--fail-swap-on=false \

--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

 

1.2. Create the kubelet systemd unit

vim /usr/lib/systemd/system/kubelet.service

[Unit]

Description=Kubernetes Kubelet

After=docker.service

Requires=docker.service

 

[Service]

EnvironmentFile=-/opt/kubernetes/cfg/kubelet

ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS

Restart=on-failure

KillMode=process

 

[Install]

WantedBy=multi-user.target

 

 

systemctl daemon-reload

systemctl start kubelet.service

systemctl enable kubelet.service

 

2. Install kube-proxy

Copy the kube-proxy binary from the same package on the master into /opt/kubernetes/bin/ on the node.

chmod +x /opt/kubernetes/bin/kube-proxy

2.1. Create the kube-proxy configuration file

vim /opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \

--v=4 \

--hostname-override=10.10.88.19 \

--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

 

2.2. Create the kube-proxy systemd unit

vim /usr/lib/systemd/system/kube-proxy.service

[Unit]

Description=Kubernetes Proxy

After=network.target

 

[Service]

EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy

ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS

Restart=on-failure

 

[Install]

WantedBy=multi-user.target

 

systemctl daemon-reload

systemctl start kube-proxy.service

systemctl enable kube-proxy.service

 

3. Deploy flanneld

wget https://github.com/coreos/flannel/releases/download/v0.9.1/flannel-v0.9.1-linux-amd64.tar.gz
tar zxf flannel-v0.9.1-linux-amd64.tar.gz    # extracting yields two files: flanneld and mk-docker-opts.sh
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

 

3.1. Create the flanneld configuration file

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://10.10.88.17:2379 \
-etcd-cafile=/opt/kubernetes/ssl/ca.pem \
-etcd-certfile=/opt/kubernetes/ssl/server.pem \
-etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"
EOF

3.2. Create the flanneld systemd unit

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

 

 

 

 

Then adjust the docker unit so that dockerd picks up the bridge options that mk-docker-opts.sh writes to /run/flannel/subnet.env (DOCKER_NETWORK_OPTIONS):

vim /usr/lib/systemd/system/docker.service

[Unit]

Description=Docker Application Container Engine

Documentation=https://docs.docker.com

After=network-online.target firewalld.service

Wants=network-online.target

 

[Service]

Type=notify

EnvironmentFile=/run/flannel/subnet.env

ExecStart=/usr/bin/dockerd  $DOCKER_NETWORK_OPTIONS

ExecReload=/bin/kill -s HUP $MAINPID

LimitNOFILE=infinity

LimitNPROC=infinity

LimitCORE=infinity

TimeoutStartSec=0

Delegate=yes

KillMode=process

Restart=on-failure

StartLimitBurst=3

StartLimitInterval=60s

 

[Install]

WantedBy=multi-user.target

 

 

 

systemctl daemon-reload

systemctl start flanneld.service

systemctl enable flanneld.service

systemctl restart docker

 

VI. Testing and Verification

1. Service checks

Check that every service on the master and the nodes is in the running state, that etcd has created its data directory under /opt/kubernetes/etcd, and that on each node the docker0 bridge and the flannel interface sit in the same subnet.
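A few commands that cover those checks (a sketch; the first on the master, the rest on each node):

systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler   # on the master
systemctl status kubelet kube-proxy flanneld docker                           # on each node
ip addr show docker0; ip addr show flannel.1                                  # both should be in the same flannel subnet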

2. Approve the nodes and join them to the cluster

When approving, replace ${node-id} with the first field (NAME) shown by kubectl get csr.

kubectl get csr

kubectl certificate approve ${node-id}

3. View the nodes

Run the following on the master to list the nodes and their status; a node showing Ready is working normally.

kubectl get node

At this point the k8s cluster is basically up and running; a follow-up post will cover monitoring.

 
