1 Download the binaries

Since versions differ I won't provide a link here. Note that the k8s site may be blocked, so you may need a proxy to download. After downloading, extract the binaries to
/opt/kubernetes/bin
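
A hedged example of what the download and install could look like (the URL pattern and version are assumptions; v1.10.4 matches the node version shown later in this guide):

# may require a proxy; dl.k8s.io redirects to the official release storage
wget https://dl.k8s.io/v1.10.4/kubernetes-server-linux-amd64.tar.gz
tar zxf kubernetes-server-linux-amd64.tar.gz
mkdir -p /opt/kubernetes/bin
cp kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubelet,kube-proxy,kubectl} /opt/kubernetes/bin/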

2 Create the client token file used by kube-apiserver

mkdir /opt/kubernetes/token
# Generate a random token that kubelet will use to log in during TLS bootstrapping

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')    
cat > /opt/kubernetes/token/bootstrap-token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
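
The resulting file is a CSV with the format token,user,uid,"group". A quick sanity check (the token value below is only an illustration):

cat /opt/kubernetes/token/bootstrap-token.csv
# 0fb61c46f8991b718eb38d27b605b008,kubelet-bootstrap,10001,"system:kubelet-bootstrap"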

3 Create the kube-apiserver systemd unit file

vi /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/conf/apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

mkdir -p /var/log/kubernetes/apiserver

4 Create the kube-apiserver config file and start the service

vi /opt/kubernetes/conf/apiserver.conf

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://hadoop1:2379,https://hadoop2:2379,https://hadoop3:2379 \
--bind-address=10.20.24.15 \
--secure-port=6443 \
--advertise-address=10.20.24.15 \
--allow-privileged=true \
--service-cluster-ip-range=10.1.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/token/bootstrap-token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/kubernetes.pem \
--tls-private-key-file=/opt/kubernetes/ssl/kubernetes-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/kubernetes.pem \
--etcd-keyfile=/opt/kubernetes/ssl/kubernetes-key.pem"

# --logtostderr=true sends logs to stderr (collected by journald/syslog); to write them to a directory via --log-dir, set it to false
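
For example, if you do want file logging into the directory created above, the two flags would change as follows (a sketch; this guide's apiserver config keeps logtostderr=true):

--logtostderr=false \
--log-dir=/var/log/kubernetes/apiserver \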

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
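
A quick smoke test once the service is running; the insecure port 127.0.0.1:8080 is on by default in this k8s version (it is also what kube-controller-manager's --master flag points at below):

curl http://127.0.0.1:8080/healthz
# should print: ok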

5 Create the kube-controller-manager systemd unit file

vi /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/conf/controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

mkdir /var/log/kubernetes/controller-manager

6 Create the kube-controller-manager config file and start the service

vi /opt/kubernetes/conf/controller-manager.conf

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--bind-address=127.0.0.1 \
--service-cluster-ip-range=10.1.0.0/24 \
--cluster-cidr=10.2.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--log-dir=/var/log/kubernetes/controller-manager"

Note: --service-cluster-ip-range specifies the CIDR for cluster Services; this network must not be routable between the nodes, and the value must match the same flag on kube-apiserver. --cluster-cidr specifies the pod network CIDR.
During deployment the issue below appeared; it was caused at cluster startup. After running kubectl delete svc kubernetes the Service was regenerated and everything returned to normal.
Looking at the svc (output below), you can see 443 forwards to the apiserver's 6443.

in Kubernetes, all three of the following arguments must be equal to, or contain, the Calico IP pool CIDRs:

  • kube-apiserver: --pod-network-cidr
  • kube-proxy: --cluster-cidr
  • kube-controller-manager: --cluster-cidr

[root@hadoop1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   23s
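
Given the note above, a quick grep can confirm the CIDR flags agree across this guide's config files:

grep -e 'cluster-cidr' -e 'service-cluster-ip-range' /opt/kubernetes/conf/*.conf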

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

7 Create the kube-scheduler systemd unit file

vi /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/conf/scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

mkdir /var/log/kubernetes/scheduler

8 Create the kube-scheduler config file and start the service

vi /opt/kubernetes/conf/scheduler.conf

###
# kubernetes scheduler config
#
# default config should be adequate
#
# Add your own!

KUBE_SCHEDULER_OPTS="--logtostderr=false --v=4 \
   --master=127.0.0.1:8080 --leader-elect=true \
   --log-dir=/var/log/kubernetes/scheduler"

--address: listens on 127.0.0.1:10251 (10251 is the default) for http /metrics requests; kube-scheduler does not yet support serving https;
--kubeconfig: path to the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
--leader-elect=true: enables leader election for clustered operation; the node elected leader does the work while the others block.

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl status kube-scheduler
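
Per the --address note above, a quick local health check once the service is up (assuming the default port):

curl http://127.0.0.1:10251/healthz
# should print: ok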

9 Create the kubectl kubeconfig file

Set cluster parameters

kubectl config set-cluster kubernetes --certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true --server=https://k8s-master1:6443

Cluster "kubernetes" set.

Set client authentication parameters

kubectl config set-credentials admin --client-certificate=/opt/kubernetes/ssl/admin.pem \
--embed-certs=true --client-key=/opt/kubernetes/ssl/admin-key.pem

User "admin" set.

Set context parameters

kubectl config set-context kubernetes --cluster=kubernetes --user=admin

Context "kubernetes" created.

Set the default context

kubectl config use-context kubernetes

Switched to context "kubernetes".
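
A quick check that the new context actually reaches the apiserver:

kubectl cluster-info
# Kubernetes master is running at https://k8s-master1:6443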

Verify component health

#kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

Create the role binding

kubectl create clusterrolebinding kube-system-cluster-admin --clusterrole=cluster-admin --user=system:serviceaccount:kube-system:default

clusterrolebinding.rbac.authorization.k8s.io "kube-system-cluster-admin" created

Note: before kubernetes-1.11.0, use the following command instead

#kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap

clusterrolebinding.rbac.authorization.k8s.io "kubelet-bootstrap" created

10 Create the kubelet bootstrapping kubeconfig file

# Create the TLS Bootstrapping Token (this step was already completed when kube-apiserver was set up)

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF

# Create the kubelet bootstrapping kubeconfig

export KUBE_APISERVER="https://k8s-master1:6443"

Set cluster parameters

kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

Set client authentication parameters

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

Set context parameters

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

Set the default context

kubectl config use-context default \
--kubeconfig=bootstrap.kubeconfig
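
To sanity-check the generated file, view it; it should show the kubernetes cluster, the kubelet-bootstrap user with its token, and the default context:

kubectl config view --kubeconfig=bootstrap.kubeconfig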

11 Create the kube-proxy kubeconfig file

My current understanding is that the kube-proxy certificate is used to generate a kubeconfig for cluster authentication, so the startup flags no longer need to point at the pem files directly.

Set cluster parameters

kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=https://k8s-master1:6443 \
--kubeconfig=kube-proxy.kubeconfig

Cluster "kubernetes" set.

Set client authentication parameters

kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
--client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

User "kube-proxy" set.

Set context parameters

kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

Context "default" created.

Set the default context

kubectl config use-context default \
--kubeconfig=kube-proxy.kubeconfig

Switched to context "default".

Distribute the kube-proxy.kubeconfig file to the worker nodes, for example as below.
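
For example (assuming hadoop2 and hadoop3 are the worker nodes, matching the host names used elsewhere in this guide), distributing bootstrap.kubeconfig at the same time since the kubelet config below expects it under /opt/kubernetes/conf:

for node in hadoop2 hadoop3; do
  scp bootstrap.kubeconfig kube-proxy.kubeconfig root@${node}:/opt/kubernetes/conf/
done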

12 Deploy kubelet and kube-proxy on the worker nodes

Create the kubelet working directory (do the same on every node)

mkdir /var/lib/kubelet
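
The kubelet config below also writes logs to a directory that does not exist yet on a fresh node, so create it too:

mkdir -p /var/log/kubernetes/kubelet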

1 Create the kubelet config file

vi /opt/kubernetes/conf/kubelet.conf

KUBELET_OPTS="--logtostderr=false \
--v=4 \
--kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/conf/bootstrap.kubeconfig \
--root-dir=/var/lib/kubelet \
--config=/opt/kubernetes/conf/kubelet.config \
--log-dir=/var/log/kubernetes/kubelet \
--cert-dir=/opt/kubernetes/ssl \
--allow-privileged=true \
--network-plugin=cni \
--cni-conf-dir=/etc/cni/net.d \
--cni-bin-dir=/opt/cni/bin \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

--root-dir is the kubelet working directory; emptyDir volumes are also created under it. The two --cni-* flags must be removed the first time the node joins the cluster, otherwise kubelet reports that the directories do not exist; add them back once calico is deployed so the CNI plugin is used, otherwise pods get IPs from the default docker0 bridge.
PS

  1. Set KUBELET_ADDRESS to each node's own IP and KUBELET_HOSTNAME to each node's hostname. KUBELET_POD_INFRA_CONTAINER can point at a private registry, e.g. KUBELET_POD_INFRA_CONTAINER="--pod_infra_container_image={private-registry-ip}:80/k8s/pause-amd64:v3.0". The path in the cni-bin-dir value is created automatically when the calico network is set up.
  2. Without a CNI config, docker still uses the default docker0 bridge.
  3. kubelet certificates do not need to be generated by hand; they are requested automatically via /opt/kubernetes/conf/bootstrap.kubeconfig.
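
The file referenced by --config above (/opt/kubernetes/conf/kubelet.config) is not shown in the original post. A minimal sketch for this k8s version follows; every value is a placeholder to adapt per node (address is the node's own IP, clusterDNS must be an IP inside --service-cluster-ip-range):

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 10.20.24.16            # placeholder: this node's IP
port: 10250
cgroupDriver: cgroupfs          # must match docker's cgroup driver
clusterDNS:
- 10.1.0.2                      # placeholder: a DNS Service IP inside 10.1.0.0/24
clusterDomain: cluster.local
failSwapOn: false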

2 Create the kubelet systemd unit file and start the service

vi /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service


[Service]
EnvironmentFile=-/opt/kubernetes/conf/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process


[Install]
WantedBy=multi-user.target

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet

3 View the CSR certificate requests (run on k8s-master)

kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-0Vg_d__0vYzmrMn7o2S7jsek4xuQJ2v_YuCKwWN9n7M   4h    kubelet-bootstrap   Pending

4 Approve the kubelet TLS certificate requests (run on k8s-master; replace xxx with a CSR name from the previous step)

kubectl certificate approve xxx

Approve in bulk:
#kubectl get csr | grep 'Pending' | awk 'NR>0{print $1}' | xargs kubectl certificate approve

certificatesigningrequest.certificates.k8s.io "node-csr-0Vg_d__0vYzmrMn7o2S7jsek4xuQJ2v_YuCKwWN9n7M" approved

5 Check node status; if it shows Ready, everything is working (run on k8s-master)

#kubectl get node

NAME          STATUS   ROLES    AGE   VERSION
work-node01   Ready    <none>   11h   v1.10.4
work-node02   Ready    <none>   11h   v1.10.4

6 Create the kube-proxy systemd unit file

vi /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/opt/kubernetes/conf/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

7 Create the kube-proxy config file and start the service

vi /opt/kubernetes/conf/kube-proxy.conf

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--cluster-cidr=10.2.0.0/16 \
--proxy-mode=ipvs  \
--masquerade-all=true \
--kubeconfig=/opt/kubernetes/conf/kube-proxy.kubeconfig"

Note the two flags --proxy-mode=ipvs and --masquerade-all=true: they switch kube-proxy from the default iptables mode to IPVS and SNAT all Service traffic.
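
IPVS mode only works if the ip_vs kernel modules are loaded and ipvsadm is installed; otherwise kube-proxy falls back to iptables. A sketch for CentOS-style nodes (on newer kernels nf_conntrack_ipv4 is named nf_conntrack):

yum install -y ipvsadm
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4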

Note: set the bind-address value to each node's own IP and the hostname-override value to each node's hostname.

Distribute kube-proxy.conf to each node:

scp /opt/kubernetes/conf/kube-proxy.conf root@hadoop3:/opt/kubernetes/conf/

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy

One planning note: kube-proxy has two main modes, ipvs and iptables. I will write up the detailed flags later.

8 Check the LVS state (at first I could not see the TCP 10.1.0.1:443 rr persistent 10800 entry)

#ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr persistent 10800
  -> 10.20.24.15:6443             Masq    1      0          0

At this point the k8s cluster itself is up and docker pods can be deployed, but the network add-on is not installed yet, so pod-to-pod networking will still have problems. See the next chapter for network add-on deployment.
