System Environment

| Node | IP | CentOS | Kernel | CPU | Memory |
| --- | --- | --- | --- | --- | --- |
| node1 | 192.168.159.4 | CentOS Linux release 7.4.1708 (Core) | 3.10.0-693.el7.x86_64 | Intel® Core™ i5-7500 CPU @ 3.40GHz * 1 | 2G |
| node2 | 192.168.159.5 | CentOS Linux release 7.4.1708 (Core) | 3.10.0-693.el7.x86_64 | Intel® Core™ i5-7500 CPU @ 3.40GHz * 1 | 2G |

Node Software Environment

| Node | IP | etcd | docker | flannel | kubelet | kube-proxy |
| --- | --- | --- | --- | --- | --- | --- |
| node1 | 192.168.159.4 | 3.3.13 | 19.03.1 | 0.11.0 | 1.15.2 | 1.15.2 |
| node2 | 192.168.159.5 | 3.3.13 | 19.03.1 | 0.11.0 | 1.15.2 | 1.15.2 |

kubelet Service: Basic Installation

kubelet documentation

kubelet Dynamic Configuration File

The file specified by the kubelet --config flag defines the kubelet's initialization parameters.

cat << EOF > kubelet.conf 
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 192.168.159.4
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS: ["10.0.0.2"]
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF
  • Parameter reference

    | Name | Description |
    | --- | --- |
    | kind | resource type |
    | apiVersion | API version; the kind must correspond to the API version |
    | address | IP address the kubelet listens on; 0.0.0.0 listens on all interfaces |
    | port | kubelet service port, default 10250 |
    | readOnlyPort | read-only port with no authentication or authorization, default 10255 |
    | cgroupDriver | container cgroup driver; for docker, check it with docker info \| grep 'Cgroup Driver' |
    | clusterDNS | DNS service address for Pods, related to the --service-cluster-ip-range setting |
    | clusterDomain | cluster domain; if set, the kubelet configures all containers to search this domain in addition to the host's search domains |
    | failSwapOn | if swap is enabled on the host, the kubelet fails to start |
    | authentication | user authentication; anonymous.enabled: true allows anonymous requests (enabled by default) |
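
Since cgroupDriver must match the driver Docker actually uses, a quick pre-flight check can catch a mismatch before the kubelet refuses to start. A minimal sketch, with the expected driver value simulated; on a real node it would come from docker info:

```shell
# Pre-flight check: the cgroupDriver in kubelet.conf must match Docker's driver.
# "want" is simulated here; on a real node use:
#   want=$(docker info 2>/dev/null | awk -F': ' '/Cgroup Driver/{print $2}')
cfg=$(mktemp)
cat << 'EOF' > "$cfg"
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: cgroupfs
EOF
want="cgroupfs"
have=$(awk '/^cgroupDriver:/{print $2}' "$cfg")
if [ "$have" = "$want" ]; then
  echo "cgroup driver OK: $have"
else
  echo "MISMATCH: kubelet=$have docker=$want" >&2
fi
```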

kubelet Service Configuration File

cat > /opt/k8s/node/etc/kubelet.service.conf << EOF
KUBELET_INSECURE_OPTS="--config=/opt/k8s/node/etc/kubelet.conf \
--kubeconfig=/opt/k8s/node/etc/bootstrap.kubeconfig \
--hostname-override=192.168.159.4 \
--logtostderr=true \
--log-dir=/opt/k8s/node/log/kubelet \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 \
--v=2"
EOF
  • Parameter reference

    | Name | Description |
    | --- | --- |
    | --config | kubelet initialization configuration file |
    | --kubeconfig | defines how to connect to kube-apiserver |
    | --hostname-override | node name, shown in the NAME column of kubectl get nodes |
    | --pod-infra-container-image | pause container image, the base container that shares the network namespace within a Pod |
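
This file is consumed by systemd via EnvironmentFile, so it must be valid shell-style KEY="value" assignments. A minimal sketch of checking it before restarting the service, with the path and options abbreviated here:

```shell
# Sketch: verify the EnvironmentFile parses as shell and defines the variable.
conf=$(mktemp)
cat << 'EOF' > "$conf"
KUBELET_INSECURE_OPTS="--config=/opt/k8s/node/etc/kubelet.conf --v=2"
EOF
sh -n "$conf" && . "$conf"
case "$KUBELET_INSECURE_OPTS" in
  *--config=*) echo "options loaded" ;;
  *)           echo "options missing" >&2 ;;
esac
```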

kubelet Service File

cat > /usr/lib/systemd/system/kubelet.service << 'EOF'
[Unit]
Description=kubelet server
Documentation=https://kubernetes.io/docs/
After=docker.service
Requires=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/k8s/node/etc/kubelet.service.conf
ExecStart=/usr/local/bin/kubelet $KUBELET_INSECURE_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Starting the kubelet Service

Start:
systemctl daemon-reload && systemctl start kubelet
Verify:
[root@master etc]# kubectl get nodes
NAME            STATUS   ROLES    AGE    VERSION
192.168.159.4   Ready    <none>   3d5h   v1.15.2

[root@master pki]# kubectl get ev
LAST SEEN   TYPE     REASON                    OBJECT               MESSAGE
111s        Normal   Starting                  node/192.168.159.4   Starting kubelet.
110s        Normal   NodeHasSufficientMemory   node/192.168.159.4   Node 192.168.159.4 status is now: NodeHasSufficientMemory
110s        Normal   NodeHasNoDiskPressure     node/192.168.159.4   Node 192.168.159.4 status is now: NodeHasNoDiskPressure
110s        Normal   NodeHasSufficientPID      node/192.168.159.4   Node 192.168.159.4 status is now: NodeHasSufficientPID
110s        Normal   NodeAllocatableEnforced   node/192.168.159.4   Updated Node Allocatable limit across pods
109s        Normal   RegisteredNode            node/192.168.159.4   Node 192.168.159.4 event: Registered Node 192.168.159.4 in Controller
100s        Normal   NodeReady                 node/192.168.159.4   Node 192.168.159.4 status is now: NodeReady
Resolving journalctl -xe Log Errors

Error 1: iptables rule could not be created

WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C FORWARD -j DOCKER-ISOLATION' failed: iptables v1.4.21: Couldn't load target `DOCKER-ISOLATION':No such file or directory

Solution:

ip link delete docker0 && systemctl restart docker

Error 2: kube-apiserver duplicate metrics collector registration

Sep 03 13:16:17 master kube-apiserver[2875]: E0903 13:16:17.741756    2875 prometheus.go:112] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Sep 03 13:16:17 master kube-apiserver[2875]: E0903 13:16:17.741765    2875 prometheus.go:126] failed to register unfinished metric admission_quota_controller: duplicate metrics collector registration attempted
Sep 03 13:16:17 master kube-apiserver[2875]: E0903 13:16:17.741774    2875 prometheus.go:152] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
Sep 03 13:16:17 master kube-apiserver[2875]: E0903 13:16:17.741783    2875 prometheus.go:164] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
Sep 03 13:16:17 master kube-apiserver[2875]: E0903 13:16:17.741832    2875 prometheus.go:176] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
Sep 03 13:16:17 master kube-apiserver[2875]: E0903 13:16:17.741878    2875 prometheus.go:188] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
Sep 03 13:16:17 master kube-apiserver[2875]: E0903 13:16:17.741892    2875 prometheus.go:203] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
Sep 03 13:16:17 master kube-apiserver[2875]: E0903 13:16:17.741902    2875 prometheus.go:216] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
Sep 03 13:16:19 master systemd[1]: Started k8s apiserver.
Sep 03 13:16:19 master kube-apiserver[2875]: E0903 13:16:19.410432    2875 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.159.3, ResourceVersion: 0, AdditionalErrorMsg:

Solution:
Not yet resolved, but it does not affect use of the k8s cluster; it is expected to be fixed in the next Kubernetes release. See issues#76956.

kube-proxy Service: Basic Installation

kube-proxy documentation

kube-proxy Dynamic Configuration File

A configuration template can be generated by starting kube-proxy once with --write-config-to=kube-proxy.config.

cat << EOF > kube-proxy.config 
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /opt/k8s/node/etc/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.0.0.0/24
configSyncPeriod: 15m0s
conntrack:
  maxPerCore: 32768
  min: 0
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: 192.168.159.4
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "iptables"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
resourceContainer: /kube-proxy
udpIdleTimeout: 250ms
winkernel:
  enableDSR: false
  networkName: ""
  sourceVip: ""
EOF
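
A couple of the fields above are worth sanity-checking before pointing kube-proxy at the file, since a wrong mode or an empty clusterCIDR fails only at runtime. A minimal sketch, with the config reduced to the fields being checked:

```shell
# Sketch: verify proxy mode and clusterCIDR in a kube-proxy config file.
cfg=$(mktemp)
cat << 'EOF' > "$cfg"
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: 10.0.0.0/24
mode: "iptables"
EOF
mode=$(awk -F'"' '/^mode:/{print $2}' "$cfg")
cidr=$(awk '/^clusterCIDR:/{print $2}' "$cfg")
echo "mode=$mode clusterCIDR=$cidr"
```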

kube-proxy Service Configuration File

cat << EOF > /opt/k8s/node/etc/kube-proxy.service.conf
KUBE_PROXY_OPTS="--logtostderr=true \
--v=0 \
--kubeconfig=/opt/k8s/node/etc/kube-proxy.kubeconfig \
--config=/opt/k8s/node/etc/kube-proxy.config"
EOF
  • Parameter reference

    | Name | Description |
    | --- | --- |
    | --config | kube-proxy dynamic configuration file defining its startup parameters; a template can be exported first with --write-config-to=[FileName] |
    | --kubeconfig | defines how to connect to kube-apiserver |
    | --hostname-override | node name, shown in the NAME column of kubectl get nodes |
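
An easy mistake when hand-editing these EnvironmentFile-style configs is an unbalanced quote at the end of the options string, which systemd rejects. A minimal sketch of catching that with the shell's own syntax checker (contents abbreviated here):

```shell
# Sketch: sh -n flags an unterminated quote in an env-style conf file.
good=$(mktemp); bad=$(mktemp)
printf 'KUBE_PROXY_OPTS="--logtostderr=true --v=0"\n' > "$good"
printf 'KUBE_PROXY_OPTS="--logtostderr=true --v=0\n'  > "$bad"
sh -n "$good" 2>/dev/null && echo "good conf: syntax OK"
sh -n "$bad"  2>/dev/null || echo "bad conf: unterminated quote detected"
```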

kube-proxy Service File

Note: the service Type here must not be set to notify; kube-proxy does not send a systemd readiness notification, so the service status would not be displayed correctly.

cat << 'EOF' > /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=kube-proxy server
Documentation=https://kubernetes.io/docs/
After=network.target
Requires=network.target

[Service]
Type=simple
EnvironmentFile=/opt/k8s/node/etc/kube-proxy.service.conf
ExecStart=/usr/local/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

Firewall Settings

firewall-cmd --zone=public --add-port=10256/tcp --permanent
firewall-cmd --reload

Starting the kube-proxy Service

systemctl daemon-reload && systemctl start kube-proxy
Verify:
[root@master etc]# kubectl get ev
LAST SEEN   TYPE     REASON                    OBJECT               MESSAGE
56m         Normal   Starting                  node/192.168.159.4   Starting kubelet.
56m         Normal   NodeHasSufficientMemory   node/192.168.159.4   Node 192.168.159.4 status is now: NodeHasSufficientMemory
56m         Normal   NodeHasNoDiskPressure     node/192.168.159.4   Node 192.168.159.4 status is now: NodeHasNoDiskPressure
56m         Normal   NodeHasSufficientPID      node/192.168.159.4   Node 192.168.159.4 status is now: NodeHasSufficientPID
56m         Normal   NodeAllocatableEnforced   node/192.168.159.4   Updated Node Allocatable limit across pods
56m         Normal   RegisteredNode            node/192.168.159.4   Node 192.168.159.4 event: Registered Node 192.168.159.4 in Controller
56m         Normal   NodeReady                 node/192.168.159.4   Node 192.168.159.4 status is now: NodeReady
38m         Normal   Starting                  node/192.168.159.4   Starting kubelet.
38m         Normal   NodeAllocatableEnforced   node/192.168.159.4   Updated Node Allocatable limit across pods
38m         Normal   NodeHasSufficientMemory   node/192.168.159.4   Node 192.168.159.4 status is now: NodeHasSufficientMemory
38m         Normal   NodeHasNoDiskPressure     node/192.168.159.4   Node 192.168.159.4 status is now: NodeHasNoDiskPressure
38m         Normal   NodeHasSufficientPID      node/192.168.159.4   Node 192.168.159.4 status is now: NodeHasSufficientPID
26m         Normal   Starting                  node/192.168.159.4   Starting kubelet.
26m         Normal   NodeAllocatableEnforced   node/192.168.159.4   Updated Node Allocatable limit across pods
26m         Normal   NodeHasSufficientMemory   node/192.168.159.4   Node 192.168.159.4 status is now: NodeHasSufficientMemory
26m         Normal   NodeHasNoDiskPressure     node/192.168.159.4   Node 192.168.159.4 status is now: NodeHasNoDiskPressure
26m         Normal   NodeHasSufficientPID      node/192.168.159.4   Node 192.168.159.4 status is now: NodeHasSufficientPID

Running an nginx Example

Environment Before Running

[root@master etc]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.159.4   Ready    <none>   71m   v1.15.2
[root@master etc]# kubectl get rc
No resources found.
[root@master etc]# kubectl get pods
No resources found.

Create a Deployment

Run on the master:

kubectl run --generator=run-pod/v1 nginx-test --image=nginx --replicas=1 --port=80

Result:

[root@master etc]# kubectl get rs
NAME                    DESIRED   CURRENT   READY   AGE
nginx-test-86bdc44976   1         0         0       9m4s

Error message:

Error from server (ServerTimeout): No API token found for service account "default", retry after the token is automatically created and added to the service account

Cause

issues#11355

When the kube-apiserver admission plugins include ServiceAccount, kube-apiserver must be started with --service-account-key-file,
and kube-controller-manager must be started with --service-account-private-key-file.

Solution

# Generate the service account key used to sign API tokens
openssl genrsa -out /opt/k8s/master/pki/serviceaccount.key 2048

# Update the kube-apiserver startup parameters
cat << EOF > /opt/k8s/master/etc/kube-apiserver.conf
#[Options]
KUBE_APISERVER_INSECURE_OPTS="--etcd-servers=https://192.168.159.3:2379,https://192.168.159.4:2379 \
--etcd-cafile=/opt/etcd/pki/ca.pem \
--etcd-certfile=/opt/etcd/pki/etcdctl.pem \
--etcd-keyfile=/opt/etcd/pki/etcdctl-key.pem \
--insecure-bind-address=0.0.0.0 \
--insecure-port=8080 \
--service-cluster-ip-range=10.0.0.0/24 \
--service-node-port-range=30000-50000 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--logtostderr=true \
--service-account-key-file=/opt/k8s/master/pki/serviceaccount.key \
--log-dir=/opt/k8s/master/log/kube-apiserver \
--v=2"
EOF
# Restart kube-apiserver
systemctl daemon-reload && systemctl restart kube-apiserver

# Update the kube-controller-manager startup parameters
cat << EOF > /opt/k8s/master/etc/kube-controller-manager.conf 
#[Options]
KUBE_CONTROLLER_MANAGER_INSECURE_OPTS="--master=192.168.159.3:8080 \
--leader-elect=true \
--address=0.0.0.0 \
--port=10252 \
--service-cluster-ip-range=10.0.0.0/24 \
--service-account-private-key-file=/opt/k8s/master/pki/serviceaccount.key \
--logtostderr=true \
--log-dir=/opt/k8s/master/log/kube-controller-manager \
--v=2"
EOF
# Restart kube-controller-manager
systemctl daemon-reload && systemctl restart kube-controller-manager
# Delete the existing rs; a new one is started automatically
kubectl delete rs nginx-test-86bdc44976
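
The key pair is what links the two components: kube-apiserver verifies tokens against the key named by --service-account-key-file, while kube-controller-manager signs them with --service-account-private-key-file, so both flags must point at the same file. A minimal sketch of generating and validating such a key (written to a temp path here instead of /opt/k8s/master/pki):

```shell
# Sketch: create and validate a 2048-bit RSA key for service account tokens.
key=$(mktemp)
openssl genrsa -out "$key" 2048 2>/dev/null
# Prints "RSA key ok" when the key is structurally valid
openssl rsa -in "$key" -check -noout
```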

Check the Status

# The pod is still in the creation phase
[root@master etc]# kubectl get rs
NAME                    DESIRED   CURRENT   READY   AGE
nginx-test-86bdc44976   1         1         0       3s
[root@master etc]# kubectl get pods
NAME                          READY   STATUS              RESTARTS   AGE
nginx-test-86bdc44976-4mpqp   0/1     ContainerCreating   0          23s

# Check again after a short wait
[root@master etc]# kubectl get rs
NAME                    DESIRED   CURRENT   READY   AGE
nginx-test-86bdc44976   1         1         1       92s
[root@master etc]# kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
nginx-test-86bdc44976-4mpqp   1/1     Running   0          2m38s

Check the containers docker is running; at this point two containers are up, nginx and pause:

[root@node1 etc]# docker ps -a
CONTAINER ID        IMAGE                                                                 COMMAND                  CREATED             STATUS              PORTS               NAMES
9701588f688c        nginx                                                                 "nginx -g 'daemon of…"   7 minutes ago       Up 7 minutes                            k8s_nginx-test_nginx-test-86bdc44976-4mpqp_default_f9ad575a-083f-495d-bafe-3c0350825976_0
29ce807cf3eb        registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0   "/pause"                 8 minutes ago       Up 8 minutes                            k8s_POD_nginx-test-86bdc44976-4mpqp_default_f9ad575a-083f-495d-bafe-3c0350825976_0

Expose nginx as an External Service

[root@master etc]# kubectl expose rs nginx-test-86bdc44976 --port=80 --target-port=80 --type=LoadBalancer
service/nginx-test-86bdc44976 exposed

[root@master etc]# kubectl get svc
NAME                    TYPE           CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE
kubernetes              ClusterIP      10.0.0.1     <none>        443/TCP        18h
nginx-test-86bdc44976   LoadBalancer   10.0.0.249   <pending>     80:49869/TCP   20m

View the service details:
[root@master etc]# kubectl describe svc nginx-test-86bdc44976
Name:                     nginx-test-86bdc44976
Namespace:                default
Labels:                   pod-template-hash=86bdc44976
                          run=nginx-test
Annotations:              <none>
Selector:                 pod-template-hash=86bdc44976,run=nginx-test
Type:                     LoadBalancer
IP:                       10.0.0.249
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  49869/TCP
Endpoints:                172.112.0.2:80
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

Check which node the pod is running on:
[root@master etc]# kubectl get pod -o wide
NAME                          READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
nginx-test-86bdc44976-4mpqp   1/1     Running   1          16h   172.112.0.2   192.168.159.4   <none>           <none>

Open the corresponding NodePort on node1:

firewall-cmd --zone=public --add-port=49869/tcp --permanent
firewall-cmd --reload

Access nginx from a browser to confirm the service is reachable.

At this point, the basic k8s cluster is up and running. Next we will look at cluster token and TLS authentication, as well as rolling upgrades of k8s.
