Nice weather today. I wanted to run a pod on a different node. The approach: put a label on that node, then configure the pod to select that label.
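(Jumping ahead a bit: once the node is visible, the label-and-select step looks roughly like this. The label key/value `disktype=ssd` is a made-up example, not something from my cluster.)

```shell
# Put an example label on the target node
kubectl label node 192.168.122.234 disktype=ssd

# Then, in the pod (or deployment) spec, add a matching nodeSelector:
#   spec:
#     nodeSelector:
#       disktype: ssd

# Confirm which node the pod landed on
kubectl get pod -o wide
```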

But I found that kubectl get node didn't list that node at all.

So first, let's get the node registered. A quick search online showed this is related to the kubelet configuration. The fix is below:

1: Create the kubelet service unit file (this file does not exist yet and must be created):

File location: /etc/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBELET_API_SERVER \
        $KUBELET_ADDRESS \
        $KUBELET_PORT \
        $KUBELET_HOSTNAME \
        $KUBE_ALLOW_PRIV \
        $KUBELET_POD_INFRA_CONTAINER \
        $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
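This unit also sources the shared /etc/kubernetes/config. I'm not showing my copy here, but judging from the flags visible on the kube-proxy command line later (--logtostderr=true --v=0 --master=http://192.168.122.168:8080), it should contain roughly the following; the exact values are an assumption on my part:

```ini
# /etc/kubernetes/config -- settings shared by all components on this node
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.122.168:8080"
```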

2: After saving that file, edit the next one:

Note: create the /var/lib/kubelet directory first, otherwise kubelet will fail to start later with:
Failed at step CHDIR spawning /usr/bin/kubelet: No such file or directory

In the kubelet config file /etc/kubernetes/kubelet, change the IP address to the IP of each node.

Note: that IP is exactly what I had failed to set to the node's IP originally, which is why the node never showed up.

Change the file to the following:

[root@k8s-node system]# cat /etc/kubernetes/kubelet
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=192.168.122.234"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://192.168.122.168:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""
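For reference, with the unit file and the environment files above, the command line that systemd ends up assembling should look roughly like this (wrapped for readability; the --logtostderr, --v, and --allow-privileged values are assumed from the shared config, not copied from my machine):

```shell
/usr/bin/kubelet --logtostderr=true --v=0 \
    --api-servers=http://192.168.122.168:8080 \
    --address=0.0.0.0 --port=10250 \
    --hostname-override=192.168.122.234 \
    --allow-privileged=false \
    --pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest
```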

3: Restart the kubelet service:

[root@k8s-node system]# systemctl daemon-reload

[root@k8s-node system]# systemctl enable kubelet
Failed to execute operation: File exists
[root@k8s-node system]# systemctl restart kubelet

(The "File exists" error from systemctl enable usually just means the service is already enabled, so it can be ignored.)

Now check the nodes from the master machine:

[root@k8s-master kubernetes]# kubectl get node
NAME              STATUS    AGE
192.168.122.168   Ready     3d
192.168.122.234   Ready     17s

Success!

Now check how the pods are running:

[root@k8s-master kubernetes]# kubectl get pod -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-4144028371-l6d0m   1/1       Unknown   1          18h       172.17.64.2   192.168.122.168
mysql-4144028371-qr1dg   1/1       Running   0          11m       172.17.64.2   192.168.122.234
myweb-3659005716-p6lwf   1/1       Unknown   1          17h       172.17.64.3   192.168.122.168
myweb-3659005716-xmq4d   1/1       Running   0          11m       172.17.64.3   192.168.122.234
[root@k8s-master kubernetes]#

4: To route service traffic, add the kube-proxy service on the node.

Create the kube-proxy service unit file.

File path: /etc/systemd/system/kube-proxy.service

[root@k8s-node system]# cat /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
        $KUBE_LOGTOSTDERR \
        $KUBE_LOG_LEVEL \
        $KUBE_MASTER \
        $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Now restart kube-proxy:

# systemctl daemon-reload
# systemctl enable kube-proxy
# systemctl start kube-proxy
# systemctl status kube-proxy

[root@k8s-node system]# systemctl status kube-proxy
● kube-proxy.service - Kubernetes Kube-Proxy Server
   Loaded: loaded (/etc/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2019-10-22 23:09:31 EDT; 5min ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 21828 (kube-proxy)
    Tasks: 8
   Memory: 51.4M
   CGroup: /system.slice/kube-proxy.service
           └─21828 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://192.168.122.168:8080

Oct 22 23:09:31 k8s-node kube-proxy[21828]: E1022 23:09:31.811947   21828 server.go:421] Can't get Node "k8s-node", assuming iptables proxy, err: nodes "k... not found
Oct 22 23:09:31 k8s-node kube-proxy[21828]: I1022 23:09:31.813840   21828 server.go:215] Using iptables Proxier.
Oct 22 23:09:31 k8s-node kube-proxy[21828]: W1022 23:09:31.815094   21828 server.go:468] Failed to retrieve node info: nodes "k8s-node" not found
Oct 22 23:09:31 k8s-node kube-proxy[21828]: W1022 23:09:31.815185   21828 proxier.go:248] invalid nodeIP, initialize kube-proxy with 127.0.0.1 as nodeIP
Oct 22 23:09:31 k8s-node kube-proxy[21828]: W1022 23:09:31.815192   21828 proxier.go:253] clusterCIDR not specified, unable to distinguish between interna...al traffic
Oct 22 23:09:31 k8s-node kube-proxy[21828]: I1022 23:09:31.815233   21828 server.go:227] Tearing down userspace rules.
Oct 22 23:09:31 k8s-node kube-proxy[21828]: I1022 23:09:31.831103   21828 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
Oct 22 23:09:31 k8s-node kube-proxy[21828]: I1022 23:09:31.832439   21828 conntrack.go:66] Setting conntrack hashsize to 32768
Oct 22 23:09:31 k8s-node kube-proxy[21828]: I1022 23:09:31.836280   21828 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Oct 22 23:09:31 k8s-node kube-proxy[21828]: I1022 23:09:31.836344   21828 conntrack.go:81] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Hint: Some lines were ellipsized, use -l to show in full.

There are still some warnings in the log; I'll keep digging into them.
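About those nodes "k8s-node" not found warnings: my guess is that the node registered with the API server under its IP (because of the kubelet's --hostname-override), while kube-proxy looks itself up by the machine hostname k8s-node. Untested, but giving kube-proxy the same override via /etc/kubernetes/proxy (the file the unit already sources) should clear them:

```ini
# /etc/kubernetes/proxy -- hypothetical fix, untested
KUBE_PROXY_ARGS="--hostname-override=192.168.122.234"
```

followed by another systemctl daemon-reload and systemctl restart kube-proxy.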
