k8s pod has unbound immediate PersistentVolumeClaims
Symptoms:

- The pod stays in `Pending` with the event `pod has unbound immediate PersistentVolumeClaims`.
- `kubectl logs` on the nfs-client-provisioner pod (which provisions PVs on NFS) shows: `selfLink was empty, can't make reference`.
[root@hadoop03 NFS]# kubectl get pod -n nfs-client
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-764f44f754-gww47 1/1 Running 0 38m
test-pod 0/1 Pending 0 11m
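For context, the objects involved are roughly the following. This is a reconstruction from the listings in this article (the 1Mi size, `ReadWriteMany` mode, and `managed-nfs-storage` class all appear in the PVC output further down); the original manifests may differ in detail:

```yaml
# Reconstructed sketch of the PVC and test pod used in this article.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
  namespace: nfs-client
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: nfs-client
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "touch /mnt/SUCCESS && sleep 3600"]
    volumeMounts:
    - name: nfs-pvc
      mountPath: /mnt
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim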
#############################
[root@hadoop03 NFS]# kubectl describe pod test-pod -n nfs-client
Name: test-pod
Namespace: nfs-client
Priority: 0
Node: <none>
Labels: <none>
Annotations: <none>
Status: Pending
IP:
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 14m default-scheduler 0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims.
### test-pod
[root@hadoop03 NFS]# kubectl logs test-pod -n nfs-client
# no output: the pod never started
### NFS provisioner logs
[root@hadoop03 NFS]# kubectl logs nfs-client-provisioner-764f44f754-gww47 -n nfs-client
...
I1118 08:54:01.229576 1 controller.go:987] provision "nfs-client/test-claim" class "managed-nfs-storage": started
E1118 08:54:01.242391 1 controller.go:1004] provision "nfs-client/test-claim" class "managed-nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference
### Fix: edit kube-apiserver.yaml ###
### Kubernetes 1.20 removed the selfLink field (the RemoveSelfLink feature gate now defaults to true), but the old nfs-client-provisioner still relies on it when building the PV's claim reference. Re-enable it by adding - --feature-gates=RemoveSelfLink=false
[root@hadoop03 NFS]# cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.153.103:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.153.103
    ### add this line ###
    - --feature-gates=RemoveSelfLink=false
### Re-apply kube-apiserver.yaml ###
[root@hadoop03 NFS]# kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
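The manifest edit can also be scripted. Below is a minimal sketch (assumes GNU sed), demonstrated on a temporary copy; on a real control-plane node, point `MANIFEST` at `/etc/kubernetes/manifests/kube-apiserver.yaml` and keep a backup first. Note that the kubelet watches the static-pod manifest directory and restarts kube-apiserver on its own once the file changes, so the `kubectl apply` above is not strictly required:

```shell
# Demonstrated on a temp copy; on a real node set
# MANIFEST=/etc/kubernetes/manifests/kube-apiserver.yaml (back it up first).
MANIFEST=$(mktemp)
cat > "$MANIFEST" <<'EOF'
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.153.103
EOF
# Insert the feature gate directly after the kube-apiserver command entry,
# reusing the matched line's indentation (\1).
sed -i 's/^\( *\)- kube-apiserver$/&\n\1- --feature-gates=RemoveSelfLink=false/' "$MANIFEST"
grep -- 'feature-gates' "$MANIFEST"
```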
### Verify the PVC is now Bound ###
[root@hadoop03 NFS]# kubectl get pvc -n nfs-client
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Bound pvc-a8158cfe-5950-4ef5-b90c-0941a8fa082c 1Mi RWX managed-nfs-storage 58m
[root@hadoop03 NFS]# kubectl get pv -n nfs-client
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-a8158cfe-5950-4ef5-b90c-0941a8fa082c 1Mi RWX Delete Bound nfs-client/test-claim managed-nfs-storage 8m2s
### Check the directory on the NFS server ###
[root@hadoop03 data]# pwd
/nfs/data
[root@hadoop03 data]# ll
total 0
drwxrwxrwx 2 root root 20 2021-11-18 17:05:30 nfs-client-test-claim-pvc-a8158cfe-5950-4ef5-b90c-0941a8fa082c
[root@hadoop03 data]#
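The directory name follows the nfs-client-provisioner's `${namespace}-${pvcName}-${pvName}` naming scheme, which is why it matches the PVC and PV seen above:

```shell
# The provisioner creates one directory per volume on the NFS export,
# named ${namespace}-${pvcName}-${pvName}; values below are from this article.
ns=nfs-client
pvc=test-claim
pv=pvc-a8158cfe-5950-4ef5-b90c-0941a8fa082c
echo "${ns}-${pvc}-${pv}"
# → nfs-client-test-claim-pvc-a8158cfe-5950-4ef5-b90c-0941a8fa082c
```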
Delete the kube-apiserver pods so they restart with the new flag:
-bash-4.2# kubectl edit pod kube-apiserver-sealos-k8s-node-05 -n kube-system -o yaml
Edit cancelled, no changes made.
-bash-4.2# kubectl edit pod kube-apiserver-sealos-k8s-node-01 -n kube-system -o yaml
Edit cancelled, no changes made.
-bash-4.2# kubectl edit pod kube-apiserver-sealos-k8s-node-06 -n kube-system -o yaml
Edit cancelled, no changes made.
-bash-4.2# kubectl get po -n kube-system -A -o wide | grep api
default dbapi-debug-node-c8v4k 1/1 Running 0 3d4h 10.244.27.21 glusterfs-node-02 <none> <none>
default dbapi-node-ffrxr 1/1 Running 0 5d5h 10.244.3.176 sealos-k8s-node-02 <none> <none>
kube-system kube-apiserver-sealos-k8s-node-01 1/1 Running 1 (14m ago) 4m22s 172.16.34.125 sealos-k8s-node-01 <none> <none>
kube-system kube-apiserver-sealos-k8s-node-05 1/1 Running 0 2m21s 172.16.34.120 sealos-k8s-node-05 <none> <none>
kube-system kube-apiserver-sealos-k8s-node-06 1/1 Running 0 73s 172.16.34.121 sealos-k8s-node-06 <none> <none>
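One caveat: the `RemoveSelfLink` feature gate was removed entirely in Kubernetes 1.24, so this workaround no longer works there. The durable fix is to switch to the maintained `nfs-subdir-external-provisioner`, which does not depend on selfLink. A sketch of the relevant container spec follows; the image tag and env names are the upstream defaults, while the NFS server address and export path are assumed from this article's setup:

```yaml
# Sketch: container spec for the maintained provisioner; adjust
# NFS_SERVER / NFS_PATH for your environment.
containers:
- name: nfs-client-provisioner
  image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
  env:
  - name: PROVISIONER_NAME
    value: k8s-sigs.io/nfs-subdir-external-provisioner
  - name: NFS_SERVER
    value: 192.168.153.103   # assumed: the NFS server used in this article
  - name: NFS_PATH
    value: /nfs/data
  volumeMounts:
  - name: nfs-client-root
    mountPath: /persistentvolumes
```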