I am trying to implement autoscaling of Pods in my cluster. I tried it first with a "dummy" Deployment and HPA, and it worked without problems. Now, when I integrate it into our "real" microservices, it keeps returning:
Conditions:
  Type           Status  Reason                   Message
  ----           ------  ------                   -------
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: missing request for memory
Events:
  Type     Reason                        Age                   From                       Message
  ----     ------                        ----                  ----                       -------
  Warning  FailedGetResourceMetric       18m (x5 over 19m)     horizontal-pod-autoscaler  unable to get metrics for resource memory: no metrics returned from resource metrics API
  Warning  FailedComputeMetricsReplicas  18m (x5 over 19m)     horizontal-pod-autoscaler  failed to get memory utilization: unable to get metrics for resource memory: no metrics returned from resource metrics API
  Warning  FailedComputeMetricsReplicas  16m (x7 over 18m)     horizontal-pod-autoscaler  failed to get memory utilization: missing request for memory
  Warning  FailedGetResourceMetric       4m38s (x56 over 18m)  horizontal-pod-autoscaler  missing request for memory
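In case it helps with diagnosis: my understanding is that an HPA using a resource metric of type Utilization needs a request for that resource on every container in the Pod, not just the main one, so "missing request for memory" can come from an injected sidecar. A way to check each container's memory request on a running Pod (the pod name and namespace below are placeholders for the real ones) would be something like:

```shell
# Print every container in the pod together with its memory request.
# A container printed with an empty second column has no memory request,
# which would explain the HPA's "missing request for memory" condition.
# <pod-name> and <namespace> are placeholders.
kubectl get pod <pod-name> -n <namespace> \
  -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.resources.requests.memory}{"\n"}{end}'
```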
Here is my HPA:
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: #{Name}
  namespace: #{Namespace}
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: #{Name}
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
And the Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: #{Name}
  namespace: #{Namespace}
spec:
  replicas: 2
  selector:
    matchLabels:
      app: #{Name}
  template:
    metadata:
      annotations:
        linkerd.io/inject: enabled
      labels:
        app: #{Name}
    spec:
      containers:
      - name: #{Name}
        image: #{image}
        resources:
          limits:
            cpu: 500m
            memory: "300Mi"
          requests:
            cpu: 100m
            memory: "200Mi"
        ports:
        - containerPort: 80
          name: #{ContainerPort}
I can see both memory and CPU when I run kubectl top pods, and I can see the requests and limits when I run kubectl describe pod:
Limits:
  cpu:     500m
  memory:  300Mi
Requests:
  cpu:     100m
  memory:  200Mi
The only difference I can think of is that my dummy service didn't have the Linkerd sidecar.
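If the sidecar is the cause, my working theory is that the proxy container injected by linkerd.io/inject: enabled carries no memory request of its own, so the HPA cannot compute Pod-wide memory utilization. Linkerd exposes config.linkerd.io annotations for setting the proxy's resources; a sketch of what I would try on the pod template (the values are placeholders, not recommendations) is:

```yaml
template:
  metadata:
    annotations:
      linkerd.io/inject: enabled
      # Give the injected proxy container explicit requests/limits so that
      # every container in the Pod has a memory request for the HPA to use.
      # The sizes below are placeholder values.
      config.linkerd.io/proxy-cpu-request: 100m
      config.linkerd.io/proxy-memory-request: 64Mi
      config.linkerd.io/proxy-memory-limit: 128Mi
```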