[bug] Unknown current metrics but HPA is actually working #270
Comments
The deployment:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"tea","namespace":"gaoce"},"spec":{"replicas":3,"selector":{"matchLabels":{"app":"tea"}},"template":{"metadata":{"labels":{"app":"tea"}},"spec":{"containers":[{"image":"nginxdemos/hello:plain-text","name":"tea","ports":[{"containerPort":80}]}]}}}}
  creationTimestamp: 2019-05-09T12:24:32Z
  generation: 11
  labels:
    app: tea
  name: tea
  namespace: gaoce
  resourceVersion: "4796775"
  selfLink: /apis/extensions/v1beta1/namespaces/gaoce/deployments/tea
  uid: 662ac519-7255-11e9-9c5d-525400c68fb9
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: tea
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: tea
    spec:
      containers:
      - image: nginxdemos/hello:plain-text
        imagePullPolicy: IfNotPresent
        name: tea
        ports:
        - containerPort: 80
          protocol: TCP
        resources:
          limits:
            cpu: 1500m
            memory: 50Mi
          requests:
            cpu: "1"
            memory: 40Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2019-05-09T13:11:13Z
    lastUpdateTime: 2019-05-09T13:11:13Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 11
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
```

The HPA:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: tea
  namespace: gaoce
spec:
  maxReplicas: 2
  minReplicas: 1
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: tea
  metrics:
  - type: Resource
    resource:
      name: cpu
      targetAverageValue: 1005m
  - type: Resource
    resource:
      name: memory
      targetAverageValue: 300Mi
```

The log of the metrics server:

The logs of the controller-manager:
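A quick way to see the symptom this issue is about is the TARGETS column of `kubectl get hpa`, which prints current/target per metric and shows `<unknown>` on the current side when status.currentMetrics is empty. A minimal sketch, using the tea HPA and gaoce namespace from the manifests above:

```sh
# TARGETS prints "<current>/<target>" for each metric; "<unknown>"
# on the current side is the symptom reported in this issue.
kubectl get hpa tea -n gaoce
```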
Same issue here on Kubernetes 1.13.5 (DigitalOcean). It is still working but is showing

and
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten
Same issue here on Kubernetes 1.11.3
HPA:
Hey, can you verify that Metrics Server is working by using `kubectl top pod` and looking into the metrics-server logs?
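For reference, a sketch of the two checks being asked for here; the metrics-server deployment name and the kube-system namespace are the usual defaults and may differ per install:

```sh
# Per-pod usage as served by the Metrics API.
kubectl top pod -n gaoce

# Metrics Server's own logs (assumes the default install location).
kubectl -n kube-system logs deploy/metrics-server
```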
@serathius You can see the log here: #270 (comment)
Sorry for not noticing it. I don't see any problems in the metrics-server logs. Could you verify that the Metrics API returns correct results for the pods under the HPA?
If the metrics are correct, the problem is not with Metrics Server but with the HPA. Please create an issue in https://github.com/kubernetes/kubernetes
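A sketch of that verification, comparing what the Metrics API reports against what the HPA records; jq is an optional extra dependency used only for readability:

```sh
# Usage for the pods under the HPA, straight from the Metrics API.
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/gaoce/pods" | jq .

# The HPA's own view of the same metrics, plus its scaling events.
kubectl describe hpa tea -n gaoce
```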
Sure. But it cannot always be reproduced. I will take another look when I run into it again.
I am running an autoscaling/v2beta1 HPA on Kubernetes 1.12. The status.CurrentMetrics is empty, but the HPA seems to scale correctly according to memory usage.
I ran a deployment with one pod which uses 2Mi of memory. Then I set:

Then it scaled up to 2. After a while I set it to

Then it scaled down to 1.
I am wondering why the HPA works while its status is empty. Is this caused by the metrics server or by the HPA controller?
I'd appreciate it if you could help me.
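For illustration, a minimal way to reproduce the "works but status is empty" observation directly; the HPA name tea and namespace gaoce are borrowed from the manifests earlier in the thread:

```sh
# Prints the HPA's recorded current metrics; empty output corresponds
# to the empty status.CurrentMetrics described above.
kubectl get hpa tea -n gaoce -o jsonpath='{.status.currentMetrics}'
```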