Cluster Autoscaler Monitoring Non-Tagged Nodes and Generating Errors - Failed to check cloud provider has instance for ip-*: node is not present in aws: could not find instance
#7839
The Cluster Autoscaler is configured with AWS auto-discovery based on specific tags. However, it is also monitoring nodes that do not have the required discovery tags. These nodes are self-managed and should be ignored by the autoscaler. As a result, we are receiving numerous errors in the logs, which are triggering false alerts.
W0214 14:09:15.647336 1 clusterstate.go:1084] Failed to check cloud provider has instance for ip-10-215-4-59.eu-west-2.compute.internal: node is not present in aws: could not find instance {aws:///eu-west-2a/i-068492674ff8b32a6 i-068492674ff8b32a6}
W0214 14:09:15.647340 1 clusterstate.go:1084] Failed to check cloud provider has instance for ip-10-215-9-69.eu-west-2.compute.internal: node is not present in aws: could not find instance {aws:///eu-west-2b/i-0dfe837fcb45cc00d i-0dfe837fcb45cc00d}
W0214 14:09:15.647347 1 clusterstate.go:1084] Failed to check cloud provider has instance for ip-10-215-12-15.eu-west-2.compute.internal: node is not present in aws: could not find instance {aws:///eu-west-2c/i-071de7e9bd5a797f2 i-071de7e9bd5a797f2}
W0214 14:09:15.647352 1 clusterstate.go:1084] Failed to check cloud provider has instance for ip-10-215-13-23.eu-west-2.compute.internal: node is not present in aws: could not find instance {aws:///eu-west-2c/i-0468b443f93d9182d i-0468b443f93d9182d}
W0214 14:09:15.647361 1 clusterstate.go:1084] Failed to check cloud provider has instance for ip-10-215-13-13.eu-west-2.compute.internal: node is not present in aws: could not find instance {aws:///eu-west-2c/i-03bc772bb2f78c1bf i-03bc772bb2f78c1bf}
W0214 14:09:15.647366 1 clusterstate.go:1084] Failed to check cloud provider has instance for ip-10-215-6-185.eu-west-2.compute.internal: node is not present in aws: could not find instance {aws:///eu-west-2a/i-0ffdfcd664721fb68 i-0ffdfcd664721fb68}
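For reference, the auto-discovery part of the Helm values typically takes this shape (a hypothetical sketch, not the actual values file from this cluster; `my-cluster` is a placeholder, and the tag keys shown are the standard cluster-autoscaler auto-discovery tags):

```yaml
# Sketch of a typical cluster-autoscaler Helm values fragment (assumed, not
# the reporter's actual file). With autoDiscovery enabled, the autoscaler
# discovers ASGs tagged with:
#   k8s.io/cluster-autoscaler/enabled
#   k8s.io/cluster-autoscaler/<clusterName>
autoDiscovery:
  clusterName: my-cluster   # placeholder cluster name
awsRegion: eu-west-2
```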
These instances are self-managed and are not tagged with the auto-discovery tags.
The Cluster Autoscaler works correctly with the Auto Scaling group that is tagged with the auto-discovery tags.
Expected Behavior:
The Cluster Autoscaler should monitor only the nodes with the appropriate AWS auto-discovery tags.
Nodes without the auto-discovery tags should be completely ignored.
Actual Behavior:
The autoscaler monitors self-managed nodes without the auto-discovery tags.
It generates repetitive errors, impacting our alerting and log clarity.
Steps to Reproduce:
Deploy the Cluster Autoscaler with the above configuration.
Add unmanaged nodes without the discovery tags.
Observe the logs.
Suggested Solution:
Ensure the Cluster Autoscaler only considers nodes with the defined discovery tags.
Provide a configuration option to explicitly ignore nodes that lack the tags.
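The suggested behavior can be sketched as follows (a minimal, hypothetical illustration of the desired filtering, not the autoscaler's actual code; the tag keys are the standard auto-discovery tags, and the instance records are made-up stand-ins for the AWS API response):

```python
# Hypothetical sketch: before health-checking a node against the cloud
# provider, skip any instance that lacks the auto-discovery tags.

DISCOVERY_TAGS = {
    "k8s.io/cluster-autoscaler/enabled",
    "k8s.io/cluster-autoscaler/my-cluster",  # cluster name is a placeholder
}

def is_autoscaler_managed(instance_tags: dict) -> bool:
    """Return True only if the instance carries every discovery tag."""
    return DISCOVERY_TAGS.issubset(instance_tags)

def nodes_to_monitor(instances: list[dict]) -> list[dict]:
    """Keep only instances the autoscaler should health-check."""
    return [i for i in instances if is_autoscaler_managed(i["tags"])]

instances = [
    {"id": "i-managed", "tags": {
        "k8s.io/cluster-autoscaler/enabled": "true",
        "k8s.io/cluster-autoscaler/my-cluster": "owned"}},
    {"id": "i-self-managed", "tags": {"Name": "self-managed-node"}},
]

print([i["id"] for i in nodes_to_monitor(instances)])  # ['i-managed']
```

With a filter like this in place, the self-managed node never reaches the "check cloud provider has instance" step, so the warnings above would not be emitted for it.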
Additional Context:
AWS region: eu-west-2
Self-managed nodes have no kubernetes.io/cluster/<cluster-name> tags.
Sensitive Data:
AWS account IDs and instance IDs have been redacted.
Thank you for your support in resolving this issue.
Environment:
Helm Chart Version: 9.45.0
Cloud Provider: AWS
Cluster Autoscaler Image: cluster-autoscaler:v1.32.0