"Data streams are reconnecting" errors when running hubble-ui with more than one replica #833
Comments
We are seeing the same issue in our environment.
We've been having the same issue, and I'm pretty sure it started when we upgraded to Cilium 1.14.7 and hubble-ui 0.13. I just tried scaling hubble-ui down from 2 replicas to 1, and that seems to fix it for us too. Currently running Cilium 1.14.9 and hubble-ui v0.13.0 on Kubernetes 1.29.3.
We have the same issue. Scaling the replicas down from 2 to 1 solves it. Kubernetes 1.25.
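For anyone who wants to try the same workaround, here is a minimal sketch. It assumes hubble-ui runs as a Deployment named `hubble-ui` in the `kube-system` namespace with the `k8s-app=hubble-ui` pod label (the defaults for the Cilium Helm chart); adjust the names for your install.

```sh
# Check the current replica count of the hubble-ui Deployment
kubectl -n kube-system get deployment hubble-ui

# Scale down to a single replica (the workaround reported above)
kubectl -n kube-system scale deployment hubble-ui --replicas=1

# Verify that only one pod is left serving the UI
# (label assumed from the default chart)
kubectl -n kube-system get pods -l k8s-app=hubble-ui
```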
We are running hubble-ui with a single replica and are facing the same issue.
Cilium Version: 1.15.3
K8S Version: 1.29
I recently made an interesting discovery while investigating recurring "Data streams are reconnecting..." errors in hubble-ui. To identify the underlying issue, I reduced the hubble-ui replica count from 2 to 1, and the errors stopped. After reverting the replica count to 2, the errors reappeared.
Further analysis of the hubble-ui logs while running with 2 replicas showed session activity from both Pods, indicating that both replicas were concurrently serving client requests. My guess is that this concurrent processing causes a conflict in how the service-map data is managed.
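If that guess is right, one untested idea would be to pin each client to a single replica via session affinity on the hubble-ui Service. This is purely a sketch of the hypothesis, not a confirmed fix; `sessionAffinity` is a standard Kubernetes Service field, but the Service name and namespace here are assumptions from the default chart.

```sh
# Hypothetical mitigation: route each client IP to the same hubble-ui pod,
# so a client's streams are not split across replicas. Untested against this bug.
kubectl -n kube-system patch service hubble-ui \
  -p '{"spec":{"sessionAffinity":"ClientIP"}}'
```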
While reducing the replica count to 1 effectively resolves the error, this workaround compromises high availability (HA), making it a suboptimal solution.
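For those managing Cilium through Helm, the scale-down can at least be made persistent across upgrades. A sketch, assuming the chart exposes a `hubble.ui.replicas` value (verify against your chart version's values.yaml):

```sh
# Pin hubble-ui to one replica in the Helm release so a future
# `helm upgrade` does not restore the second replica.
# `hubble.ui.replicas` is an assumption; check your chart's values.
helm upgrade cilium cilium/cilium -n kube-system \
  --reuse-values \
  --set hubble.ui.replicas=1
```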