Warning seen using --no-lb #657
Hi @ajrpayne, thanks for opening this issue!
"I was just playing with the options."
When I run it without --no-lb, I get this: INFO[0013] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap
So it's surely not "usual" behavior. Does it happen every single time? Again (I'm currently evaluating this): why do you want to disable the loadbalancer?
It happens every time I use the --no-lb arg.
The cluster comes up, and as expected there is no loadbalancer when the option is used.
sudo k3d cluster create
INFO[0000] Prep: Network
INFO[0000] Re-using existing network 'k3d-k3s-default' (7acb398e53aee37d45979a9eaee832edd6a3c7a3099f7de42fcf864268c33c26)
INFO[0000] Created volume 'k3d-k3s-default-images'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0001] Starting cluster 'k3s-default'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-k3s-default-server-0'
INFO[0007] Starting agents...
INFO[0007] Starting helpers...
INFO[0007] Starting Node 'k3d-k3s-default-serverlb'
INFO[0008] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0012] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap
INFO[0012] Cluster 'k3s-default' created successfully!
INFO[0012] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0012] You can now use it like this:
kubectl config use-context k3d-k3s-default
kubectl cluster-info
sudo docker exec -it k3d-k3s-default-server-0 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.208.2 k3d-k3s-default-server-0
192.168.208.1 host.k3d.internal
sudo kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:33181
CoreDNS is running at https://0.0.0.0:33181/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:33181/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
sudo docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
256e85a7d9a0 rancher/k3d-proxy:v4.4.6 "/bin/sh -c nginx-pr…" About a minute ago Up About a minute 80/tcp, 0.0.0.0:33181->6443/tcp k3d-k3s-default-serverlb
2d06719e523d rancher/k3s:v1.21.1-k3s1 "/bin/entrypoint.sh …" About a minute ago Up About a minute k3d-k3s-default-server-0

vs.

sudo k3d cluster create --no-lb
INFO[0000] Prep: Network
INFO[0000] Re-using existing network 'k3d-k3s-default' (7acb398e53aee37d45979a9eaee832edd6a3c7a3099f7de42fcf864268c33c26)
INFO[0000] Created volume 'k3d-k3s-default-images'
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Starting cluster 'k3s-default'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-k3s-default-server-0'
INFO[0006] Starting agents...
INFO[0006] Starting helpers...
INFO[0006] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
WARN[0009] Failed to patch CoreDNS ConfigMap to include entry '192.168.208.1 host.k3d.internal': Exec process in node 'k3d-k3s-default-server-0' failed with exit code '1'
INFO[0009] Successfully added host record to /etc/hosts in 1/1 nodes
INFO[0009] Cluster 'k3s-default' created successfully!
INFO[0009] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0010] You can now use it like this:
kubectl config use-context k3d-k3s-default
kubectl cluster-info
sudo docker exec -it k3d-k3s-default-server-0 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.208.2 k3d-k3s-default-server-0
192.168.208.1 host.k3d.internal
sudo kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:46715
CoreDNS is running at https://0.0.0.0:46715/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:46715/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
sudo docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
af9698340d26 rancher/k3s:v1.21.1-k3s1 "/bin/entrypoint.sh …" About a minute ago Up About a minute 0.0.0.0:46715->6443/tcp k3d-k3s-default-server-0
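In case it helps debugging, here is a rough sketch of how to inspect the record by hand. It assumes k3s keeps the extra host entries under the NodeHosts key of the coredns ConfigMap in kube-system, and it reuses the IPs from the logs above, so adjust as needed:
# Show the host entries CoreDNS currently knows about
sudo kubectl -n kube-system get configmap coredns -o jsonpath='{.data.NodeHosts}'
# If host.k3d.internal is missing, it can be merged in manually
sudo kubectl -n kube-system patch configmap coredns --type merge \
  -p '{"data":{"NodeHosts":"192.168.208.2 k3d-k3s-default-server-0\n192.168.208.1 host.k3d.internal\n"}}'
That would only be a manual workaround for the missing entry, not a fix for the failing exec in the server node.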
"I was just playing with the options."
This should be unrelated and doesn't really make sense to me even after looking into the code 🤔
... weird things happening here 🤔
Ok, perfect. Because I'm planning to make it the default for quite a few new features soon 👍 So a fix for this (or actually a rewrite of this functionality) is landing in v5.0.0 soon-ish.
When I use "k3d cluster create --no-lb test", I get the CoreDNS ConfigMap patch warning shown in the logs above.
Is there a reason this fails when the loadbalancer is disabled?