Warning seen using --no-lb #657

Closed
ajrpayne opened this issue Jun 28, 2021 · 5 comments
Assignees: iwilltry42
Labels: question (Further information is requested), DONE (Issue solved, but not closed yet, due to pending release)
Milestone: v5.0.0

Comments

ajrpayne commented Jun 28, 2021

k3d --version
k3d version v4.4.6
k3s version v1.21.1-k3s1 (default)
uname -a
Linux 5.12.9-1-MANJARO SMP PREEMPT Thu Jun 3 14:56:42 UTC 2021 x86_64 GNU/Linux

When I use "k3d cluster create --no-lb test", I get this warning:

WARN[0010] Failed to patch CoreDNS ConfigMap to include entry '172.24.0.1 host.k3d.internal': Exec process in node 'k3d-test-server-0' failed with exit code '1'

Is there a reason this fails when not using load balancing?
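
One way to check whether the entry actually made it into CoreDNS (assuming kubectl already points at this cluster, and that k3s keeps these host records under the NodeHosts key of its coredns ConfigMap):

# manual check only, not part of k3d: show the host records CoreDNS serves
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.NodeHosts}'
# a line like "172.24.0.1 host.k3d.internal" should show up here if the patch worked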

ajrpayne added the question (Further information is requested) label on Jun 28, 2021
iwilltry42 added this to the Backlog milestone on Jun 29, 2021
iwilltry42 (Member) commented

Hi @ajrpayne , thanks for opening this issue!
Soo.. for the record, this is not failing if you run it without --no-lb? 🤔
Implementation-wise this should be completely unrelated.
If I may ask: what's your reason for disabling the loadbalancer?

ajrpayne (Author) commented

I was just playing with the options. When I run it without --no-lb, I get this:

INFO[0013] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap

iwilltry42 (Member) commented

So that's certainly not "usual" behavior. Does it happen every single time?
If that exec process fails, it usually means that the cluster is not ready yet or the node is failing.
Can you confirm that the cluster is up and running after you saw that warning?
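A quick way to check would be something like this (assuming your kubeconfig was merged as usual):

kubectl get nodes                   # the server node should report Ready
kubectl -n kube-system get pods     # coredns, metrics-server, etc. should be Running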

Again (I'm currently evaluating this): Why do you want to disable the loadbalancer?

ajrpayne (Author) commented Jun 30, 2021

So that's certainly not "usual" behavior. Does it happen every single time?

It happens every time I use the --no-lb arg.

Can you confirm that the cluster is up and running after you saw that warning?

The cluster comes up. There is no lb as expected when the option is used.

sudo k3d cluster create
INFO[0000] Prep: Network                                
INFO[0000] Re-using existing network 'k3d-k3s-default' (7acb398e53aee37d45979a9eaee832edd6a3c7a3099f7de42fcf864268c33c26) 
INFO[0000] Created volume 'k3d-k3s-default-images'      
INFO[0001] Creating node 'k3d-k3s-default-server-0'     
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb' 
INFO[0001] Starting cluster 'k3s-default'               
INFO[0001] Starting servers...                          
INFO[0001] Starting Node 'k3d-k3s-default-server-0'     
INFO[0007] Starting agents...                           
INFO[0007] Starting helpers...                          
INFO[0007] Starting Node 'k3d-k3s-default-serverlb'     
INFO[0008] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access 
INFO[0012] Successfully added host record to /etc/hosts in 2/2 nodes and to the CoreDNS ConfigMap 
INFO[0012] Cluster 'k3s-default' created successfully!  
INFO[0012] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false 
INFO[0012] You can now use it like this:                
kubectl config use-context k3d-k3s-default
kubectl cluster-info

sudo docker exec -it k3d-k3s-default-server-0 cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
192.168.208.2	k3d-k3s-default-server-0
192.168.208.1 host.k3d.internal

sudo kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:33181
CoreDNS is running at https://0.0.0.0:33181/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:33181/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

sudo docker container ls -a
CONTAINER ID   IMAGE                      COMMAND                  CREATED              STATUS              PORTS                             NAMES
256e85a7d9a0   rancher/k3d-proxy:v4.4.6   "/bin/sh -c nginx-pr…"   About a minute ago   Up About a minute   80/tcp, 0.0.0.0:33181->6443/tcp   k3d-k3s-default-serverlb
2d06719e523d   rancher/k3s:v1.21.1-k3s1   "/bin/entrypoint.sh …"   About a minute ago   Up About a minute                                     k3d-k3s-default-server-0

vs

sudo k3d cluster create --no-lb
INFO[0000] Prep: Network                                
INFO[0000] Re-using existing network 'k3d-k3s-default' (7acb398e53aee37d45979a9eaee832edd6a3c7a3099f7de42fcf864268c33c26) 
INFO[0000] Created volume 'k3d-k3s-default-images'      
INFO[0001] Creating node 'k3d-k3s-default-server-0'     
INFO[0001] Starting cluster 'k3s-default'               
INFO[0001] Starting servers...                          
INFO[0001] Starting Node 'k3d-k3s-default-server-0'     
INFO[0006] Starting agents...                           
INFO[0006] Starting helpers...                          
INFO[0006] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access 
WARN[0009] Failed to patch CoreDNS ConfigMap to include entry '192.168.208.1 host.k3d.internal': Exec process in node 'k3d-k3s-default-server-0' failed with exit code '1' 
INFO[0009] Successfully added host record to /etc/hosts in 1/1 nodes 
INFO[0009] Cluster 'k3s-default' created successfully!  
INFO[0009] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false 
INFO[0010] You can now use it like this:                
kubectl config use-context k3d-k3s-default
kubectl cluster-info

sudo docker exec -it k3d-k3s-default-server-0 cat /etc/hosts
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
ff00::0	ip6-mcastprefix
ff02::1	ip6-allnodes
ff02::2	ip6-allrouters
192.168.208.2	k3d-k3s-default-server-0
192.168.208.1 host.k3d.internal

sudo kubectl cluster-info
Kubernetes control plane is running at https://0.0.0.0:46715
CoreDNS is running at https://0.0.0.0:46715/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://0.0.0.0:46715/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

sudo docker container ls -a
CONTAINER ID   IMAGE                      COMMAND                  CREATED              STATUS              PORTS                     NAMES
af9698340d26   rancher/k3s:v1.21.1-k3s1   "/bin/entrypoint.sh …"   About a minute ago   Up About a minute   0.0.0.0:46715->6443/tcp   k3d-k3s-default-server-0

Why do you want to disable the loadbalancer?

I was just playing with the options.
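
If needed, the record can also be added by hand once the cluster is up. This is only a rough sketch (it assumes k3s keeps these entries under the NodeHosts key of the coredns ConfigMap; the IP is the one from the warning above):

# manual workaround, not the mechanism k3d uses internally
kubectl -n kube-system edit configmap coredns
#   append "192.168.208.1 host.k3d.internal" to the NodeHosts block, then save
kubectl -n kube-system rollout restart deployment coredns   # optional, just picks up the change sooner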

iwilltry42 modified the milestones: Backlog → v5.0.0 on Jul 1, 2021
iwilltry42 self-assigned this on Jul 1, 2021
iwilltry42 (Member) commented

It happens every time I use the --no-lb arg.

This should be unrelated and doesn't really make sense to me even after looking into the code 🤔
However, in one of my feature branches for v5, I changed the way this host entry injection works (with retries, etc.), making it far more robust and less error-prone, since people are starting to rely on it.

The cluster comes up. There is no lb as expected when the option is used.

... weird things happening here 🤔

Why do you want to disable the loadbalancer?

I was just playing with the options.

Ok, perfect, because I'm planning to make the loadbalancer the default for quite a few new features soon 👍

So a fix for this (or rather a rewrite of this functionality) is landing in v5.0.0 soon-ish.
I'll mark this as DONE (label only), as it'll be fixed by #656 but will only ship with the next major release.
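
Just to illustrate the idea (a sketch only, not the actual v5 code): instead of giving up after a single failed exec, the injection gets retried while the cluster finishes coming up, roughly like this:

# illustration only, not k3d's implementation
for i in 1 2 3 4 5; do
  if kubectl -n kube-system get configmap coredns >/dev/null 2>&1; then
    # ConfigMap reachable: apply the host.k3d.internal patch here and stop retrying
    break
  fi
  sleep 2   # give the API server / CoreDNS resources time to appear
done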

iwilltry42 added the DONE (Issue solved, but not closed yet, due to pending release) label on Jul 1, 2021