
[BUG] Output error varies for the same command. #519

Closed
bukowa opened this issue Feb 25, 2021 · 2 comments
Labels
bug Something isn't working

bukowa commented Feb 25, 2021

What did you do

I entered the same command twice, one after another.

  • How was the cluster created?

    • k3d cluster create --wait --verbose --no-lb -p "32080:32080@server[*]" -p "32443:32443@server[0,1]" --servers=2
  • What did you do afterwards?

    • k3d cluster create --wait --verbose --no-lb -p "32080:32080@server[*]" -p "32443:32443@server[0,1]" --servers=2

What did you expect to happen

  • I expected to receive the same output both times.

What actually happened

  • I received different outputs.

Screenshots or terminal output

Output 1:

$ k3d cluster create --wait --verbose --no-lb -p "32080:32080@server[*]" -p "32443:32443@server[0,1]" --servers=2
DEBU[0000] Selected runtime is 'docker.Docker'
DEBU[0000] Configuration:
agents: 0
image: docker.io/rancher/k3s:v1.20.2-k3s1
network: ""
options:
  k3d:
    disablehostipinjection: false
    disableimagevolume: false
    disableloadbalancer: true
    disablerollback: false
    timeout: 0s
    wait: true
  k3s:
    extraagentargs: []
    extraserverargs: []
  kubeconfig:
    switchcurrentcontext: true
    updatedefaultkubeconfig: true
  runtime:
    gpurequest: ""
registries:
  config: ""
  create: false
  use: []
servers: 2
token: ""
DEBU[0000] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  labels: []
  ports:
  - 32080:32080@server[*]
  - 32443:32443@server[0
  - 1]
  volumes: []
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind: APIVersion:} Name: Servers:2 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.20.2-k3s1 Network: ClusterToken: Volumes:[] Ports:[] Labels:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:true DisableImageVolume:false NoRollback:false PrepDisableHostIPInjection:false NodeHookActions:[]} K3sOptions:{ExtraServerArgs:[] ExtraAgentArgs:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest:}} Env:[] Registries:{Use:[] Create:false Config:}}
==========================
DEBU[0000] Port Exposure Mapping didn't specify hostPort, choosing one randomly...
DEBU[0000] Got free port for Port Exposure: '62964'
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind: APIVersion:} Name: Servers:2 Agents:0 ExposeAPI:{Host: HostIP:0.0.0.0 HostPort:62964} Image:docker.io/rancher/k3s:v1.20.2-k3s1 Network: ClusterToken: Volumes:[] Ports:[{Port:32080:32080 NodeFilters:[server[*]]} {Port:32443:32443 NodeFilters:[server[0]} {Port:1] NodeFilters:[]}] Labels:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:true DisableImageVolume:false NoRollback:false PrepDisableHostIPInjection:false NodeHookActions:[]} K3sOptions:{ExtraServerArgs:[] ExtraAgentArgs:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest:}} Env:[] Registries:{Use:[] Create:false Config:}}
==========================
DEBU[0000] Disabling the load balancer
FATA[0000] Failed to parse node filters: invalid format or empty subset in 'server[0'

Output 2:

$ k3d cluster create --wait --verbose --no-lb -p "32080:32080@server[*]" -p "32443:32443@server[0,1]" --servers=2
DEBU[0000] Selected runtime is 'docker.Docker'
DEBU[0000] Configuration:
agents: 0
image: docker.io/rancher/k3s:v1.20.2-k3s1
network: ""
options:
  k3d:
    disablehostipinjection: false
    disableimagevolume: false
    disableloadbalancer: true
    disablerollback: false
    timeout: 0s
    wait: true
  k3s:
    extraagentargs: []
    extraserverargs: []
  kubeconfig:
    switchcurrentcontext: true
    updatedefaultkubeconfig: true
  runtime:
    gpurequest: ""
registries:
  config: ""
  create: false
  use: []
servers: 2
token: ""
DEBU[0000] Additional CLI Configuration:
cli:
  api-port: ""
  env: []
  labels: []
  ports:
  - 32080:32080@server[*]
  - 32443:32443@server[0
  - 1]
  volumes: []
DEBU[0000] ========== Simple Config ==========
{TypeMeta:{Kind: APIVersion:} Name: Servers:2 Agents:0 ExposeAPI:{Host: HostIP: HostPort:} Image:docker.io/rancher/k3s:v1.20.2-k3s1 Network: ClusterToken: Volumes:[] Ports:[] Labels:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:true DisableImageVolume:false NoRollback:false PrepDisableHostIPInjection:false NodeHookActions:[]} K3sOptions:{ExtraServerArgs:[] ExtraAgentArgs:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest:}} Env:[] Registries:{Use:[] Create:false Config:}}
==========================
DEBU[0000] Port Exposure Mapping didn't specify hostPort, choosing one randomly...
DEBU[0000] Got free port for Port Exposure: '62962'
DEBU[0000] ========== Merged Simple Config ==========
{TypeMeta:{Kind: APIVersion:} Name: Servers:2 Agents:0 ExposeAPI:{Host: HostIP:0.0.0.0 HostPort:62962} Image:docker.io/rancher/k3s:v1.20.2-k3s1 Network: ClusterToken: Volumes:[] Ports:[{Port:1] NodeFilters:[]} {Port:32080:32080 NodeFilters:[server[*]]} {Port:32443:32443 NodeFilters:[server[0]}] Labels:[] Options:{K3dOptions:{Wait:true Timeout:0s DisableLoadbalancer:true DisableImageVolume:false NoRollback:false PrepDisableHostIPInjection:false NodeHookActions:[]} K3sOptions:{ExtraServerArgs:[] ExtraAgentArgs:[]} KubeconfigOptions:{UpdateDefaultKubeconfig:true SwitchCurrentContext:true} Runtime:{GPURequest:}} Env:[] Registries:{Use:[] Create:false Config:}}
==========================
DEBU[0000] Disabling the load balancer
FATA[0000] Portmapping '1]' lacks a node filter, but there's more than one node


Which OS & Architecture

Windows amd64

Which version of k3d

k3d version v4.2.0
k3s version v1.20.2-k3s1 (default)

Which version of docker

Can provide if relevant.
bukowa added the bug label Feb 25, 2021
iwilltry42 self-assigned this Mar 8, 2021
iwilltry42 (Member) commented:

Hi @bukowa , thanks for opening this issue 👍
This is a known issue right now and the same problem as the one appearing here: #506
It will be fixed as soon as we can update one of our dependencies, as described here: #473

Since we're already following along there, can we close this issue as some sort of "duplicate"?

iwilltry42 added this to the Backlog milestone Mar 8, 2021

bukowa commented Mar 8, 2021

> Hi @bukowa , thanks for opening this issue 👍
> This is a known issue right now and the same problem as the one appearing here: #506
> It will be fixed as soon as we can update one of our dependencies, as described here: #473
>
> Since we're already following along there, can we close this issue as some sort of "duplicate"?

yes
