Tested 2.0.0-rc.3 on Nomad, test failed #330
with 1.6.0:
I am no Nomad expert, so please take the following with a grain of salt; I will continue to investigate this with a coworker. I assume this happens because we changed our image to include two separate binaries, one for the controller and one for the node service. To fix this, you would need to change your manifest to have separate jobs for the node and the controller (see the full manifests in the comments below).
Thanks for the hint! I was able to get it running.

controller job (only 1 alloc needed -> type "service"):

job "hcloud-csi-controller" {
datacenters = ["dc1"]
namespace = "default"
type = "service"
group "controller" {
task "plugin" {
driver = "docker"
config {
image = "hetznercloud/hcloud-csi-driver:2.0.0"
command = "bin/hcloud-csi-driver-controller"
}
env {
CSI_ENDPOINT = "unix://csi/csi.sock"
        ENABLE_METRICS = "true"
HCLOUD_TOKEN = "..."
}
csi_plugin {
id = "csi.hetzner.cloud"
type = "controller"
mount_dir = "/csi"
}
resources {
cpu = 100
memory = 64
}
}
}
}

node job:

job "hcloud-csi-node" {
datacenters = ["dc1"]
namespace = "default"
type = "system"
group "node" {
task "plugin" {
driver = "docker"
config {
image = "hetznercloud/hcloud-csi-driver:2.0.0"
command = "bin/hcloud-csi-driver-node"
privileged = true
}
env {
CSI_ENDPOINT = "unix://csi/csi.sock"
        ENABLE_METRICS = "true"
}
csi_plugin {
id = "csi.hetzner.cloud"
type = "node"
mount_dir = "/csi"
}
resources {
cpu = 100
memory = 64
}
}
}
}
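Once both jobs are allocated, the plugin health can be checked before registering any volumes. A quick sketch using the standard Nomad CLI (the plugin id comes from the csi_plugin blocks above):

# should report healthy/expected counts for both controllers and nodes
nomad plugin status csi.hetzner.cloud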
volume definition:

type = "csi"
id = "volume-mixed"
name = "volume-mixed"
plugin_id = "csi.hetzner.cloud"
capability {
access_mode = "single-node-writer"
attachment_mode = "file-system"
}
mount_options {
fs_type = "ext4"
mount_flags = ["discard", "defaults"]
}

create volume:
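A minimal sketch of the create step, assuming the volume definition above is saved as volume.hcl (the file name is an assumption):

# asks the plugin to provision and register the volume
nomad volume create volume.hcl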
Verify with hcloud cli:
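For example, with the standard hcloud CLI (exact output omitted):

# the new volume should show up under the name from the volume definition
hcloud volume list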
General note: We currently do not explicitly support Nomad, and we do not test against Nomad. I will open an issue for this support, so we can make sure to test for breakages and document upgrade procedures in the future.

The deployment should work with v2.0.0 with the following manifests:

job "hcloud-csi-node" {
datacenters = ["dc1"]
namespace = "default"
type = "system"
group "hcloud-csi-node" {
task "plugin" {
driver = "docker"
config {
image = "hetznercloud/hcloud-csi-driver:2.0.0"
privileged = true
command = "/bin/hcloud-csi-driver-node"
}
env {
CSI_ENDPOINT = "unix://csi/csi.sock"
}
csi_plugin {
id = "csi.hetzner.cloud"
type = "node"
mount_dir = "/csi"
}
resources {
cpu = 100
memory = 64
}
}
}
}
job "hcloud-csi-controller" {
datacenters = ["dc1"]
namespace = "default"
group "hcloud-csi-controller" {
task "plugin" {
driver = "docker"
config {
image = "hetznercloud/hcloud-csi-driver:2.0.0"
privileged = true
command = "/bin/hcloud-csi-driver-controller"
}
env {
CSI_ENDPOINT = "unix://csi/csi.sock"
HCLOUD_TOKEN = "..."
}
csi_plugin {
id = "csi.hetzner.cloud"
type = "controller"
mount_dir = "/csi"
}
resources {
cpu = 100
memory = 64
}
}
}
}

I am going to close the issue. If you still have problems with the deployment, please feel free to reopen or create a new issue.
Perfect! If you continue to create RCs, I will test them and give feedback.
Actually, v2.0.0 is already released!
Yes, I know. ;) I meant for future versions.
Just for the record: I had an issue with CSI 2.0.0 today (after upgrading Nomad to 1.4.3). I was not able to deploy a job with a CSI volume:

I tried a few things:

But what helped was downgrading to hcloud csi 1.6.0...
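For reference, a minimal sketch of such a downgrade, assuming the pre-2.0.0 single-job manifest and a hypothetical file name:

# pin the old image in the manifest: image = "hetznercloud/hcloud-csi-driver:1.6.0", then:
nomad job run hcloud-csi.nomad.hcl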
Hi, I just wanted to give some feedback on 2.0.0-rc.3 on Nomad. We have a working CSI plugin running version 1.6.0.

Nomad job:

After switching to 2.0.0-rc.3, the job fails with

Switching back to 1.6.0 recovered the job.

Let me know if I can help find any issues related to this.