
Failed to start kubelet container on worker node #316

Closed
sabiodelhielo opened this issue Feb 5, 2018 · 7 comments
@sabiodelhielo

RKE version:
v0.1.1-rc1

Docker version: (docker version, docker info preferred)
1.12.6

Operating system and kernel: (cat /etc/os-release, uname -r preferred)
ubuntu-16.04

Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)
AWS

cluster.yml file:

ignore_docker_version: true
ssh_key_path: /Users/tcordingly/Development/validation/.ssh/jenkins-rke-validation.pem
network:
  plugin: canal
nodes:
  - address: 18.221.199.152
    user: ubuntu
    role: [controlplane]
  - address: 18.219.87.147
    user: ubuntu
    role: [etcd]
  - address: 18.218.234.212
    user: ubuntu
    role: [worker]
services:
  etcd:
    image: quay.io/coreos/etcd:latest
  kube-api:
    image: rancher/k8s:v1.8.7-rancher1-1
  kube-controller:
    image: rancher/k8s:v1.8.7-rancher1-1
  scheduler:
    image: rancher/k8s:v1.8.7-rancher1-1
  kubelet:
    image: rancher/k8s:v1.8.7-rancher1-1
  kubeproxy:
    image: rancher/k8s:v1.8.7-rancher1-1

Results:
time="2018-02-05T15:01:27-07:00" level=info msg="Building Kubernetes cluster"
time="2018-02-05T15:01:27-07:00" level=info msg="[dialer] Setup tunnel for host [18.219.87.147]"
time="2018-02-05T15:01:29-07:00" level=info msg="[dialer] Setup tunnel for host [18.221.199.152]"
time="2018-02-05T15:01:30-07:00" level=info msg="[dialer] Setup tunnel for host [18.218.234.212]"
time="2018-02-05T15:01:32-07:00" level=info msg="[network] Deploying port listener containers"
time="2018-02-05T15:01:35-07:00" level=info msg="[network] Successfully pulled image [alpine:latest] on host [18.219.87.147]"
time="2018-02-05T15:01:38-07:00" level=info msg="[network] Successfully started [rke-etcd-port-listener] container on host [18.219.87.147]"
time="2018-02-05T15:01:40-07:00" level=info msg="[network] Successfully pulled image [alpine:latest] on host [18.221.199.152]"
time="2018-02-05T15:01:42-07:00" level=info msg="[network] Successfully started [rke-cp-port-listener] container on host [18.221.199.152]"
time="2018-02-05T15:01:44-07:00" level=info msg="[network] Successfully pulled image [alpine:latest] on host [18.218.234.212]"
time="2018-02-05T15:01:46-07:00" level=info msg="[network] Successfully started [rke-worker-port-listener] container on host [18.218.234.212]"
time="2018-02-05T15:01:46-07:00" level=info msg="[network] Port listener containers deployed successfully"
time="2018-02-05T15:01:46-07:00" level=info msg="[network] Running all -> etcd port checks"
time="2018-02-05T15:01:47-07:00" level=info msg="[network] Successfully started [rke-port-checker] container on host [18.221.199.152]"
time="2018-02-05T15:01:49-07:00" level=info msg="[network] Successfully started [rke-port-checker] container on host [18.218.234.212]"
time="2018-02-05T15:01:50-07:00" level=info msg="[network] Running control plane -> etcd port checks"
time="2018-02-05T15:01:51-07:00" level=info msg="[network] Successfully started [rke-port-checker] container on host [18.221.199.152]"
time="2018-02-05T15:01:51-07:00" level=info msg="[network] Running workers -> control plane port checks"
time="2018-02-05T15:01:52-07:00" level=info msg="[network] Successfully started [rke-port-checker] container on host [18.218.234.212]"
time="2018-02-05T15:01:52-07:00" level=info msg="[network] Checking KubeAPI port Control Plane hosts"
time="2018-02-05T15:01:52-07:00" level=info msg="[network] Removing port listener containers"
time="2018-02-05T15:01:53-07:00" level=info msg="[remove/rke-etcd-port-listener] Successfully removed container on host [18.219.87.147]"
time="2018-02-05T15:01:54-07:00" level=info msg="[remove/rke-cp-port-listener] Successfully removed container on host [18.221.199.152]"
time="2018-02-05T15:01:55-07:00" level=info msg="[remove/rke-worker-port-listener] Successfully removed container on host [18.218.234.212]"
time="2018-02-05T15:01:55-07:00" level=info msg="[network] Port listener containers removed successfully"
time="2018-02-05T15:01:55-07:00" level=info msg="[certificates] Attempting to recover certificates from backup on host [18.219.87.147]"
time="2018-02-05T15:01:56-07:00" level=info msg="[certificates] Successfully started [cert-fetcher] container on host [18.219.87.147]"
time="2018-02-05T15:01:56-07:00" level=info msg="[certificates] No Certificate backup found on host [18.219.87.147]"
time="2018-02-05T15:01:56-07:00" level=info msg="[certificates] Generating kubernetes certificates"
time="2018-02-05T15:01:56-07:00" level=info msg="[certificates] Generating CA kubernetes certificates"
time="2018-02-05T15:01:57-07:00" level=info msg="[certificates] Generating Kubernetes API server certificates"
time="2018-02-05T15:01:57-07:00" level=info msg="[certificates] Generating Kube Controller certificates"
time="2018-02-05T15:01:57-07:00" level=info msg="[certificates] Generating Kube Scheduler certificates"
time="2018-02-05T15:01:58-07:00" level=info msg="[certificates] Generating Kube Proxy certificates"
time="2018-02-05T15:01:58-07:00" level=info msg="[certificates] Generating Node certificate"
time="2018-02-05T15:01:59-07:00" level=info msg="[certificates] Generating admin certificates and kubeconfig"
time="2018-02-05T15:01:59-07:00" level=info msg="[certificates] Generating etcd-18.219.87.147 certificate and key"
time="2018-02-05T15:01:59-07:00" level=info msg="[certificates] Temporarily saving certs to etcd host [18.219.87.147]"
time="2018-02-05T15:02:04-07:00" level=info msg="[certificates] Successfully pulled image [rancher/rke-cert-deployer:v0.1.1] on host [18.219.87.147]"
time="2018-02-05T15:02:10-07:00" level=info msg="[certificates] Saved certs to etcd host [18.219.87.147]"
time="2018-02-05T15:02:10-07:00" level=info msg="[reconcile] Reconciling cluster state"
time="2018-02-05T15:02:10-07:00" level=info msg="[reconcile] This is newly generated cluster"
time="2018-02-05T15:02:10-07:00" level=info msg="[certificates] Deploying kubernetes certificates to Cluster nodes"
time="2018-02-05T15:02:15-07:00" level=info msg="[certificates] Successfully pulled image [rancher/rke-cert-deployer:v0.1.1] on host [18.221.199.152]"
time="2018-02-05T15:02:15-07:00" level=info msg="[certificates] Successfully pulled image [rancher/rke-cert-deployer:v0.1.1] on host [18.218.234.212]"
time="2018-02-05T15:02:21-07:00" level=info msg="Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]"
time="2018-02-05T15:02:21-07:00" level=info msg="[certificates] Successfully deployed kubernetes certificates to Cluster nodes"
time="2018-02-05T15:02:21-07:00" level=info msg="Pre-pulling kubernetes images"
time="2018-02-05T15:06:37-07:00" level=info msg="[pre-deploy] Successfully pulled image [rancher/k8s:v1.8.5-rancher4] on host [18.221.199.152]"
time="2018-02-05T15:06:45-07:00" level=info msg="[pre-deploy] Successfully pulled image [rancher/k8s:v1.8.5-rancher4] on host [18.219.87.147]"
time="2018-02-05T15:06:50-07:00" level=info msg="[pre-deploy] Successfully pulled image [rancher/k8s:v1.8.5-rancher4] on host [18.218.234.212]"
time="2018-02-05T15:06:50-07:00" level=info msg="Kubernetes images pulled successfully"
time="2018-02-05T15:06:50-07:00" level=info msg="[etcd] Building up Etcd Plane.."
time="2018-02-05T15:06:58-07:00" level=info msg="[etcd] Successfully pulled image [quay.io/coreos/etcd:latest] on host [18.219.87.147]"
time="2018-02-05T15:06:59-07:00" level=info msg="[etcd] Successfully started [etcd] container on host [18.219.87.147]"
time="2018-02-05T15:06:59-07:00" level=info msg="[etcd] Successfully started Etcd Plane.."
time="2018-02-05T15:06:59-07:00" level=info msg="[controlplane] Building up Controller Plane.."
time="2018-02-05T15:07:00-07:00" level=info msg="[sidekick] Successfully pulled image [rancher/rke-service-sidekick:v0.1.0] on host [18.221.199.152]"
time="2018-02-05T15:09:39-07:00" level=info msg="[controlplane] Successfully pulled image [rancher/k8s:v1.8.7-rancher1-1] on host [18.221.199.152]"
time="2018-02-05T15:09:39-07:00" level=info msg="[controlplane] Successfully started [kube-api] container on host [18.221.199.152]"
time="2018-02-05T15:09:39-07:00" level=info msg="[healthcheck] Start Healthcheck on service [kube-api] on host [18.221.199.152]"
time="2018-02-05T15:09:58-07:00" level=info msg="[healthcheck] service [kube-api] on host [18.221.199.152] is healthy"
time="2018-02-05T15:09:59-07:00" level=info msg="[controlplane] Successfully started [kube-controller] container on host [18.221.199.152]"
time="2018-02-05T15:09:59-07:00" level=info msg="[healthcheck] Start Healthcheck on service [kube-controller] on host [18.221.199.152]"
time="2018-02-05T15:10:00-07:00" level=info msg="[healthcheck] service [kube-controller] on host [18.221.199.152] is healthy"
time="2018-02-05T15:10:00-07:00" level=info msg="[controlplane] Successfully started [scheduler] container on host [18.221.199.152]"
time="2018-02-05T15:10:00-07:00" level=info msg="[healthcheck] Start Healthcheck on service [scheduler] on host [18.221.199.152]"
time="2018-02-05T15:10:01-07:00" level=info msg="[healthcheck] service [scheduler] on host [18.221.199.152] is healthy"
time="2018-02-05T15:10:01-07:00" level=info msg="[controlplane] Successfully started Controller Plane.."
time="2018-02-05T15:10:01-07:00" level=info msg="[authz] Creating rke-job-deployer ServiceAccount"
time="2018-02-05T15:10:02-07:00" level=info msg="[authz] rke-job-deployer ServiceAccount created successfully"
time="2018-02-05T15:10:02-07:00" level=info msg="[authz] Creating system:node ClusterRoleBinding"
time="2018-02-05T15:10:02-07:00" level=info msg="[authz] system:node ClusterRoleBinding created successfully"
time="2018-02-05T15:10:02-07:00" level=info msg="[certificates] Save kubernetes certificates as secrets"
time="2018-02-05T15:10:02-07:00" level=info msg="[certificates] Successfully saved certificates as kubernetes secret [k8s-certs]"
time="2018-02-05T15:10:02-07:00" level=info msg="[state] Saving cluster state to Kubernetes"
time="2018-02-05T15:10:02-07:00" level=info msg="[state] Successfully Saved cluster state to Kubernetes ConfigMap: cluster-state"
time="2018-02-05T15:10:02-07:00" level=info msg="[worker] Building up Worker Plane.."
time="2018-02-05T15:10:07-07:00" level=info msg="[worker] Successfully pulled image [rancher/rke-nginx-proxy:v0.1.1] on host [18.219.87.147]"
time="2018-02-05T15:10:07-07:00" level=info msg="[worker] Successfully started [nginx-proxy] container on host [18.219.87.147]"
time="2018-02-05T15:10:08-07:00" level=info msg="[sidekick] Successfully pulled image [rancher/rke-service-sidekick:v0.1.0] on host [18.219.87.147]"
time="2018-02-05T15:12:02-07:00" level=info msg="[worker] Successfully pulled image [rancher/k8s:v1.8.7-rancher1-1] on host [18.219.87.147]"
time="2018-02-05T15:12:09-07:00" level=fatal msg="[workerPlane] Failed to bring up Worker Plane: Failed to start [kubelet] container on host [18.219.87.147]: Error response from daemon: invalid header field value "oci runtime error: container_linux.go:247: starting container process caused \"process_linux.go:359: container init caused \\\"rootfs_linux.go:89: jailing process inside rootfs caused \\\\\\\"pivot_root invalid argument\\\\\\\"\\\"\"\n""

Error output from the stopped kubelet container on this worker host:

 container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:89: jailing process inside rootfs caused \\\"pivot_root invalid argument\\\"\""

Steps to Reproduce:
Set up three nodes (OS: ubuntu-16.04) with Docker 1.12.6.
Run rke up.

Attempted this with two versions of k8s:
rancher/k8s:v1.8.5-rancher4
rancher/k8s:v1.8.7-rancher1-1

@sabiodelhielo changed the title from "Failed to start kublet container on worker node" to "Failed to start kubelet container on worker node" on Feb 5, 2018
@sabiodelhielo (Author) commented Feb 5, 2018

root@ip-172-31-13-204:/home/ubuntu# docker inspect kubelet

[
    {
        "Id": "cefa503670433390dc243d6e0ca62d7ca950242f93c82ba7d032374443426d4a",
        "Created": "2018-02-05T22:12:02.882159852Z",
        "Path": "/opt/rke/entrypoint.sh",
        "Args": [
            "kubelet",
            "--v=2",
            "--address=0.0.0.0",
            "--cluster-domain=cluster.local",
            "--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0",
            "--cgroups-per-qos=True",
            "--enforce-node-allocatable=",
            "--hostname-override=18.219.87.147",
            "--cluster-dns=10.233.0.3",
            "--network-plugin=cni",
            "--cni-conf-dir=/etc/cni/net.d",
            "--cni-bin-dir=/opt/cni/bin",
            "--resolv-conf=/etc/resolv.conf",
            "--allow-privileged=true",
            "--cloud-provider=",
            "--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml",
            "--require-kubeconfig=True"
        ],
        "State": {
            "Status": "created",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 128,
            "Error": "invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:359: container init caused \\\\\\\"rootfs_linux.go:89: jailing process inside rootfs caused \\\\\\\\\\\\\\\"pivot_root invalid argument\\\\\\\\\\\\\\\"\\\\\\\"\\\"\\n\"",
            "StartedAt": "0001-01-01T00:00:00Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:19d22a72b09ff62fd4070004b90effb80c4c9c3fa17eb9c2836242c19b85a53f",
        "ResolvConfPath": "/etc/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/cefa503670433390dc243d6e0ca62d7ca950242f93c82ba7d032374443426d4a/hostname",
        "HostsPath": "/var/lib/docker/containers/cefa503670433390dc243d6e0ca62d7ca950242f93c82ba7d032374443426d4a/hosts",
        "LogPath": "/var/lib/docker/containers/cefa503670433390dc243d6e0ca62d7ca950242f93c82ba7d032374443426d4a/cefa503670433390dc243d6e0ca62d7ca950242f93c82ba7d032374443426d4a-json.log",
        "Name": "/kubelet",
        "RestartCount": 0,
        "Driver": "aufs",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/etc/kubernetes:/etc/kubernetes",
                "/usr/libexec/kubernetes/kubelet-plugins:/usr/libexec/kubernetes/kubelet-plugins",
                "/etc/cni:/etc/cni:ro",
                "/opt/cni:/opt/cni:ro",
                "/etc/resolv.conf:/etc/resolv.conf",
                "/sys:/sys",
                "/var/lib/docker:/var/lib/docker:rw",
                "/var/lib/kubelet:/var/lib/kubelet:shared",
                "/var/run:/var/run:rw",
                "/run:/run",
                "/etc/ceph:/etc/ceph",
                "/dev:/host/dev",
                "/var/log/containers:/var/log/containers",
                "/var/log/pods:/var/log/pods"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "host",
            "PortBindings": null,
            "RestartPolicy": {
                "Name": "always",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": [
                "service-sidekick"
            ],
            "CapAdd": null,
            "CapDrop": null,
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "host",
            "Privileged": true,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "label=disable"
            ],
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": 0,
            "MemorySwappiness": -1,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Name": "aufs",
            "Data": null
        },
        "Mounts": [
            {
                "Source": "/etc/cni",
                "Destination": "/etc/cni",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Source": "/sys",
                "Destination": "/sys",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/docker",
                "Destination": "/var/lib/docker",
                "Mode": "rw",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/etc/kubernetes",
                "Destination": "/etc/kubernetes",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/usr/libexec/kubernetes/kubelet-plugins",
                "Destination": "/usr/libexec/kubernetes/kubelet-plugins",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/etc/resolv.conf",
                "Destination": "/etc/resolv.conf",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/dev",
                "Destination": "/host/dev",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/log/pods",
                "Destination": "/var/log/pods",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/opt/cni",
                "Destination": "/opt/cni",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/run",
                "Destination": "/var/run",
                "Mode": "rw",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/run",
                "Destination": "/run",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/log/containers",
                "Destination": "/var/log/containers",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Name": "a2e6ae7bcbe59e0e1699fb0e9208897213ecda7c7cafb4669548fb941481613d",
                "Source": "/var/lib/docker/volumes/a2e6ae7bcbe59e0e1699fb0e9208897213ecda7c7cafb4669548fb941481613d/_data",
                "Destination": "/opt/rke",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            },
            {
                "Source": "/var/lib/kubelet",
                "Destination": "/var/lib/kubelet",
                "Mode": "shared",
                "RW": true,
                "Propagation": "shared"
            },
            {
                "Source": "/etc/ceph",
                "Destination": "/etc/ceph",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
        "Config": {
            "Hostname": "ip-172-31-13-204",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "SSL_SCRIPT_COMMIT=98660ada3d800f653fc1f105771b5173f9d1a019",
                "CNI=v0.3.0-rancher3"
            ],
            "Cmd": null,
            "Image": "rancher/k8s:v1.8.7-rancher1-1",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": [
                "/opt/rke/entrypoint.sh",
                "kubelet",
                "--v=2",
                "--address=0.0.0.0",
                "--cluster-domain=cluster.local",
                "--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0",
                "--cgroups-per-qos=True",
                "--enforce-node-allocatable=",
                "--hostname-override=18.219.87.147",
                "--cluster-dns=10.233.0.3",
                "--network-plugin=cni",
                "--cni-conf-dir=/etc/cni/net.d",
                "--cni-bin-dir=/opt/cni/bin",
                "--resolv-conf=/etc/resolv.conf",
                "--allow-privileged=true",
                "--cloud-provider=",
                "--kubeconfig=/etc/kubernetes/ssl/kubecfg-kube-node.yaml",
                "--require-kubeconfig=True"
            ],
            "OnBuild": null,
            "Labels": {}
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "a927cf02b93ab762f32437378a57cfa6595421cab2c196f5ef9a75dd1a958b52",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": null,
            "SandboxKey": "/var/run/docker/netns/default",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "host": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "3ae6d5ba2461b902b1694f88dffb067d15db02dffd792b9fd65bfeda0f22fd23",
                    "EndpointID": "",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": ""
                }
            }
        }
    }
]

@moelsayed (Contributor)

This turned out to be an issue with the AMI we built specifically for testing.

I haven't been able to pinpoint the exact cause yet; however, it seems that restarting the docker service before creating the AMI, or rebooting the instance after it's created from the AMI, fixes the issue.
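As a hedged aside (not from this thread's authors): on Ubuntu/Debian, the update-notifier machinery drops a flag file at /var/run/reboot-required when installed updates need a reboot, so a pre-flight check before running rke up is straightforward. A minimal sketch, with an illustrative function name; the path override exists only to make the function testable:

```shell
#!/bin/sh
# Sketch: detect whether a host still needs a reboot after package updates,
# which (per this thread) can leave Docker unable to start new containers
# with a "pivot_root invalid argument" error.
# The flag file is created by update-notifier on Ubuntu/Debian systems.
check_reboot_required() {
    flag="${1:-/var/run/reboot-required}"   # path override for testing
    if [ -f "$flag" ]; then
        echo "reboot required"
        return 1
    fi
    echo "no pending reboot"
    return 0
}
```

If the check reports a pending reboot during image validation, reboot the instance (or rebuild the AMI) before running rke up.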

@lindenle

@moelsayed Any news on this? We are seeing it as well.

@moelsayed (Contributor)

@lindenle Can you provide more details on your case?

@jamiebuxxx

We have seen this issue recently as well. It turns out that the base AMI we were building from had unattended upgrades enabled. It appears the unattended upgrade was updating some kernel headers and lib packages AFTER Docker had been installed, leaving the host in need of a reboot.

We disabled it with Packer using sudo apt-get remove -y unattended-upgrades and everything is working as expected.
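For reference, a minimal sketch of how that removal might look as a Packer shell provisioner (a JSON template fragment only; the surrounding builder configuration is omitted, and this is an illustration based on the command in the comment above, not the poster's actual template):

```json
{
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt-get remove -y unattended-upgrades"
      ]
    }
  ]
}
```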

@moelsayed (Contributor)

@jamiebuxxx Thank you for the excellent feedback.

@lindenle Can you confirm that this works in your case?

@galal-hussein (Contributor)

Confirmed on my side: this was caused by unattended-upgrades. After removing the package, I don't see the problem anymore.
