==> Audit <==
|-----------|-----------------|----------|-------------|---------|---------------------|---------------------|
|  Command  |      Args       | Profile  |    User     | Version |     Start Time      |      End Time       |
|-----------|-----------------|----------|-------------|---------|---------------------|---------------------|
| start     |                 | minikube | mrw-macmini | v1.34.0 | 11 Jan 25 23:14 CST |                     |
| start     | --driver=docker | minikube | mrw-macmini | v1.34.0 | 11 Jan 25 23:15 CST |                     |
| start     |                 | minikube | mrw-macmini | v1.34.0 | 11 Jan 25 23:18 CST | 11 Jan 25 23:20 CST |
| kubectl   | -- get po -A    | minikube | mrw-macmini | v1.34.0 | 11 Jan 25 23:21 CST | 11 Jan 25 23:21 CST |
| dashboard |                 | minikube | mrw-macmini | v1.34.0 | 11 Jan 25 23:22 CST |                     |
|-----------|-----------------|----------|-------------|---------|---------------------|---------------------|

==> Last Start <==
Log file created at: 2025/01/11 23:18:19
Running on machine: mrwdeMac-mini
Binary: Built with gc go1.23.1 for darwin/arm64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0111 23:18:19.677322    3750 out.go:345] Setting OutFile to fd 1 ...
I0111 23:18:19.679115    3750 out.go:397] isatty.IsTerminal(1) = true
I0111 23:18:19.679117    3750 out.go:358] Setting ErrFile to fd 2...
I0111 23:18:19.679119    3750 out.go:397] isatty.IsTerminal(2) = true
I0111 23:18:19.679340    3750 root.go:338] Updating PATH: /Users/mrw-macmini/.minikube/bin
W0111 23:18:19.680679    3750 root.go:314] Error reading config file at /Users/mrw-macmini/.minikube/config/config.json: open /Users/mrw-macmini/.minikube/config/config.json: no such file or directory
I0111 23:18:19.681397    3750 out.go:352] Setting JSON to false
I0111 23:18:19.699141    3750 start.go:129] hostinfo: {"hostname":"mrwdeMac-mini.local","uptime":1257741,"bootTime":1735350958,"procs":643,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"15.2","kernelVersion":"24.2.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"89e39479-277e-5066-9899-6c4f064981b7"}
W0111 23:18:19.699579    3750 start.go:137] gopshost.Virtualization returned error: not implemented yet
I0111 23:18:19.705100    3750 out.go:177] 😄 minikube v1.34.0 on Darwin 15.2 (arm64)
I0111 23:18:19.714723    3750 driver.go:394] Setting default libvirt URI to qemu:///system
W0111 23:18:19.714723    3750 preload.go:293] Failed to list preload files: open /Users/mrw-macmini/.minikube/cache/preloaded-tarball: no such file or directory
I0111 23:18:19.714838    3750 notify.go:220] Checking for updates...
I0111 23:18:19.715506    3750 global.go:112] Querying for installed drivers using PATH=/Users/mrw-macmini/.minikube/bin:/Library/Frameworks/Python.framework/Versions/3.13/bin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/Applications/VMware Fusion.app/Contents/Public
I0111 23:18:19.715626    3750 global.go:133] virtualbox default: true priority: 6, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:unable to find VBoxManage in $PATH Reason: Fix:Install VirtualBox Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/ Version:}
I0111 23:18:19.715706    3750 global.go:133] vmware default: false priority: 5, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0111 23:18:19.785613    3750 docker.go:123] docker version: linux-27.4.0:Docker Desktop 4.37.2 (179585)
I0111 23:18:19.785689    3750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0111 23:18:20.064663    3750 info.go:266] docker info: {ID:38fed41e-2b16-4be4-80e6-24c9d08910b8 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlayfs DriverStatus:[[driver-type io.containerd.snapshotter.v1]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:89 SystemTime:2025-01-11 15:18:20.050011135 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:17 KernelVersion:6.10.14-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:10 MemTotal:8217968640 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/mrw-macmini/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:/Users/mrw-macmini/.docker/cli-plugins/docker-ai SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:/Users/mrw-macmini/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:/Users/mrw-macmini/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:/Users/mrw-macmini/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:/Users/mrw-macmini/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:/Users/mrw-macmini/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/mrw-macmini/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:/Users/mrw-macmini/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/mrw-macmini/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:/Users/mrw-macmini/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/mrw-macmini/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:}}
I0111 23:18:20.064747    3750 global.go:133] docker default: true priority: 9, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0111 23:18:20.064828    3750 global.go:133] parallels default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "prlctl": executable file not found in $PATH Reason: Fix:Install Parallels Desktop for Mac Doc:https://minikube.sigs.k8s.io/docs/drivers/parallels/ Version:}
I0111 23:18:20.064868    3750 global.go:133] vfkit default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "vfkit": executable file not found in $PATH Reason: Fix:Run 'brew tap cfergeau/crc && brew install vfkit' Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vfkit/ Version:}
I0111 23:18:20.064918    3750 global.go:133] podman default: true priority: 3, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "podman": executable file not found in $PATH Reason: Fix:Install Podman Doc:https://minikube.sigs.k8s.io/docs/drivers/podman/ Version:}
I0111 23:18:20.064923    3750 global.go:133] ssh default: false priority: 4, state: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0111 23:18:20.065102    3750 global.go:133] hyperkit default: true priority: 8, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "hyperkit": executable file not found in $PATH Reason: Fix:Run 'brew install hyperkit' Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/ Version:}
I0111 23:18:20.065288    3750 global.go:133] qemu2 default: true priority: 7, state: {Installed:false Healthy:false Running:false NeedsImprovement:false Error:exec: "qemu-system-aarch64": executable file not found in $PATH Reason: Fix:Install qemu-system Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/qemu/ Version:}
I0111 23:18:20.065425    3750 driver.go:316] not recommending "vmware" due to default: false
I0111 23:18:20.065426    3750 driver.go:316] not recommending "ssh" due to default: false
I0111 23:18:20.065436    3750 driver.go:351] Picked: docker
I0111 23:18:20.065438    3750 driver.go:352] Alternatives: [vmware ssh]
I0111 23:18:20.065439    3750 driver.go:353] Rejects: [virtualbox parallels vfkit podman hyperkit qemu2]
I0111 23:18:20.070104    3750 out.go:177] ✨ Automatically selected the docker driver. Other choices: vmware, ssh
I0111 23:18:20.078291    3750 start.go:297] selected driver: docker
I0111 23:18:20.078296    3750 start.go:901] validating driver "docker" against <nil>
I0111 23:18:20.078301    3750 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0111 23:18:20.078383    3750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0111 23:18:20.150936    3750 info.go:266] docker info: {ID:38fed41e-2b16-4be4-80e6-24c9d08910b8 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlayfs DriverStatus:[[driver-type io.containerd.snapshotter.v1]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:56 OomKillDisable:false NGoroutines:89 SystemTime:2025-01-11 15:18:20.137748177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:17 KernelVersion:6.10.14-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:10 MemTotal:8217968640 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=unix:///Users/mrw-macmini/Library/Containers/com.docker.docker/Data/docker-cli.sock] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined name=cgroupns] ProductLicense: Warnings:[WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:/Users/mrw-macmini/.docker/cli-plugins/docker-ai SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:/Users/mrw-macmini/.docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:/Users/mrw-macmini/.docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:/Users/mrw-macmini/.docker/cli-plugins/docker-debug SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:/Users/mrw-macmini/.docker/cli-plugins/docker-desktop SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:/Users/mrw-macmini/.docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:/Users/mrw-macmini/.docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:/Users/mrw-macmini/.docker/cli-plugins/docker-feedback SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:/Users/mrw-macmini/.docker/cli-plugins/docker-init SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:/Users/mrw-macmini/.docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:/Users/mrw-macmini/.docker/cli-plugins/docker-scout SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:}}
I0111 23:18:20.151308    3750 start_flags.go:310] no existing cluster config was found, will generate one from the flags
I0111 23:18:20.151384    3750 start_flags.go:393] Using suggested 4000MB memory alloc based on sys=16384MB, container=7837MB
I0111 23:18:20.151773    3750 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
I0111 23:18:20.156069    3750 out.go:177] 📌 Using Docker Desktop driver with root privileges
I0111 23:18:20.160260    3750 cni.go:84] Creating CNI manager for ""
I0111 23:18:20.160461    3750 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0111 23:18:20.160464    3750 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
I0111 23:18:20.160497    3750 start.go:340] cluster config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0111 23:18:20.163914    3750 out.go:177] 👍 Starting "minikube" primary control-plane node in "minikube" cluster
I0111 23:18:20.178242    3750 cache.go:121] Beginning downloading kic base image for docker with docker
I0111 23:18:20.181995    3750 out.go:177] 🚜 Pulling base image v0.0.45 ...
I0111 23:18:20.190187    3750 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
I0111 23:18:20.190332    3750 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0111 23:18:20.205717    3750 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
I0111 23:18:20.206100    3750 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
I0111 23:18:20.206198    3750 image.go:148] Writing gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
I0111 23:18:21.103903    3750 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0111 23:18:21.103956    3750 cache.go:56] Caching tarball of preloaded images
I0111 23:18:21.104732    3750 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0111 23:18:21.110085    3750 out.go:177] 💾 Downloading Kubernetes v1.31.0 preload ...
I0111 23:18:21.118091    3750 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
I0111 23:18:22.185448    3750 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4?checksum=md5:90c22abece392b762c0b4e45be981bb4 -> /Users/mrw-macmini/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4
I0111 23:19:08.748204    3750 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
I0111 23:19:08.748313    3750 preload.go:254] verifying checksum of /Users/mrw-macmini/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4 ...
I0111 23:19:09.167285    3750 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on docker
I0111 23:19:09.168462    3750 profile.go:143] Saving config to /Users/mrw-macmini/.minikube/profiles/minikube/config.json ...
I0111 23:19:09.168475    3750 lock.go:35] WriteFile acquiring /Users/mrw-macmini/.minikube/profiles/minikube/config.json: {Name:mk2e020c4c19a101b9b269e48a846a24df38ceed Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0111 23:19:53.949546    3750 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
I0111 23:19:53.949566    3750 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from local cache
I0111 23:20:05.239606    3750 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from cached tarball
I0111 23:20:05.239868    3750 cache.go:194] Successfully downloaded all kic artifacts
I0111 23:20:05.240945    3750 start.go:360] acquireMachinesLock for minikube: {Name:mk7cf43fc01d2e18c38de43b35cdd05fbf02fde2 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0111 23:20:05.241322    3750 start.go:364] duration metric: took 329.709µs to acquireMachinesLock for "minikube"
I0111 23:20:05.241599    3750 start.go:93] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0111 23:20:05.241733    3750 start.go:125] createHost starting for "" (driver="docker")
I0111 23:20:05.245953    3750 out.go:235] 🔥 Creating docker container (CPUs=2, Memory=4000MB) ...
I0111 23:20:05.246638    3750 start.go:159] libmachine.API.Create for "minikube" (driver="docker")
I0111 23:20:05.246780    3750 client.go:168] LocalClient.Create starting
I0111 23:20:05.250661    3750 main.go:141] libmachine: Creating CA: /Users/mrw-macmini/.minikube/certs/ca.pem
I0111 23:20:05.349119    3750 main.go:141] libmachine: Creating client certificate: /Users/mrw-macmini/.minikube/certs/cert.pem
I0111 23:20:05.426577    3750 cli_runner.go:164] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0111 23:20:05.443285    3750 cli_runner.go:211] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0111 23:20:05.443355    3750 network_create.go:284] running [docker network inspect minikube] to gather additional debugging logs...
I0111 23:20:05.443363    3750 cli_runner.go:164] Run: docker network inspect minikube
W0111 23:20:05.456077    3750 cli_runner.go:211] docker network inspect minikube returned with exit code 1
I0111 23:20:05.456095    3750 network_create.go:287] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error response from daemon: network minikube not found
I0111 23:20:05.456103    3750 network_create.go:289] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr **
Error response from daemon: network minikube not found

** /stderr **
I0111 23:20:05.456202    3750 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0111 23:20:05.469522    3750 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x140019b97b0}
I0111 23:20:05.469543    3750 network_create.go:124] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 65535 ...
I0111 23:20:05.469588    3750 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=65535 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=minikube minikube
I0111 23:20:05.520362    3750 network_create.go:108] docker network minikube 192.168.49.0/24 created
I0111 23:20:05.520405    3750 kic.go:121] calculated static IP "192.168.49.2" for the "minikube" container
I0111 23:20:05.520522    3750 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
I0111 23:20:05.532435    3750 cli_runner.go:164] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0111 23:20:05.544771    3750 oci.go:103] Successfully created a docker volume minikube
I0111 23:20:05.544870    3750 cli_runner.go:164] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -d /var/lib
I0111 23:20:06.390772    3750 oci.go:107] Successfully prepared a docker volume minikube
I0111 23:20:06.390817    3750 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0111 23:20:06.390852    3750 kic.go:194] Starting extracting preloaded images to volume ...
I0111 23:20:06.390980    3750 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/mrw-macmini/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir
I0111 23:20:07.999340    3750 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/mrw-macmini/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir: (1.608269708s)
I0111 23:20:07.999384    3750 kic.go:203] duration metric: took 1.608519792s to extract preloaded images to volume ...
I0111 23:20:08.000570    3750 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
I0111 23:20:09.265568    3750 cli_runner.go:217] Completed: docker info --format "'{{json .SecurityOptions}}'": (1.264955667s)
I0111 23:20:09.265702    3750 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory=4000mb --memory-swap=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85
I0111 23:20:09.457176    3750 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Running}}
I0111 23:20:09.474806    3750 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0111 23:20:09.488775    3750 cli_runner.go:164] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0111 23:20:09.518042    3750 oci.go:144] the created container "minikube" has a running status.
I0111 23:20:09.518245    3750 kic.go:225] Creating ssh key for kic: /Users/mrw-macmini/.minikube/machines/minikube/id_rsa...
I0111 23:20:09.737113    3750 kic_runner.go:191] docker (temp): /Users/mrw-macmini/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0111 23:20:09.760515    3750 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0111 23:20:09.771986    3750 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0111 23:20:09.771996    3750 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0111 23:20:09.809609    3750 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0111 23:20:09.820875    3750 machine.go:93] provisionDockerMachine start ...
I0111 23:20:09.821836    3750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0111 23:20:09.832619    3750 main.go:141] libmachine: Using SSH client type: native
I0111 23:20:09.833402    3750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052ed4b0] 0x1052efcf0 <nil> [] 0s} 127.0.0.1 58999 <nil> <nil>}
I0111 23:20:09.833405    3750 main.go:141] libmachine: About to run SSH command:
hostname
I0111 23:20:09.937707    3750 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0111 23:20:09.937894    3750 ubuntu.go:169] provisioning hostname "minikube"
I0111 23:20:09.938198    3750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0111 23:20:09.952725    3750 main.go:141] libmachine: Using SSH client type: native
I0111 23:20:09.954187    3750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052ed4b0] 0x1052efcf0 <nil> [] 0s} 127.0.0.1 58999 <nil> <nil>}
I0111 23:20:09.954192    3750 main.go:141] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0111 23:20:10.067511    3750 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
I0111 23:20:10.068362    3750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0111 23:20:10.091490    3750 main.go:141] libmachine: Using SSH client type: native
I0111 23:20:10.091665    3750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052ed4b0] 0x1052efcf0 <nil> [] 0s} 127.0.0.1 58999 <nil> <nil>}
I0111 23:20:10.091672    3750 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I0111 23:20:10.200365    3750 main.go:141] libmachine: SSH cmd err, output: <nil>:
I0111 23:20:10.200399    3750 ubuntu.go:175] set auth options {CertDir:/Users/mrw-macmini/.minikube CaCertPath:/Users/mrw-macmini/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/mrw-macmini/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/mrw-macmini/.minikube/machines/server.pem ServerKeyPath:/Users/mrw-macmini/.minikube/machines/server-key.pem ClientKeyPath:/Users/mrw-macmini/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/mrw-macmini/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/mrw-macmini/.minikube}
I0111 23:20:10.200434    3750 ubuntu.go:177] setting up certificates
I0111 23:20:10.200531    3750 provision.go:84] configureAuth start
I0111 23:20:10.201490    3750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0111 23:20:10.228550    3750 provision.go:143] copyHostCerts
I0111 23:20:10.229065    3750 exec_runner.go:151] cp: /Users/mrw-macmini/.minikube/certs/ca.pem --> /Users/mrw-macmini/.minikube/ca.pem (1090 bytes)
I0111 23:20:10.229319    3750 exec_runner.go:151] cp: /Users/mrw-macmini/.minikube/certs/cert.pem --> /Users/mrw-macmini/.minikube/cert.pem (1135 bytes)
I0111 23:20:10.229621    3750 exec_runner.go:151] cp: /Users/mrw-macmini/.minikube/certs/key.pem --> /Users/mrw-macmini/.minikube/key.pem (1675 bytes)
I0111 23:20:10.230663    3750 provision.go:117] generating server cert: /Users/mrw-macmini/.minikube/machines/server.pem ca-key=/Users/mrw-macmini/.minikube/certs/ca.pem private-key=/Users/mrw-macmini/.minikube/certs/ca-key.pem org=mrw-macmini.minikube san=[127.0.0.1 192.168.49.2 localhost minikube]
I0111 23:20:10.300476    3750 provision.go:177] copyRemoteCerts
I0111 23:20:10.300911    3750 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0111 23:20:10.300947    3750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0111 23:20:10.317117    3750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58999 SSHKeyPath:/Users/mrw-macmini/.minikube/machines/minikube/id_rsa Username:docker}
I0111 23:20:10.413365    3750 ssh_runner.go:362] scp /Users/mrw-macmini/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1090 bytes)
I0111 23:20:10.434101    3750 ssh_runner.go:362] scp /Users/mrw-macmini/.minikube/machines/server.pem --> /etc/docker/server.pem (1192 bytes)
I0111 23:20:10.449042    3750 ssh_runner.go:362] scp /Users/mrw-macmini/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0111 23:20:10.460090    3750 provision.go:87] duration metric: took 259.319833ms to configureAuth
I0111 23:20:10.460104    3750 ubuntu.go:193] setting minikube options for container-runtime
I0111 23:20:10.460626    3750 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0111 23:20:10.460694    3750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0111 23:20:10.476472    3750 main.go:141] libmachine: Using SSH client type: native
I0111 23:20:10.476643    3750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052ed4b0] 0x1052efcf0 <nil> [] 0s} 127.0.0.1 58999 <nil> <nil>}
I0111 23:20:10.476651    3750 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0111 23:20:10.578409    3750 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
I0111 23:20:10.578418    3750 ubuntu.go:71] root file system type: overlay
I0111 23:20:10.580506    3750 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0111 23:20:10.580823    3750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0111 23:20:10.611154    3750 main.go:141] libmachine: Using SSH client type: native
I0111 23:20:10.611338    3750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052ed4b0] 0x1052efcf0 <nil> [] 0s} 127.0.0.1 58999 <nil> <nil>}
I0111 23:20:10.611386    3750 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0111 23:20:10.726509    3750 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0111 23:20:10.727138    3750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0111 23:20:10.755715    3750 main.go:141] libmachine: Using SSH client type: native
I0111 23:20:10.755946    3750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052ed4b0] 0x1052efcf0 <nil> [] 0s} 127.0.0.1 58999 <nil> <nil>}
I0111 23:20:10.755956    3750 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0111 23:20:11.112463    3750 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-08-27 14:13:43.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2025-01-11 15:20:10.724530006 +0000
@@ -1,46 +1,49 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
-Wants=network-online.target containerd.service
+BindsTo=containerd.service
+After=network-online.target firewalld.service containerd.service
+Wants=network-online.target
 Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutStartSec=0
-RestartSec=2
-Restart=always
+Restart=on-failure
 
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
 
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
+LimitNOFILE=infinity
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0111 23:20:11.112483    3750 machine.go:96] duration metric: took 1.291584542s to provisionDockerMachine
I0111 23:20:11.112491    3750 client.go:171] duration metric: took 5.865667167s to LocalClient.Create
I0111 23:20:11.112525    3750 start.go:167] duration metric: took 5.865847208s to libmachine.API.Create "minikube"
I0111 23:20:11.112534    3750 start.go:293] postStartSetup for "minikube" (driver="docker")
I0111 23:20:11.112825    3750 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0111 23:20:11.112980    3750 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0111 23:20:11.113046    3750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0111 23:20:11.140292    3750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58999 SSHKeyPath:/Users/mrw-macmini/.minikube/machines/minikube/id_rsa Username:docker}
I0111 23:20:11.214033    3750 ssh_runner.go:195] Run: cat /etc/os-release
I0111 23:20:11.216173    3750 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0111 23:20:11.216205    3750 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0111 23:20:11.216213    3750 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0111 23:20:11.216216    3750 info.go:137] Remote host: Ubuntu 22.04.4 LTS
I0111 23:20:11.216222    3750 filesync.go:126] Scanning /Users/mrw-macmini/.minikube/addons for local assets ...
I0111 23:20:11.216329    3750 filesync.go:126] Scanning /Users/mrw-macmini/.minikube/files for local assets ...
I0111 23:20:11.216381    3750 start.go:296] duration metric: took 103.843ms for postStartSetup
I0111 23:20:11.217125    3750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0111 23:20:11.241473    3750 profile.go:143] Saving config to /Users/mrw-macmini/.minikube/profiles/minikube/config.json ...
I0111 23:20:11.241890    3750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0111 23:20:11.241932    3750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0111 23:20:11.258168    3750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58999 SSHKeyPath:/Users/mrw-macmini/.minikube/machines/minikube/id_rsa Username:docker}
I0111 23:20:11.333960    3750 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0111 23:20:11.336803    3750 start.go:128] duration metric: took 6.095019292s to createHost
I0111 23:20:11.336814    3750 start.go:83] releasing machines lock for "minikube", held for 6.09542975s
I0111 23:20:11.336929    3750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0111 23:20:11.380090    3750 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0111 23:20:11.380260    3750 ssh_runner.go:195] Run: cat /version.json
I0111 23:20:11.380330    3750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0111 23:20:11.381394    3750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0111 23:20:11.403887    3750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58999 SSHKeyPath:/Users/mrw-macmini/.minikube/machines/minikube/id_rsa Username:docker}
I0111 23:20:11.404280    3750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58999 SSHKeyPath:/Users/mrw-macmini/.minikube/machines/minikube/id_rsa Username:docker}
I0111 23:20:12.816694    3750 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (1.436539875s)
I0111 23:20:12.816720    3750 ssh_runner.go:235] Completed: cat /version.json: (1.436427458s)
I0111 23:20:12.819639    3750 ssh_runner.go:195] Run: systemctl --version
I0111 23:20:12.826412    3750 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
I0111 23:20:12.831550    3750 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
I0111 23:20:12.855265    3750 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
I0111 23:20:12.855472    3750 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f \( \( -name *bridge* -or -name *podman* \) -and -not -name *.mk_disabled \) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0111 23:20:12.869558    3750 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
I0111 23:20:12.869572    3750 start.go:495] detecting cgroup driver to use...
I0111 23:20:12.869599    3750 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0111 23:20:12.871410    3750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0111 23:20:12.879073    3750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
I0111 23:20:12.883913    3750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0111 23:20:12.888027    3750 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0111 23:20:12.888100    3750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0111 23:20:12.892031    3750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0111 23:20:12.895796    3750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0111 23:20:12.899341    3750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0111 23:20:12.902812    3750 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0111 23:20:12.907014    3750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0111 23:20:12.910499    3750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0111 23:20:12.914099    3750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0111 23:20:12.917684    3750 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0111 23:20:12.920720    3750 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0111 23:20:12.923941    3750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 23:20:12.950379    3750 ssh_runner.go:195] Run: sudo systemctl restart containerd
I0111 23:20:12.992260    3750 start.go:495] detecting cgroup driver to use...
I0111 23:20:12.992277    3750 detect.go:187] detected "cgroupfs" cgroup driver on host os
I0111 23:20:12.992406    3750 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0111 23:20:12.998070    3750 cruntime.go:279] skipping containerd shutdown because we are bound to it
I0111 23:20:12.998187    3750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0111 23:20:13.003014    3750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0111 23:20:13.009482    3750 ssh_runner.go:195] Run: which cri-dockerd
I0111 23:20:13.011187    3750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0111 23:20:13.015535    3750 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
I0111 23:20:13.022938    3750 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0111 23:20:13.051403    3750 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0111 23:20:13.076388    3750 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0111 23:20:13.078228    3750 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
I0111 23:20:13.085171    3750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 23:20:13.115435    3750 ssh_runner.go:195] Run: sudo systemctl restart docker
I0111 23:20:13.247812    3750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0111 23:20:13.252451    3750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0111 23:20:13.256940    3750 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0111 23:20:13.284162    3750 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0111 23:20:13.311302    3750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 23:20:13.338470    3750 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0111 23:20:13.367487    3750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0111 23:20:13.371551    3750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 23:20:13.397141    3750 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0111 23:20:13.444792    3750 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0111 23:20:13.444958    3750 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0111 23:20:13.446940    3750 start.go:563] Will wait 60s for crictl version
I0111 23:20:13.447030    3750 ssh_runner.go:195] Run: which crictl
I0111 23:20:13.448560    3750 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0111 23:20:13.467367    3750 start.go:579] Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  27.2.0
RuntimeApiVersion:  v1
I0111 23:20:13.467517    3750 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0111 23:20:13.485130    3750 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0111 23:20:13.498796    3750 out.go:235] 🐳 Preparing Kubernetes v1.31.0 on Docker 27.2.0 ...
I0111 23:20:13.499479 3750 cli_runner.go:164] Run: docker exec -t minikube dig +short host.docker.internal
I0111 23:20:13.587036 3750 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
I0111 23:20:13.587175 3750 ssh_runner.go:195] Run: grep 192.168.65.254 host.minikube.internal$ /etc/hosts
I0111 23:20:13.588936 3750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0111 23:20:13.593059 3750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0111 23:20:13.608296 3750 kubeadm.go:883] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
I0111 23:20:13.608560 3750 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime docker
I0111 23:20:13.608607 3750 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0111 23:20:13.617352 3750 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0111 23:20:13.617357 3750 docker.go:615] Images already preloaded, skipping extraction
I0111 23:20:13.617445 3750 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0111 23:20:13.623971 3750 docker.go:685] Got preloaded images: -- stdout --
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/pause:3.10
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0111 23:20:13.623979 3750 cache_images.go:84] Images are preloaded, skipping loading
I0111 23:20:13.623998 3750 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 docker true true} ...
I0111 23:20:13.624671 3750 kubeadm.go:946] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
 config:
{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0111 23:20:13.624810 3750 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0111 23:20:13.646570 3750 cni.go:84] Creating CNI manager for ""
I0111 23:20:13.646576 3750 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0111 23:20:13.646586 3750 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0111 23:20:13.646868 3750 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0111 23:20:13.647356 3750 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.31.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0111 23:20:13.647446 3750 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
I0111 23:20:13.651089 3750 binaries.go:44] Found k8s binaries, skipping transfer
I0111 23:20:13.651133 3750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0111 23:20:13.654238 3750 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
I0111 23:20:13.660459 3750 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0111 23:20:13.666711 3750 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
I0111 23:20:13.672860 3750 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0111 23:20:13.674284 3750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0111 23:20:13.678083 3750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 23:20:13.704848 3750 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0111 23:20:13.736665 3750 certs.go:68] Setting up /Users/mrw-macmini/.minikube/profiles/minikube for IP: 192.168.49.2
I0111 23:20:13.736672 3750 certs.go:194] generating shared ca certs ...
I0111 23:20:13.736870 3750 certs.go:226] acquiring lock for ca certs: {Name:mk926997be3d19c005308a74fa419737e2cd0ded Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0111 23:20:13.737162 3750 certs.go:240] generating "minikubeCA" ca cert: /Users/mrw-macmini/.minikube/ca.key
I0111 23:20:13.774992 3750 crypto.go:156] Writing cert to /Users/mrw-macmini/.minikube/ca.crt ...
I0111 23:20:13.775000 3750 lock.go:35] WriteFile acquiring /Users/mrw-macmini/.minikube/ca.crt: {Name:mk646faa19138a2a3c738f928848523d975b1846 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0111 23:20:13.775212 3750 crypto.go:164] Writing key to /Users/mrw-macmini/.minikube/ca.key ...
I0111 23:20:13.775215 3750 lock.go:35] WriteFile acquiring /Users/mrw-macmini/.minikube/ca.key: {Name:mk278680e1d367f8e515b7b1b1167e02d3280837 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0111 23:20:13.775331 3750 certs.go:240] generating "proxyClientCA" ca cert: /Users/mrw-macmini/.minikube/proxy-client-ca.key
I0111 23:20:13.883024 3750 crypto.go:156] Writing cert to /Users/mrw-macmini/.minikube/proxy-client-ca.crt ...
I0111 23:20:13.883028 3750 lock.go:35] WriteFile acquiring /Users/mrw-macmini/.minikube/proxy-client-ca.crt: {Name:mkd5ac74c622cc51c90f13ee4550165abeee723c Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0111 23:20:13.883217 3750 crypto.go:164] Writing key to /Users/mrw-macmini/.minikube/proxy-client-ca.key ...
I0111 23:20:13.883218 3750 lock.go:35] WriteFile acquiring /Users/mrw-macmini/.minikube/proxy-client-ca.key: {Name:mk9f4d7ddf80f2e9c716ff4437c773dffdca9c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0111 23:20:13.884009 3750 certs.go:256] generating profile certs ...
I0111 23:20:13.884069 3750 certs.go:363] generating signed profile cert for "minikube-user": /Users/mrw-macmini/.minikube/profiles/minikube/client.key
I0111 23:20:13.884085 3750 crypto.go:68] Generating cert /Users/mrw-macmini/.minikube/profiles/minikube/client.crt with IP's: []
I0111 23:20:13.929168 3750 crypto.go:156] Writing cert to /Users/mrw-macmini/.minikube/profiles/minikube/client.crt ...
I0111 23:20:13.929171 3750 lock.go:35] WriteFile acquiring /Users/mrw-macmini/.minikube/profiles/minikube/client.crt: {Name:mk5ab1ab90b0e5c48266196ad999ce4e96fb7de3 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0111 23:20:13.929316 3750 crypto.go:164] Writing key to /Users/mrw-macmini/.minikube/profiles/minikube/client.key ...
I0111 23:20:13.929320 3750 lock.go:35] WriteFile acquiring /Users/mrw-macmini/.minikube/profiles/minikube/client.key: {Name:mk7de9300538bfe87e250dbd0ddb8c2e08549222 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0111 23:20:13.929424 3750 certs.go:363] generating signed profile cert for "minikube": /Users/mrw-macmini/.minikube/profiles/minikube/apiserver.key.7fb57e3c
I0111 23:20:13.929430 3750 crypto.go:68] Generating cert /Users/mrw-macmini/.minikube/profiles/minikube/apiserver.crt.7fb57e3c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
I0111 23:20:13.961826 3750 crypto.go:156] Writing cert to /Users/mrw-macmini/.minikube/profiles/minikube/apiserver.crt.7fb57e3c ...
I0111 23:20:13.961831 3750 lock.go:35] WriteFile acquiring /Users/mrw-macmini/.minikube/profiles/minikube/apiserver.crt.7fb57e3c: {Name:mk79d70da82f9162a085b4463d22d052a0184652 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0111 23:20:13.962001 3750 crypto.go:164] Writing key to /Users/mrw-macmini/.minikube/profiles/minikube/apiserver.key.7fb57e3c ...
I0111 23:20:13.962011 3750 lock.go:35] WriteFile acquiring /Users/mrw-macmini/.minikube/profiles/minikube/apiserver.key.7fb57e3c: {Name:mkc60d0930cc818e9e26c1054bc3a9ff7418d7b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0111 23:20:13.962101 3750 certs.go:381] copying /Users/mrw-macmini/.minikube/profiles/minikube/apiserver.crt.7fb57e3c -> /Users/mrw-macmini/.minikube/profiles/minikube/apiserver.crt
I0111 23:20:13.962310 3750 certs.go:385] copying /Users/mrw-macmini/.minikube/profiles/minikube/apiserver.key.7fb57e3c -> /Users/mrw-macmini/.minikube/profiles/minikube/apiserver.key
I0111 23:20:13.962373 3750 certs.go:363] generating signed profile cert for "aggregator": /Users/mrw-macmini/.minikube/profiles/minikube/proxy-client.key
I0111 23:20:13.962378 3750 crypto.go:68] Generating cert /Users/mrw-macmini/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0111 23:20:14.028086 3750 crypto.go:156] Writing cert to /Users/mrw-macmini/.minikube/profiles/minikube/proxy-client.crt ...
I0111 23:20:14.028090 3750 lock.go:35] WriteFile acquiring /Users/mrw-macmini/.minikube/profiles/minikube/proxy-client.crt: {Name:mk7488ff7b32d7a8075e131a35299ffcb6c7821a Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0111 23:20:14.028238 3750 crypto.go:164] Writing key to /Users/mrw-macmini/.minikube/profiles/minikube/proxy-client.key ...
I0111 23:20:14.028240 3750 lock.go:35] WriteFile acquiring /Users/mrw-macmini/.minikube/profiles/minikube/proxy-client.key: {Name:mkae2b58889fdcb190dc446a464c68915f82190c Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0111 23:20:14.028444 3750 certs.go:484] found cert: /Users/mrw-macmini/.minikube/certs/ca-key.pem (1675 bytes)
I0111 23:20:14.028464 3750 certs.go:484] found cert: /Users/mrw-macmini/.minikube/certs/ca.pem (1090 bytes)
I0111 23:20:14.028482 3750 certs.go:484] found cert: /Users/mrw-macmini/.minikube/certs/cert.pem (1135 bytes)
I0111 23:20:14.028497 3750 certs.go:484] found cert: /Users/mrw-macmini/.minikube/certs/key.pem (1675 bytes)
I0111 23:20:14.034067 3750 ssh_runner.go:362] scp /Users/mrw-macmini/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0111 23:20:14.057422 3750 ssh_runner.go:362] scp /Users/mrw-macmini/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0111 23:20:14.071504 3750 ssh_runner.go:362] scp /Users/mrw-macmini/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0111 23:20:14.083998 3750 ssh_runner.go:362] scp /Users/mrw-macmini/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0111 23:20:14.093925 3750 ssh_runner.go:362] scp /Users/mrw-macmini/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes)
I0111 23:20:14.104176 3750 ssh_runner.go:362] scp /Users/mrw-macmini/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0111 23:20:14.113252 3750 ssh_runner.go:362] scp /Users/mrw-macmini/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0111 23:20:14.121822 3750 ssh_runner.go:362] scp /Users/mrw-macmini/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0111 23:20:14.129791 3750 ssh_runner.go:362] scp /Users/mrw-macmini/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0111 23:20:14.138591 3750 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0111 23:20:14.144403 3750 ssh_runner.go:195] Run: openssl version
I0111 23:20:14.146650 3750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0111 23:20:14.150631 3750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0111 23:20:14.151939 3750 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 11 15:20 /usr/share/ca-certificates/minikubeCA.pem
I0111 23:20:14.151961 3750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0111 23:20:14.154421 3750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0111 23:20:14.157587 3750 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0111 23:20:14.159896 3750 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
I0111 23:20:14.159938 3750 kubeadm.go:392] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
I0111 23:20:14.159993 3750 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0111 23:20:14.166079 3750 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0111 23:20:14.169326 3750 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0111 23:20:14.172527 3750 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
I0111 23:20:14.172763 3750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0111 23:20:14.175714 3750 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0111 23:20:14.175719 3750 kubeadm.go:157] found existing configuration files:
I0111 23:20:14.175750 3750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0111 23:20:14.178618 3750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/admin.conf: No such file or directory
I0111 23:20:14.178640 3750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
I0111 23:20:14.181542 3750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0111 23:20:14.184554 3750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/kubelet.conf: No such file or directory
I0111 23:20:14.184583 3750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
I0111 23:20:14.187676 3750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0111 23:20:14.190674 3750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/controller-manager.conf: No such file or directory
I0111 23:20:14.190708 3750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0111 23:20:14.193670 3750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0111 23:20:14.196560 3750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
grep: /etc/kubernetes/scheduler.conf: No such file or directory
I0111 23:20:14.196586 3750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0111 23:20:14.199690 3750 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0111 23:20:14.217640 3750 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
I0111 23:20:14.217667 3750 kubeadm.go:310] [preflight] Running pre-flight checks
I0111 23:20:14.248470 3750 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
I0111 23:20:14.248572 3750 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0111 23:20:14.248645 3750 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
I0111 23:20:14.253073 3750 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0111 23:20:14.264240 3750 out.go:235] ▪ Generating certificates and keys ...
I0111 23:20:14.264330 3750 kubeadm.go:310] [certs] Using existing ca certificate authority
I0111 23:20:14.264404 3750 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
I0111 23:20:14.299782 3750 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
I0111 23:20:14.386957 3750 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
I0111 23:20:14.435018 3750 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
I0111 23:20:14.532057 3750 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
I0111 23:20:14.584461 3750 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
I0111 23:20:14.584550 3750 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I0111 23:20:14.610968 3750 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
I0111 23:20:14.611071 3750 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost minikube] and IPs [192.168.49.2 127.0.0.1 ::1]
I0111 23:20:14.723274 3750 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
I0111 23:20:14.842436 3750 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
I0111 23:20:14.947030 3750 kubeadm.go:310] [certs] Generating "sa" key and public key
I0111 23:20:14.947099 3750 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0111 23:20:14.996340 3750 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
I0111 23:20:15.019360 3750 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
I0111 23:20:15.053490 3750 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0111 23:20:15.113772 3750 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0111 23:20:15.161211 3750 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0111 23:20:15.161470 3750 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0111 23:20:15.163315 3750 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0111 23:20:15.167359 3750 out.go:235] ▪ Booting up control plane ...
I0111 23:20:15.167542 3750 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0111 23:20:15.167653 3750 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0111 23:20:15.167757 3750 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0111 23:20:15.181167 3750 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0111 23:20:15.183461 3750 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0111 23:20:15.183503 3750 kubeadm.go:310] [kubelet-start] Starting the kubelet
I0111 23:20:15.234496 3750 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
I0111 23:20:15.234593 3750 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
I0111 23:20:15.738419 3750 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 503.439792ms
I0111 23:20:15.738511 3750 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
I0111 23:20:18.744920 3750 kubeadm.go:310] [api-check] The API server is healthy after 3.00643471s
I0111 23:20:18.758363 3750 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0111 23:20:18.764599 3750 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
I0111 23:20:18.772580 3750 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
I0111 23:20:18.772714 3750 kubeadm.go:310] [mark-control-plane] Marking the node minikube as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
I0111 23:20:18.775765 3750 kubeadm.go:310] [bootstrap-token] Using token: 8mmw1d.mm4q63bdl5ucpi66
I0111 23:20:18.781945 3750 out.go:235] ▪ Configuring RBAC rules ...
I0111 23:20:18.782319 3750 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0111 23:20:18.783073 3750 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0111 23:20:18.799000 3750 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0111 23:20:18.799123 3750 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0111 23:20:18.799199 3750 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0111 23:20:18.799252 3750 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0111 23:20:19.160688 3750 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0111 23:20:19.562739 3750 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
I0111 23:20:20.157281 3750 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
I0111 23:20:20.158387 3750 kubeadm.go:310] 
I0111 23:20:20.158466 3750 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
I0111 23:20:20.158473 3750 kubeadm.go:310] 
I0111 23:20:20.158579 3750 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
I0111 23:20:20.158585 3750 kubeadm.go:310] 
I0111 23:20:20.158610 3750 kubeadm.go:310] mkdir -p $HOME/.kube
I0111 23:20:20.158683 3750 kubeadm.go:310] sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
I0111 23:20:20.158725 3750 kubeadm.go:310] sudo chown $(id -u):$(id -g) $HOME/.kube/config
I0111 23:20:20.158729 3750 kubeadm.go:310] 
I0111 23:20:20.158777 3750 kubeadm.go:310] Alternatively, if you are the root user, you can run:
I0111 23:20:20.158799 3750 kubeadm.go:310] 
I0111 23:20:20.158858 3750 kubeadm.go:310] export KUBECONFIG=/etc/kubernetes/admin.conf
I0111 23:20:20.158865 3750 kubeadm.go:310] 
I0111 23:20:20.158935 3750 kubeadm.go:310] You should now deploy a pod network to the cluster.
I0111 23:20:20.159017 3750 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
I0111 23:20:20.159107 3750 kubeadm.go:310] https://kubernetes.io/docs/concepts/cluster-administration/addons/
I0111 23:20:20.159111 3750 kubeadm.go:310] 
I0111 23:20:20.159185 3750 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
I0111 23:20:20.159279 3750 kubeadm.go:310] and service account keys on each node and then running the following as root:
I0111 23:20:20.159286 3750 kubeadm.go:310] 
I0111 23:20:20.159366 3750 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8mmw1d.mm4q63bdl5ucpi66 \
I0111 23:20:20.159482 3750 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:b1af4d95eba1a2e4a22f0d5b556b61f5e26b5780a8f7887ae947f7e044aa870c \
I0111 23:20:20.159509 3750 kubeadm.go:310] --control-plane
I0111 23:20:20.159514 3750 kubeadm.go:310] 
I0111 23:20:20.159591 3750 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
I0111 23:20:20.159597 3750 kubeadm.go:310] 
I0111 23:20:20.159684 3750 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 8mmw1d.mm4q63bdl5ucpi66 \
I0111 23:20:20.159790 3750 kubeadm.go:310] --discovery-token-ca-cert-hash sha256:b1af4d95eba1a2e4a22f0d5b556b61f5e26b5780a8f7887ae947f7e044aa870c
I0111 23:20:20.161886 3750 kubeadm.go:310] W0111 15:20:14.215962 1800 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0111 23:20:20.162195 3750 kubeadm.go:310] W0111 15:20:14.216269 1800 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
I0111 23:20:20.162403 3750 kubeadm.go:310] [WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
I0111 23:20:20.162512 3750 kubeadm.go:310] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0111 23:20:20.162542 3750 cni.go:84] Creating CNI manager for ""
I0111 23:20:20.162571 3750 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0111 23:20:20.170861 3750 out.go:177] 🔗 Configuring bridge CNI (Container Networking Interface) ...
I0111 23:20:20.175313 3750 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
I0111 23:20:20.189109 3750 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
I0111 23:20:20.202900 3750 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0111 23:20:20.203185 3750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0111 23:20:20.204022 3750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes minikube minikube.k8s.io/updated_at=2025_01_11T23_20_20_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=210b148df93a80eb872ecbeb7e35281b3c582c61 minikube.k8s.io/name=minikube minikube.k8s.io/primary=true
I0111 23:20:20.246078 3750 kubeadm.go:1113] duration metric: took 43.16525ms to wait for elevateKubeSystemPrivileges
I0111 23:20:20.246102 3750 ops.go:34] apiserver oom_adj: -16
I0111 23:20:20.252614 3750 kubeadm.go:394] duration metric: took 6.092636333s to StartCluster
I0111 23:20:20.252641 3750 settings.go:142] acquiring lock: {Name:mk88264ae28c7efdd72850284fe2c25dd229bb23 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0111 23:20:20.252892 3750 settings.go:150] Updating kubeconfig: /Users/mrw-macmini/.kube/config
I0111 23:20:20.253436 3750 lock.go:35] WriteFile acquiring /Users/mrw-macmini/.kube/config: {Name:mkda73d092838c03be4d5c1b66f929fa1cf3c934 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0111 23:20:20.254545 3750 config.go:182] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.0
I0111 23:20:20.254594 3750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0111 23:20:20.254929 3750 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0111 23:20:20.254442 3750 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
I0111 23:20:20.255295 3750 addons.go:69] Setting storage-provisioner=true in profile "minikube"
I0111 23:20:20.255297 3750 addons.go:69] Setting default-storageclass=true in profile "minikube"
I0111 23:20:20.255312 3750 addons.go:234] Setting addon storage-provisioner=true in "minikube"
I0111 23:20:20.255475 3750 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0111 23:20:20.255789 3750 host.go:66] Checking if "minikube" exists ...
I0111 23:20:20.258732 3750 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0111 23:20:20.258838 3750 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0111 23:20:20.260146 3750 out.go:177] 🔎 Verifying Kubernetes components...
I0111 23:20:20.270312 3750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0111 23:20:20.281344 3750 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0111 23:20:20.285012 3750 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0111 23:20:20.285019 3750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0111 23:20:20.285123 3750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0111 23:20:20.291369 3750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.65.254 host.minikube.internal\n fallthrough\n }' -e '/^ errors *$/i \ log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0111 23:20:20.298097 3750 addons.go:234] Setting addon default-storageclass=true in "minikube"
I0111 23:20:20.298141 3750 host.go:66] Checking if "minikube" exists ...
I0111 23:20:20.298512 3750 cli_runner.go:164] Run: docker container inspect minikube --format={{.State.Status}}
I0111 23:20:20.305036 3750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58999 SSHKeyPath:/Users/mrw-macmini/.minikube/machines/minikube/id_rsa Username:docker}
I0111 23:20:20.315628 3750 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
I0111 23:20:20.315636 3750 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0111 23:20:20.315703 3750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0111 23:20:20.328025 3750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58999 SSHKeyPath:/Users/mrw-macmini/.minikube/machines/minikube/id_rsa Username:docker}
I0111 23:20:20.343806 3750 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0111 23:20:20.445453 3750 start.go:971] {"host.minikube.internal": 192.168.65.254} host record injected into CoreDNS's ConfigMap
I0111 23:20:20.445592 3750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0111 23:20:20.448590 3750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0111 23:20:20.448739 3750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0111 23:20:20.462148 3750 api_server.go:52] waiting for apiserver process to appear ...
I0111 23:20:20.462210 3750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0111 23:20:20.568540 3750 api_server.go:72] duration metric: took 313.583583ms to wait for apiserver process to appear ...
I0111 23:20:20.568556 3750 api_server.go:88] waiting for apiserver healthz status ...
I0111 23:20:20.568600 3750 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58998/healthz ...
I0111 23:20:20.573022 3750 api_server.go:279] https://127.0.0.1:58998/healthz returned 200: ok
I0111 23:20:20.573662 3750 api_server.go:141] control plane version: v1.31.0
I0111 23:20:20.573669 3750 api_server.go:131] duration metric: took 5.109584ms to wait for apiserver health ...
I0111 23:20:20.573692 3750 system_pods.go:43] waiting for kube-system pods to appear ...
I0111 23:20:20.576093 3750 out.go:177] 🌟 Enabled addons: storage-provisioner, default-storageclass
I0111 23:20:20.585972 3750 addons.go:510] duration metric: took 331.594167ms for enable addons: enabled=[storage-provisioner default-storageclass]
I0111 23:20:20.586243 3750 system_pods.go:59] 5 kube-system pods found
I0111 23:20:20.586264 3750 system_pods.go:61] "etcd-minikube" [c597df78-9c95-45fc-83b2-ab5ca52252ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0111 23:20:20.586270 3750 system_pods.go:61] "kube-apiserver-minikube" [00c6bbff-716a-4ad4-be1f-3b1fc6bb896e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0111 23:20:20.586277 3750 system_pods.go:61] "kube-controller-manager-minikube" [d1bb0f50-70ae-495f-98ee-1baf64f3330a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0111 23:20:20.586282 3750 system_pods.go:61] "kube-scheduler-minikube" [9c64d6c8-0519-4b0a-9597-79eca6a6d737] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0111 23:20:20.586284 3750 system_pods.go:61] "storage-provisioner" [e1b46fc1-67a7-4330-8d95-ec7ce67e8db7] Pending
I0111 23:20:20.586288 3750 system_pods.go:74] duration metric: took 12.592625ms to wait for pod list to return data ...
I0111 23:20:20.586294 3750 kubeadm.go:582] duration metric: took 331.347375ms to wait for: map[apiserver:true system_pods:true]
I0111 23:20:20.586303 3750 node_conditions.go:102] verifying NodePressure condition ...
I0111 23:20:20.589587 3750 node_conditions.go:122] node storage ephemeral capacity is 1055761844Ki
I0111 23:20:20.589599 3750 node_conditions.go:123] node cpu capacity is 10
I0111 23:20:20.589618 3750 node_conditions.go:105] duration metric: took 3.312834ms to run NodePressure ...
I0111 23:20:20.589623 3750 start.go:241] waiting for startup goroutines ...
I0111 23:20:20.953637 3750 kapi.go:214] "coredns" deployment in "kube-system" namespace and "minikube" context rescaled to 1 replicas
I0111 23:20:20.953681 3750 start.go:246] waiting for cluster config update ...
I0111 23:20:20.953699 3750 start.go:255] writing updated cluster config ...
I0111 23:20:20.955183 3750 ssh_runner.go:195] Run: rm -f paused
I0111 23:20:22.411067 3750 start.go:600] kubectl: 1.32.0, cluster: 1.31.0 (minor skew: 1)
I0111 23:20:22.415995 3750 out.go:177] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

==> Docker <==
Jan 11 15:20:12 minikube dockerd[650]: time="2025-01-11T15:20:12.954006507Z" level=info msg="Processing signal 'terminated'"
Jan 11 15:20:12 minikube dockerd[650]: time="2025-01-11T15:20:12.954695715Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Jan 11 15:20:12 minikube dockerd[650]: time="2025-01-11T15:20:12.954889507Z" level=info msg="Daemon shutdown complete"
Jan 11 15:20:12 minikube systemd[1]: docker.service: Deactivated successfully.
Jan 11 15:20:12 minikube systemd[1]: Stopped Docker Application Container Engine.
Jan 11 15:20:12 minikube systemd[1]: Starting Docker Application Container Engine...
Jan 11 15:20:13 minikube dockerd[958]: time="2025-01-11T15:20:13.005738507Z" level=info msg="Starting up"
Jan 11 15:20:13 minikube dockerd[958]: time="2025-01-11T15:20:13.024728049Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Jan 11 15:20:13 minikube dockerd[958]: time="2025-01-11T15:20:13.026268840Z" level=info msg="Loading containers: start."
Jan 11 15:20:13 minikube dockerd[958]: time="2025-01-11T15:20:13.065262090Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 11 15:20:13 minikube dockerd[958]: time="2025-01-11T15:20:13.078371632Z" level=info msg="Loading containers: done."
Jan 11 15:20:13 minikube dockerd[958]: time="2025-01-11T15:20:13.086990215Z" level=info msg="Docker daemon" commit=3ab5c7d containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
Jan 11 15:20:13 minikube dockerd[958]: time="2025-01-11T15:20:13.087020340Z" level=info msg="Daemon has completed initialization"
Jan 11 15:20:13 minikube dockerd[958]: time="2025-01-11T15:20:13.103600799Z" level=info msg="API listen on /var/run/docker.sock"
Jan 11 15:20:13 minikube dockerd[958]: time="2025-01-11T15:20:13.103634049Z" level=info msg="API listen on [::]:2376"
Jan 11 15:20:13 minikube systemd[1]: Started Docker Application Container Engine.
Jan 11 15:20:13 minikube systemd[1]: Stopping Docker Application Container Engine...
Jan 11 15:20:13 minikube dockerd[958]: time="2025-01-11T15:20:13.119060465Z" level=info msg="Processing signal 'terminated'"
Jan 11 15:20:13 minikube dockerd[958]: time="2025-01-11T15:20:13.119500424Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Jan 11 15:20:13 minikube dockerd[958]: time="2025-01-11T15:20:13.119707049Z" level=info msg="Daemon shutdown complete"
Jan 11 15:20:13 minikube systemd[1]: docker.service: Deactivated successfully.
Jan 11 15:20:13 minikube systemd[1]: Stopped Docker Application Container Engine.
Jan 11 15:20:13 minikube systemd[1]: Starting Docker Application Container Engine...
Jan 11 15:20:13 minikube dockerd[1226]: time="2025-01-11T15:20:13.164625965Z" level=info msg="Starting up"
Jan 11 15:20:13 minikube dockerd[1226]: time="2025-01-11T15:20:13.170914590Z" level=info msg="[graphdriver] trying configured driver: overlay2"
Jan 11 15:20:13 minikube dockerd[1226]: time="2025-01-11T15:20:13.179399340Z" level=info msg="Loading containers: start."
Jan 11 15:20:13 minikube dockerd[1226]: time="2025-01-11T15:20:13.219281465Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 11 15:20:13 minikube dockerd[1226]: time="2025-01-11T15:20:13.231673007Z" level=info msg="Loading containers: done."
Jan 11 15:20:13 minikube dockerd[1226]: time="2025-01-11T15:20:13.235504424Z" level=info msg="Docker daemon" commit=3ab5c7d containerd-snapshotter=false storage-driver=overlay2 version=27.2.0
Jan 11 15:20:13 minikube dockerd[1226]: time="2025-01-11T15:20:13.235526382Z" level=info msg="Daemon has completed initialization"
Jan 11 15:20:13 minikube dockerd[1226]: time="2025-01-11T15:20:13.246319757Z" level=info msg="API listen on /var/run/docker.sock"
Jan 11 15:20:13 minikube dockerd[1226]: time="2025-01-11T15:20:13.246332257Z" level=info msg="API listen on [::]:2376"
Jan 11 15:20:13 minikube systemd[1]: Started Docker Application Container Engine.
Jan 11 15:20:13 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
Jan 11 15:20:13 minikube cri-dockerd[1497]: time="2025-01-11T15:20:13Z" level=info msg="Starting cri-dockerd dev (HEAD)"
Jan 11 15:20:13 minikube cri-dockerd[1497]: time="2025-01-11T15:20:13Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
Jan 11 15:20:13 minikube cri-dockerd[1497]: time="2025-01-11T15:20:13Z" level=info msg="Start docker client with request timeout 0s"
Jan 11 15:20:13 minikube cri-dockerd[1497]: time="2025-01-11T15:20:13Z" level=info msg="Hairpin mode is set to hairpin-veth"
Jan 11 15:20:13 minikube cri-dockerd[1497]: time="2025-01-11T15:20:13Z" level=info msg="Loaded network plugin cni"
Jan 11 15:20:13 minikube cri-dockerd[1497]: time="2025-01-11T15:20:13Z" level=info msg="Docker cri networking managed by network plugin cni"
Jan 11 15:20:13 minikube cri-dockerd[1497]: time="2025-01-11T15:20:13Z" level=info msg="Setting cgroupDriver cgroupfs"
Jan 11 15:20:13 minikube cri-dockerd[1497]: time="2025-01-11T15:20:13Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
Jan 11 15:20:13 minikube cri-dockerd[1497]: time="2025-01-11T15:20:13Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
Jan 11 15:20:13 minikube cri-dockerd[1497]: time="2025-01-11T15:20:13Z" level=info msg="Start cri-dockerd grpc backend"
Jan 11 15:20:13 minikube systemd[1]: Started CRI Interface for Docker Application Container Engine.
Jan 11 15:20:15 minikube cri-dockerd[1497]: time="2025-01-11T15:20:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2fdb131cc2d54942fb6fac7407af3af8e0a3dec77eef13e43f057ada3f125a4b/resolv.conf as [nameserver 192.168.65.254 options ndots:0]" Jan 11 15:20:15 minikube cri-dockerd[1497]: time="2025-01-11T15:20:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/dd426c5e7f9f44c9e8a7c274911e1a5432a068841fb6c051a4e0395324e3df9e/resolv.conf as [nameserver 192.168.65.254 options ndots:0]" Jan 11 15:20:15 minikube cri-dockerd[1497]: time="2025-01-11T15:20:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/af168c149c5c332e1e06caf5d7a699597d6828a83f5e8051ce9cc5ffd731f9bf/resolv.conf as [nameserver 192.168.65.254 options ndots:0]" Jan 11 15:20:15 minikube cri-dockerd[1497]: time="2025-01-11T15:20:15Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2af053cb339e93acd256ece8b4087d07e244d394684033ab1379963a3f6f97e3/resolv.conf as [nameserver 192.168.65.254 options ndots:0]" Jan 11 15:20:25 minikube cri-dockerd[1497]: time="2025-01-11T15:20:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c0c7b6e05a83ffb40bb511f4394d952b029015d251bc5df3ae8b61c911c473a0/resolv.conf as [nameserver 192.168.65.254 options ndots:0]" Jan 11 15:20:25 minikube cri-dockerd[1497]: time="2025-01-11T15:20:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d977cbf404a3c0b0b35c016ce4c0e48423a752277bf95ca73f63523efa2e98ed/resolv.conf as [nameserver 192.168.65.254 options ndots:0]" Jan 11 15:20:25 minikube cri-dockerd[1497]: time="2025-01-11T15:20:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a9a876e99f61ccd302195214e43b4db08d968f617f68482762229c0e7da76068/resolv.conf as [nameserver 192.168.65.254 options ndots:0]" Jan 11 15:20:29 minikube cri-dockerd[1497]: time="2025-01-11T15:20:29Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}" Jan 11 15:22:10 minikube cri-dockerd[1497]: time="2025-01-11T15:22:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/294750406b0266e772a5a32877d4aca8348de37049acf03d4f203d83bdc481cf/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]" Jan 11 15:22:10 minikube cri-dockerd[1497]: time="2025-01-11T15:22:10Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f26c5b5744363eee894c38f523e19ac805229693b09c5d6d8816df35506bf2c9/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local options ndots:5]" Jan 11 15:22:13 minikube dockerd[1226]: time="2025-01-11T15:22:13.089316049Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" Jan 11 15:22:22 minikube cri-dockerd[1497]: time="2025-01-11T15:22:22Z" level=info msg="Stop pulling image docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c: Status: Downloaded newer image for kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" Jan 11 15:22:23 minikube 
dockerd[1226]: time="2025-01-11T15:22:23.638407679Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" Jan 11 15:22:35 minikube cri-dockerd[1497]: time="2025-01-11T15:22:35Z" level=info msg="Pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: e5f5e14d8b04: Extracting [======================> ] 32.87MB/74.08MB" Jan 11 15:22:36 minikube cri-dockerd[1497]: time="2025-01-11T15:22:36Z" level=info msg="Stop pulling image docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93: Status: Downloaded newer image for kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" ==> container status <== CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD 50e64556f29b9 kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93 10 minutes ago Running kubernetes-dashboard 0 294750406b026 kubernetes-dashboard-695b96c756-k42bm 7c8dea3f92572 kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c 10 minutes ago Running dashboard-metrics-scraper 0 f26c5b5744363 dashboard-metrics-scraper-c5db448b4-l88h2 8dc11d63e53cd ba04bb24b9575 12 minutes ago Running storage-provisioner 0 a9a876e99f61c storage-provisioner a352a9151332b 2437cf7621777 12 minutes ago Running coredns 0 d977cbf404a3c coredns-6f6b679f8f-pcxfk 23f97b15f746e 71d55d66fd4ee 12 minutes ago Running kube-proxy 0 c0c7b6e05a83f kube-proxy-8g946 cec873c453d9c fcb0683e6bdbd 12 minutes ago Running kube-controller-manager 0 2fdb131cc2d54 kube-controller-manager-minikube 534802087b429 fbbbd428abb4d 12 minutes ago Running kube-scheduler 0 dd426c5e7f9f4 kube-scheduler-minikube 8253118856aee cd0f0ae0ec9e0 12 minutes ago Running kube-apiserver 0 af168c149c5c3 kube-apiserver-minikube ba6eb08faeb8a 27e3830e14027 12 minutes ago Running etcd 0 2af053cb339e9 etcd-minikube ==> coredns [a352a9151332] <== [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/ready: Still waiting on: "kubernetes" [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/ready: Still waiting on: "kubernetes" [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API .:53 [INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b CoreDNS-1.11.1 linux/arm64, go1.20.7, ae2bbc2 [INFO] 127.0.0.1:43103 - 49283 "HINFO IN 5910428641427499179.2558694441321028516. 
==> coredns [a352a9151332] <==
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
.:53
[INFO] plugin/reload: Running configuration SHA512 = f869070685748660180df1b7a47d58cdafcf2f368266578c062d1151dc2c900964aecc5975e8882e6de6fdfb6460463e30ebfaad2ec8f0c3c6436f80225b3b5b
CoreDNS-1.11.1
linux/arm64, go1.20.7, ae2bbc2
[INFO] 127.0.0.1:43103 - 49283 "HINFO IN 5910428641427499179.2558694441321028516. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031358625s
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[441215221]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (11-Jan-2025 15:20:25.698) (total time: 30002ms):
Trace[441215221]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (15:20:55.699)
Trace[441215221]: [30.002100806s] [30.002100806s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[1536627391]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (11-Jan-2025 15:20:25.698) (total time: 30002ms):
Trace[1536627391]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (15:20:55.699)
Trace[1536627391]: [30.00243168s] [30.00243168s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/kubernetes: Trace[1324429225]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (11-Jan-2025 15:20:25.698) (total time: 30002ms):
Trace[1324429225]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (15:20:55.699)
Trace[1324429225]: [30.002859805s] [30.002859805s] END
[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
[INFO] plugin/ready: Still waiting on: "kubernetes"
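Note: the dial tcp 10.96.0.1:443 i/o timeouts above span only the first ~30 s after CoreDNS started, before kube-proxy had programmed the service rules; the reflectors relist and the errors stop. If they were persisting, re-checking DNS and in-cluster API reachability would look roughly like this (follow-up commands added for illustration; the busybox image tag is just an example):

  $ kubectl -n kube-system get pods -l k8s-app=kube-dns
  $ kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
  $ kubectl run dnstest --rm -it --image=busybox:1.36 --restart=Never -- nslookup kubernetes.default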
==> describe nodes <==
Name:               minikube
Roles:              control-plane
Labels:             beta.kubernetes.io/arch=arm64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=arm64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=210b148df93a80eb872ecbeb7e35281b3c582c61
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2025_01_11T23_20_20_0700
                    minikube.k8s.io/version=v1.34.0
                    node-role.kubernetes.io/control-plane=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 11 Jan 2025 15:20:17 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:     <unset>
  RenewTime:       Sat, 11 Jan 2025 15:32:43 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 11 Jan 2025 15:27:58 +0000   Sat, 11 Jan 2025 15:20:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 11 Jan 2025 15:27:58 +0000   Sat, 11 Jan 2025 15:20:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 11 Jan 2025 15:27:58 +0000   Sat, 11 Jan 2025 15:20:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sat, 11 Jan 2025 15:27:58 +0000   Sat, 11 Jan 2025 15:20:17 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                10
  ephemeral-storage:  1055761844Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8025360Ki
  pods:               110
Allocatable:
  cpu:                10
  ephemeral-storage:  1055761844Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  hugepages-32Mi:     0
  hugepages-64Ki:     0
  memory:             8025360Ki
  pods:               110
System Info:
  Machine ID:                 119a45cec1f0493aabbd7520b9a095ba
  System UUID:                119a45cec1f0493aabbd7520b9a095ba
  Boot ID:                    dc8971ae-d91c-484e-ba7f-6458fd8677d6
  Kernel Version:             6.10.14-linuxkit
  OS Image:                   Ubuntu 22.04.4 LTS
  Operating System:           linux
  Architecture:               arm64
  Container Runtime Version:  docker://27.2.0
  Kubelet Version:            v1.31.0
  Kube-Proxy Version:
PodCIDR:                      10.244.0.0/24
PodCIDRs:                     10.244.0.0/24
Non-terminated Pods:          (9 in total)
  Namespace             Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------             ----                                        ------------  ----------  ---------------  -------------  ---
  kube-system           coredns-6f6b679f8f-pcxfk                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
  kube-system           etcd-minikube                               100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         12m
  kube-system           kube-apiserver-minikube                     250m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
  kube-system           kube-controller-manager-minikube            200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
  kube-system           kube-proxy-8g946                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  kube-system           kube-scheduler-minikube                     100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
  kube-system           storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
  kubernetes-dashboard  dashboard-metrics-scraper-c5db448b4-l88h2   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
  kubernetes-dashboard  kubernetes-dashboard-695b96c756-k42bm       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                750m (7%)   0 (0%)
  memory             170Mi (2%)  170Mi (2%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-1Gi      0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
  hugepages-32Mi     0 (0%)      0 (0%)
  hugepages-64Ki     0 (0%)      0 (0%)
Events:
  Type    Reason                   Age   From             Message
  ----    ------                   ----  ----             -------
  Normal  Starting                 12m   kube-proxy
  Normal  Starting                 12m   kubelet          Starting kubelet.
  Normal  NodeAllocatableEnforced  12m   kubelet          Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  12m   kubelet          Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    12m   kubelet          Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     12m   kubelet          Node minikube status is now: NodeHasSufficientPID
  Normal  RegisteredNode           12m   node-controller  Node minikube event: Registered Node minikube in Controller
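Note: capacity and allocatable are identical here because this single minikube node reserves nothing for the system, and only 750m CPU / 170Mi memory are requested in total, so the node has ample headroom. Individual fields can be pulled out with jsonpath if needed (follow-up command added for illustration):

  $ kubectl get node minikube -o jsonpath='{.status.allocatable.cpu} {.status.allocatable.memory}{"\n"}'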
==> dmesg <==
[Jan11 15:17] netlink: 'init': attribute type 4 has an invalid length.
[ +0.054397] fakeowner: loading out-of-tree module taints kernel.

==> etcd [ba6eb08faeb8] <==
{"level":"warn","ts":"2025-01-11T15:20:16.050737Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2025-01-11T15:20:16.050810Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.49.2:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.49.2:2380","--initial-cluster=minikube=https://192.168.49.2:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.49.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.49.2:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
{"level":"warn","ts":"2025-01-11T15:20:16.050850Z","caller":"embed/config.go:687","msg":"Running http and grpc server on single port. This is not recommended for production."}
{"level":"info","ts":"2025-01-11T15:20:16.050856Z","caller":"embed/etcd.go:128","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2025-01-11T15:20:16.050869Z","caller":"embed/etcd.go:496","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2025-01-11T15:20:16.051256Z","caller":"embed/etcd.go:136","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"]}
{"level":"info","ts":"2025-01-11T15:20:16.051307Z","caller":"embed/etcd.go:310","msg":"starting an etcd server","etcd-version":"3.5.15","git-sha":"9a5533382","go-version":"go1.21.12","go-os":"linux","go-arch":"arm64","max-cpu-set":10,"max-cpu-available":10,"member-initialized":false,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"minikube=https://192.168.49.2:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
{"level":"info","ts":"2025-01-11T15:20:16.052563Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"987.833µs"}
{"level":"info","ts":"2025-01-11T15:20:16.054974Z","caller":"etcdserver/raft.go:495","msg":"starting local member","local-member-id":"aec36adc501070cc","cluster-id":"fa54960ea34d58be"}
{"level":"info","ts":"2025-01-11T15:20:16.055018Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=()"}
{"level":"info","ts":"2025-01-11T15:20:16.055057Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 0"}
{"level":"info","ts":"2025-01-11T15:20:16.055063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aec36adc501070cc [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"}
{"level":"info","ts":"2025-01-11T15:20:16.055067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became follower at term 1"}
{"level":"info","ts":"2025-01-11T15:20:16.055083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"warn","ts":"2025-01-11T15:20:16.056370Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
{"level":"info","ts":"2025-01-11T15:20:16.056909Z","caller":"mvcc/kvstore.go:418","msg":"kvstore restored","current-rev":1}
{"level":"info","ts":"2025-01-11T15:20:16.057542Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
{"level":"info","ts":"2025-01-11T15:20:16.058745Z","caller":"etcdserver/server.go:867","msg":"starting etcd server","local-member-id":"aec36adc501070cc","local-server-version":"3.5.15","cluster-version":"to_be_decided"}
{"level":"info","ts":"2025-01-11T15:20:16.058878Z","caller":"etcdserver/server.go:751","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"aec36adc501070cc","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
{"level":"info","ts":"2025-01-11T15:20:16.059024Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
{"level":"info","ts":"2025-01-11T15:20:16.059107Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
{"level":"info","ts":"2025-01-11T15:20:16.059114Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
{"level":"info","ts":"2025-01-11T15:20:16.059118Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-01-11T15:20:16.061084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
{"level":"info","ts":"2025-01-11T15:20:16.061246Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
{"level":"info","ts":"2025-01-11T15:20:16.061661Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2025-01-11T15:20:16.061758Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
{"level":"info","ts":"2025-01-11T15:20:16.061773Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
{"level":"info","ts":"2025-01-11T15:20:16.061768Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
{"level":"info","ts":"2025-01-11T15:20:16.061782Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
{"level":"info","ts":"2025-01-11T15:20:16.955662Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
{"level":"info","ts":"2025-01-11T15:20:16.955701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
{"level":"info","ts":"2025-01-11T15:20:16.955751Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
{"level":"info","ts":"2025-01-11T15:20:16.955766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
{"level":"info","ts":"2025-01-11T15:20:16.955769Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
{"level":"info","ts":"2025-01-11T15:20:16.955773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
{"level":"info","ts":"2025-01-11T15:20:16.955780Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
{"level":"info","ts":"2025-01-11T15:20:16.956075Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
{"level":"info","ts":"2025-01-11T15:20:16.956230Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-01-11T15:20:16.956267Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
{"level":"info","ts":"2025-01-11T15:20:16.956366Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2025-01-11T15:20:16.956380Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2025-01-11T15:20:16.956228Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
{"level":"info","ts":"2025-01-11T15:20:16.956681Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-01-11T15:20:16.956766Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
{"level":"info","ts":"2025-01-11T15:20:16.956793Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2025-01-11T15:20:16.956804Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2025-01-11T15:20:16.957070Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
{"level":"info","ts":"2025-01-11T15:20:16.957095Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
{"level":"info","ts":"2025-01-11T15:20:16.957530Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
{"level":"info","ts":"2025-01-11T15:30:16.963961Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":688}
{"level":"info","ts":"2025-01-11T15:30:16.969324Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":688,"took":"4.775333ms","hash":1440402584,"current-db-size-bytes":1830912,"current-db-size":"1.8 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
{"level":"info","ts":"2025-01-11T15:30:16.969384Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1440402584,"revision":688,"compact-revision":-1}
==> kernel <==
15:32:47 up 14 min, 0 users, load average: 2.36, 2.50, 1.59
Linux minikube 6.10.14-linuxkit #1 SMP Fri Nov 29 17:22:03 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
PRETTY_NAME="Ubuntu 22.04.4 LTS"

==> kube-apiserver [8253118856ae] <==
I0111 15:20:17.361339 1 local_available_controller.go:156] Starting LocalAvailability controller
I0111 15:20:17.361341 1 cache.go:32] Waiting for caches to sync for LocalAvailability controller
I0111 15:20:17.361349 1 aggregator.go:169] waiting for initial CRD sync...
I0111 15:20:17.361356 1 apf_controller.go:377] Starting API Priority and Fairness config controller
I0111 15:20:17.361359 1 dynamic_serving_content.go:135] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
I0111 15:20:17.361443 1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0111 15:20:17.361489 1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
I0111 15:20:17.361802 1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
I0111 15:20:17.361495 1 customresource_discovery_controller.go:292] Starting DiscoveryController
I0111 15:20:17.361816 1 establishing_controller.go:81] Starting EstablishingController
I0111 15:20:17.361831 1 nonstructuralschema_controller.go:195] Starting NonStructuralSchemaConditionController
I0111 15:20:17.361604 1 controller.go:80] Starting OpenAPI V3 AggregationController
I0111 15:20:17.361840 1 apiapproval_controller.go:189] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0111 15:20:17.361624 1 system_namespaces_controller.go:66] Starting system namespaces controller
I0111 15:20:17.361848 1 crd_finalizer.go:269] Starting CRDFinalizer
I0111 15:20:17.361672 1 gc_controller.go:78] Starting apiserver lease garbage collector
I0111 15:20:17.361717 1 crdregistration_controller.go:114] Starting crd-autoregister controller
I0111 15:20:17.361859 1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
I0111 15:20:17.361793 1 controller.go:142] Starting OpenAPI controller
I0111 15:20:17.361803 1 controller.go:90] Starting OpenAPI V3 controller
I0111 15:20:17.361809 1 naming_controller.go:294] Starting NamingConditionController
E0111 15:20:17.423227 1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
I0111 15:20:17.444425 1 shared_informer.go:320] Caches are synced for node_authorizer
I0111 15:20:17.446623 1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
I0111 15:20:17.446641 1 policy_source.go:224] refreshing policies
I0111 15:20:17.461274 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0111 15:20:17.461325 1 cache.go:39] Caches are synced for RemoteAvailability controller
I0111 15:20:17.461358 1 handler_discovery.go:450] Starting ResourceDiscoveryManager
I0111 15:20:17.461458 1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
I0111 15:20:17.461499 1 apf_controller.go:382] Running API Priority and Fairness config worker
I0111 15:20:17.461517 1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
I0111 15:20:17.461592 1 shared_informer.go:320] Caches are synced for configmaps
I0111 15:20:17.461605 1 cache.go:39] Caches are synced for LocalAvailability controller
I0111 15:20:17.462505 1 shared_informer.go:320] Caches are synced for crd-autoregister
I0111 15:20:17.462544 1 aggregator.go:171] initial CRD sync complete...
I0111 15:20:17.462556 1 autoregister_controller.go:144] Starting autoregister controller
I0111 15:20:17.462565 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0111 15:20:17.462572 1 cache.go:39] Caches are synced for autoregister controller
I0111 15:20:17.463302 1 controller.go:615] quota admission added evaluator for: namespaces
I0111 15:20:17.628347 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
I0111 15:20:18.367678 1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
I0111 15:20:18.371348 1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
I0111 15:20:18.371375 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
I0111 15:20:18.528960 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0111 15:20:18.536931 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0111 15:20:18.564976 1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
W0111 15:20:18.566607 1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
I0111 15:20:18.566882 1 controller.go:615] quota admission added evaluator for: endpoints
I0111 15:20:18.568165 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0111 15:20:19.370870 1 controller.go:615] quota admission added evaluator for: serviceaccounts
I0111 15:20:19.556494 1 controller.go:615] quota admission added evaluator for: deployments.apps
I0111 15:20:19.561019 1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
I0111 15:20:19.567257 1 controller.go:615] quota admission added evaluator for: daemonsets.apps
I0111 15:20:24.578221 1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
I0111 15:20:25.076352 1 controller.go:615] quota admission added evaluator for: replicasets.apps
I0111 15:22:10.558674 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.36.83"}
I0111 15:22:10.564515 1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.156.148"}
E0111 15:23:18.148015 1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
E0111 15:23:18.148711 1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
E0111 15:23:18.149667 1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
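Note: the three "context canceled" errors at 15:23:18 line up with a client disconnecting (for example, a watch or dashboard proxy being torn down) and are logged, not fatal. The apiserver's own health endpoints can be re-checked at any time (follow-up commands added for illustration):

  $ kubectl get --raw='/readyz?verbose'
  $ kubectl get --raw='/livez?verbose'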
==> kube-controller-manager [cec873c453d9] <==
I0111 15:20:24.124403 1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
I0111 15:20:24.124422 1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
I0111 15:20:24.124427 1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
I0111 15:20:24.124430 1 shared_informer.go:320] Caches are synced for cidrallocator
I0111 15:20:24.124585 1 shared_informer.go:320] Caches are synced for expand
I0111 15:20:24.124677 1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
I0111 15:20:24.126105 1 shared_informer.go:320] Caches are synced for PV protection
I0111 15:20:24.126145 1 shared_informer.go:320] Caches are synced for TTL after finished
I0111 15:20:24.126165 1 shared_informer.go:320] Caches are synced for certificate-csrapproving
I0111 15:20:24.126228 1 shared_informer.go:320] Caches are synced for GC
I0111 15:20:24.127546 1 shared_informer.go:320] Caches are synced for taint-eviction-controller
I0111 15:20:24.128557 1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="minikube" podCIDRs=["10.244.0.0/24"]
I0111 15:20:24.128584 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="minikube"
I0111 15:20:24.128598 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="minikube"
I0111 15:20:24.128574 1 shared_informer.go:320] Caches are synced for attach detach
I0111 15:20:24.130467 1 shared_informer.go:320] Caches are synced for cronjob
I0111 15:20:24.130507 1 shared_informer.go:320] Caches are synced for job
I0111 15:20:24.184682 1 shared_informer.go:320] Caches are synced for persistent volume
I0111 15:20:24.247297 1 shared_informer.go:320] Caches are synced for deployment
I0111 15:20:24.275802 1 shared_informer.go:320] Caches are synced for ReplicaSet
I0111 15:20:24.275845 1 shared_informer.go:320] Caches are synced for service account
I0111 15:20:24.277463 1 shared_informer.go:320] Caches are synced for namespace
I0111 15:20:24.323689 1 shared_informer.go:320] Caches are synced for disruption
I0111 15:20:24.326473 1 shared_informer.go:320] Caches are synced for ReplicationController
I0111 15:20:24.332385 1 shared_informer.go:320] Caches are synced for resource quota
I0111 15:20:24.334306 1 shared_informer.go:320] Caches are synced for resource quota
I0111 15:20:24.729408 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="minikube"
I0111 15:20:24.745557 1 shared_informer.go:320] Caches are synced for garbage collector
I0111 15:20:24.773422 1 shared_informer.go:320] Caches are synced for garbage collector
I0111 15:20:24.773463 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
I0111 15:20:25.239544 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="160.78925ms"
I0111 15:20:25.243855 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="4.2455ms"
I0111 15:20:25.243940 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="33.083µs"
I0111 15:20:25.247001 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="26.792µs"
I0111 15:20:26.374752 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="139.167µs"
I0111 15:20:29.634843 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="minikube"
I0111 15:21:06.180540 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="8.509208ms"
I0111 15:21:06.180753 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-6f6b679f8f" duration="90.834µs"
I0111 15:22:10.531327 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="7.576833ms"
E0111 15:22:10.531385 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0111 15:22:10.532695 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="6.210333ms"
E0111 15:22:10.532717 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0111 15:22:10.534407 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="1.915166ms"
E0111 15:22:10.534427 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0111 15:22:10.536378 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="2.552333ms"
E0111 15:22:10.536394 1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
I0111 15:22:10.543832 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.5925ms"
I0111 15:22:10.549439 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.576667ms"
I0111 15:22:10.549688 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="24.584µs"
I0111 15:22:10.549624 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="9.368958ms"
I0111 15:22:10.552893 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="3.159583ms"
I0111 15:22:10.553007 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.75µs"
I0111 15:22:10.553136 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="21.625µs"
I0111 15:22:10.556349 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="11.959µs"
I0111 15:22:23.367623 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.779083ms"
I0111 15:22:23.367730 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="33.25µs"
I0111 15:22:37.481230 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="10.112042ms"
I0111 15:22:37.481343 1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="52µs"
I0111 15:22:52.219315 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="minikube"
I0111 15:27:58.338170 1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="minikube"
==> kube-proxy [23f97b15f746] <==
I0111 15:20:25.671708 1 server_linux.go:66] "Using iptables proxy"
I0111 15:20:25.740992 1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
E0111 15:20:25.741020 1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
I0111 15:20:25.748066 1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
I0111 15:20:25.748086 1 server_linux.go:169] "Using iptables Proxier"
I0111 15:20:25.748719 1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
I0111 15:20:25.748874 1 server.go:483] "Version info" version="v1.31.0"
I0111 15:20:25.748883 1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0111 15:20:25.749472 1 config.go:197] "Starting service config controller"
I0111 15:20:25.749525 1 shared_informer.go:313] Waiting for caches to sync for service config
I0111 15:20:25.749546 1 config.go:326] "Starting node config controller"
I0111 15:20:25.749549 1 shared_informer.go:313] Waiting for caches to sync for node config
I0111 15:20:25.749718 1 config.go:104] "Starting endpoint slice config controller"
I0111 15:20:25.749729 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0111 15:20:25.851752 1 shared_informer.go:320] Caches are synced for node config
I0111 15:20:25.851781 1 shared_informer.go:320] Caches are synced for service config
I0111 15:20:25.851803 1 shared_informer.go:320] Caches are synced for endpoint slice config
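Note: the nodePortAddresses warning is informational on a single-node dev cluster; kube-proxy still came up in iptables mode and synced all three config controllers within about 100 ms. Its current state can be re-read from the pod logs (follow-up command added for illustration):

  $ kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=5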
==> kube-scheduler [534802087b42] <==
I0111 15:20:16.251588 1 serving.go:386] Generated self-signed cert in-memory
W0111 15:20:17.366083 1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0111 15:20:17.366103 1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0111 15:20:17.366108 1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
W0111 15:20:17.366111 1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0111 15:20:17.377094 1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.0"
I0111 15:20:17.377110 1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0111 15:20:17.377918 1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
I0111 15:20:17.377978 1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0111 15:20:17.377996 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0111 15:20:17.378012 1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
W0111 15:20:17.378638 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0111 15:20:17.378674 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0111 15:20:17.378676 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0111 15:20:17.378692 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0111 15:20:17.378696 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
E0111 15:20:17.378724 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0111 15:20:17.379003 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0111 15:20:17.379011 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0111 15:20:17.379015 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0111 15:20:17.379022 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0111 15:20:17.379032 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0111 15:20:17.379035 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0111 15:20:17.379120 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0111 15:20:17.379125 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0111 15:20:17.379020 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0111 15:20:17.379138 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0111 15:20:17.379141 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0111 15:20:17.379051 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0111 15:20:17.380664 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
E0111 15:20:17.380680 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0111 15:20:17.379057 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0111 15:20:17.381145 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0111 15:20:17.379056 1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0111 15:20:17.381199 1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
W0111 15:20:17.379058 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0111 15:20:17.381215 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0111 15:20:17.379073 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0111 15:20:17.381223 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0111 15:20:17.379086 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0111 15:20:17.381235 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0111 15:20:18.234477 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0111 15:20:18.234578 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0111 15:20:18.234486 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0111 15:20:18.234661 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0111 15:20:18.249608 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0111 15:20:18.249686 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0111 15:20:18.258167 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0111 15:20:18.258293 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
W0111 15:20:18.272337 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0111 15:20:18.272417 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
W0111 15:20:18.357671 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0111 15:20:18.357754 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
W0111 15:20:18.474631 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0111 15:20:18.474656 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
W0111 15:20:18.480607 1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0111 15:20:18.480624 1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
I0111 15:20:18.778692 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
==> kubelet <==
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.308975 2300 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.309064 2300 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.309075 2300 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.309164 2300 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.409832 2300 kubelet_node_status.go:72] "Attempting to register node" node="minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.412036 2300 kubelet_node_status.go:111] "Node was previously registered" node="minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.412067 2300 kubelet_node_status.go:75] "Successfully registered node" node="minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.498524 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40f5f661ab65f2e4bfe41ac2993c01de-usr-local-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"40f5f661ab65f2e4bfe41ac2993c01de\") " pod="kube-system/kube-controller-manager-minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.498554 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9e315b3a91fa9f6f7463439d9dac1a56-ca-certs\") pod \"kube-apiserver-minikube\" (UID: \"9e315b3a91fa9f6f7463439d9dac1a56\") " pod="kube-system/kube-apiserver-minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.498572 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e315b3a91fa9f6f7463439d9dac1a56-usr-local-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"9e315b3a91fa9f6f7463439d9dac1a56\") " pod="kube-system/kube-apiserver-minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.498584 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e315b3a91fa9f6f7463439d9dac1a56-usr-share-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"9e315b3a91fa9f6f7463439d9dac1a56\") " pod="kube-system/kube-apiserver-minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.498593 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9e315b3a91fa9f6f7463439d9dac1a56-etc-ca-certificates\") pod \"kube-apiserver-minikube\" (UID: \"9e315b3a91fa9f6f7463439d9dac1a56\") " pod="kube-system/kube-apiserver-minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.498605 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40f5f661ab65f2e4bfe41ac2993c01de-etc-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"40f5f661ab65f2e4bfe41ac2993c01de\") " pod="kube-system/kube-controller-manager-minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.498614 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/40f5f661ab65f2e4bfe41ac2993c01de-kubeconfig\") pod \"kube-controller-manager-minikube\" (UID: \"40f5f661ab65f2e4bfe41ac2993c01de\") " pod="kube-system/kube-controller-manager-minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.498623 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/a5363f4f31e043bdae3c93aca4991903-etcd-data\") pod \"etcd-minikube\" (UID: \"a5363f4f31e043bdae3c93aca4991903\") " pod="kube-system/etcd-minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.498632 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/40f5f661ab65f2e4bfe41ac2993c01de-ca-certs\") pod \"kube-controller-manager-minikube\" (UID: \"40f5f661ab65f2e4bfe41ac2993c01de\") " pod="kube-system/kube-controller-manager-minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.498645 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/40f5f661ab65f2e4bfe41ac2993c01de-k8s-certs\") pod \"kube-controller-manager-minikube\" (UID: \"40f5f661ab65f2e4bfe41ac2993c01de\") " pod="kube-system/kube-controller-manager-minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.498654 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/40f5f661ab65f2e4bfe41ac2993c01de-usr-share-ca-certificates\") pod \"kube-controller-manager-minikube\" (UID: \"40f5f661ab65f2e4bfe41ac2993c01de\") " pod="kube-system/kube-controller-manager-minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.498662 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e039200acb850c82bb901653cc38ff6e-kubeconfig\") pod \"kube-scheduler-minikube\" (UID: \"e039200acb850c82bb901653cc38ff6e\") " pod="kube-system/kube-scheduler-minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.498670 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/a5363f4f31e043bdae3c93aca4991903-etcd-certs\") pod \"etcd-minikube\" (UID: \"a5363f4f31e043bdae3c93aca4991903\") " pod="kube-system/etcd-minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.498680 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9e315b3a91fa9f6f7463439d9dac1a56-k8s-certs\") pod \"kube-apiserver-minikube\" (UID: \"9e315b3a91fa9f6f7463439d9dac1a56\") " pod="kube-system/kube-apiserver-minikube"
Jan 11 15:20:19 minikube kubelet[2300]: I0111 15:20:19.498698 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/40f5f661ab65f2e4bfe41ac2993c01de-flexvolume-dir\") pod \"kube-controller-manager-minikube\" (UID: \"40f5f661ab65f2e4bfe41ac2993c01de\") " pod="kube-system/kube-controller-manager-minikube"
Jan 11 15:20:20 minikube kubelet[2300]: I0111 15:20:20.292362 2300 apiserver.go:52] "Watching apiserver"
Jan 11 15:20:20 minikube kubelet[2300]: I0111 15:20:20.296575 2300 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 11 15:20:20 minikube kubelet[2300]: E0111 15:20:20.348210 2300 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-minikube\" already exists" pod="kube-system/kube-scheduler-minikube"
Jan 11 15:20:20 minikube kubelet[2300]: E0111 15:20:20.348283 2300 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-minikube\" already exists" pod="kube-system/kube-controller-manager-minikube"
Jan 11 15:20:20 minikube kubelet[2300]: E0111 15:20:20.348702 2300 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube"
Jan 11 15:20:20 minikube kubelet[2300]: E0111 15:20:20.348712 2300 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
Jan 11 15:20:20 minikube kubelet[2300]: I0111 15:20:20.356866 2300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-minikube" podStartSLOduration=1.356856969 podStartE2EDuration="1.356856969s" podCreationTimestamp="2025-01-11 15:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-11 15:20:20.356621719 +0000 UTC m=+1.097602876" watchObservedRunningTime="2025-01-11 15:20:20.356856969 +0000 UTC m=+1.097838085"
Jan 11 15:20:20 minikube kubelet[2300]: I0111 15:20:20.362219 2300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-minikube" podStartSLOduration=1.3622087600000001 podStartE2EDuration="1.36220876s" podCreationTimestamp="2025-01-11 15:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-11 15:20:20.361753344 +0000 UTC m=+1.102734543" watchObservedRunningTime="2025-01-11 15:20:20.36220876 +0000 UTC m=+1.103189876"
Jan 11 15:20:20 minikube kubelet[2300]: I0111 15:20:20.365768 2300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-minikube" podStartSLOduration=1.365761969 podStartE2EDuration="1.365761969s" podCreationTimestamp="2025-01-11 15:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-11 15:20:20.365679802 +0000 UTC m=+1.106660918" watchObservedRunningTime="2025-01-11 15:20:20.365761969 +0000 UTC m=+1.106743085"
Jan 11 15:20:24 minikube kubelet[2300]: I0111 15:20:24.084257 2300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-minikube" podStartSLOduration=5.084154554 podStartE2EDuration="5.084154554s" podCreationTimestamp="2025-01-11 15:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-11 15:20:20.369440677 +0000 UTC m=+1.110421835" watchObservedRunningTime="2025-01-11 15:20:24.084154554 +0000 UTC m=+4.825135795"
Jan 11 15:20:24 minikube kubelet[2300]: I0111 15:20:24.200200 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e1b46fc1-67a7-4330-8d95-ec7ce67e8db7-tmp\") pod \"storage-provisioner\" (UID: \"e1b46fc1-67a7-4330-8d95-ec7ce67e8db7\") " pod="kube-system/storage-provisioner"
Jan 11 15:20:24 minikube kubelet[2300]: I0111 15:20:24.200312 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4vrg\" (UniqueName: \"kubernetes.io/projected/e1b46fc1-67a7-4330-8d95-ec7ce67e8db7-kube-api-access-m4vrg\") pod \"storage-provisioner\" (UID: \"e1b46fc1-67a7-4330-8d95-ec7ce67e8db7\") " pod="kube-system/storage-provisioner"
Jan 11 15:20:24 minikube kubelet[2300]: E0111 15:20:24.309369 2300 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 11 15:20:24 minikube kubelet[2300]: E0111 15:20:24.309425 2300 projected.go:194] Error preparing data for projected volume kube-api-access-m4vrg for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found
Jan 11 15:20:24 minikube kubelet[2300]: E0111 15:20:24.309545 2300 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1b46fc1-67a7-4330-8d95-ec7ce67e8db7-kube-api-access-m4vrg podName:e1b46fc1-67a7-4330-8d95-ec7ce67e8db7 nodeName:}" failed. No retries permitted until 2025-01-11 15:20:24.809500304 +0000 UTC m=+5.550481462 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-m4vrg" (UniqueName: "kubernetes.io/projected/e1b46fc1-67a7-4330-8d95-ec7ce67e8db7-kube-api-access-m4vrg") pod "storage-provisioner" (UID: "e1b46fc1-67a7-4330-8d95-ec7ce67e8db7") : configmap "kube-root-ca.crt" not found Jan 11 15:20:24 minikube kubelet[2300]: I0111 15:20:24.606884 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91a6e70d-f360-43ac-bdc3-93e14fee2a06-kube-proxy\") pod \"kube-proxy-8g946\" (UID: \"91a6e70d-f360-43ac-bdc3-93e14fee2a06\") " pod="kube-system/kube-proxy-8g946" Jan 11 15:20:24 minikube kubelet[2300]: I0111 15:20:24.606933 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91a6e70d-f360-43ac-bdc3-93e14fee2a06-xtables-lock\") pod \"kube-proxy-8g946\" (UID: \"91a6e70d-f360-43ac-bdc3-93e14fee2a06\") " pod="kube-system/kube-proxy-8g946" Jan 11 15:20:24 minikube kubelet[2300]: I0111 15:20:24.606948 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k6qj\" (UniqueName: \"kubernetes.io/projected/91a6e70d-f360-43ac-bdc3-93e14fee2a06-kube-api-access-6k6qj\") pod \"kube-proxy-8g946\" (UID: \"91a6e70d-f360-43ac-bdc3-93e14fee2a06\") " pod="kube-system/kube-proxy-8g946" Jan 11 15:20:24 minikube kubelet[2300]: I0111 15:20:24.606964 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91a6e70d-f360-43ac-bdc3-93e14fee2a06-lib-modules\") pod \"kube-proxy-8g946\" (UID: \"91a6e70d-f360-43ac-bdc3-93e14fee2a06\") " pod="kube-system/kube-proxy-8g946" Jan 11 15:20:24 minikube kubelet[2300]: E0111 15:20:24.715384 2300 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 11 15:20:24 minikube kubelet[2300]: E0111 15:20:24.715433 2300 projected.go:194] Error preparing data for projected volume kube-api-access-6k6qj for pod kube-system/kube-proxy-8g946: configmap "kube-root-ca.crt" not found Jan 11 15:20:24 minikube kubelet[2300]: E0111 15:20:24.715498 2300 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91a6e70d-f360-43ac-bdc3-93e14fee2a06-kube-api-access-6k6qj podName:91a6e70d-f360-43ac-bdc3-93e14fee2a06 nodeName:}" failed. No retries permitted until 2025-01-11 15:20:25.215470346 +0000 UTC m=+5.956451504 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6k6qj" (UniqueName: "kubernetes.io/projected/91a6e70d-f360-43ac-bdc3-93e14fee2a06-kube-api-access-6k6qj") pod "kube-proxy-8g946" (UID: "91a6e70d-f360-43ac-bdc3-93e14fee2a06") : configmap "kube-root-ca.crt" not found Jan 11 15:20:24 minikube kubelet[2300]: E0111 15:20:24.811703 2300 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 11 15:20:24 minikube kubelet[2300]: E0111 15:20:24.811765 2300 projected.go:194] Error preparing data for projected volume kube-api-access-m4vrg for pod kube-system/storage-provisioner: configmap "kube-root-ca.crt" not found Jan 11 15:20:24 minikube kubelet[2300]: E0111 15:20:24.811842 2300 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e1b46fc1-67a7-4330-8d95-ec7ce67e8db7-kube-api-access-m4vrg podName:e1b46fc1-67a7-4330-8d95-ec7ce67e8db7 nodeName:}" failed. No retries permitted until 2025-01-11 15:20:25.811818388 +0000 UTC m=+6.552799545 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-m4vrg" (UniqueName: "kubernetes.io/projected/e1b46fc1-67a7-4330-8d95-ec7ce67e8db7-kube-api-access-m4vrg") pod "storage-provisioner" (UID: "e1b46fc1-67a7-4330-8d95-ec7ce67e8db7") : configmap "kube-root-ca.crt" not found Jan 11 15:20:25 minikube kubelet[2300]: I0111 15:20:25.423843 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/276c893a-5d3c-42f0-bb67-1d4d705e9c0c-config-volume\") pod \"coredns-6f6b679f8f-pcxfk\" (UID: \"276c893a-5d3c-42f0-bb67-1d4d705e9c0c\") " pod="kube-system/coredns-6f6b679f8f-pcxfk" Jan 11 15:20:25 minikube kubelet[2300]: I0111 15:20:25.423945 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z576z\" (UniqueName: \"kubernetes.io/projected/276c893a-5d3c-42f0-bb67-1d4d705e9c0c-kube-api-access-z576z\") pod \"coredns-6f6b679f8f-pcxfk\" (UID: \"276c893a-5d3c-42f0-bb67-1d4d705e9c0c\") " pod="kube-system/coredns-6f6b679f8f-pcxfk" Jan 11 15:20:26 minikube kubelet[2300]: I0111 15:20:26.374428 2300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=6.374399597 podStartE2EDuration="6.374399597s" podCreationTimestamp="2025-01-11 15:20:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-11 15:20:26.36663168 +0000 UTC m=+7.107612838" watchObservedRunningTime="2025-01-11 15:20:26.374399597 +0000 UTC m=+7.115380796" Jan 11 15:20:26 minikube kubelet[2300]: I0111 15:20:26.380564 2300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-pcxfk" podStartSLOduration=1.38055068 podStartE2EDuration="1.38055068s" podCreationTimestamp="2025-01-11 15:20:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-11 15:20:26.374709097 +0000 UTC m=+7.115690254" watchObservedRunningTime="2025-01-11 15:20:26.38055068 +0000 UTC m=+7.121531838" Jan 11 15:20:26 minikube kubelet[2300]: I0111 15:20:26.385817 2300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8g946" podStartSLOduration=2.38580843 podStartE2EDuration="2.38580843s" podCreationTimestamp="2025-01-11 15:20:24 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-11 15:20:26.385690388 +0000 UTC m=+7.126671504" watchObservedRunningTime="2025-01-11 15:20:26.38580843 +0000 UTC m=+7.126789588" Jan 11 15:20:29 minikube kubelet[2300]: I0111 15:20:29.623427 2300 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24" Jan 11 15:20:29 minikube kubelet[2300]: I0111 15:20:29.624681 2300 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24" Jan 11 15:22:10 minikube kubelet[2300]: I0111 15:22:10.709404 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/14f97382-0bf2-4433-ba95-725fbcd970da-tmp-volume\") pod \"kubernetes-dashboard-695b96c756-k42bm\" (UID: \"14f97382-0bf2-4433-ba95-725fbcd970da\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-k42bm" Jan 11 15:22:10 minikube kubelet[2300]: I0111 15:22:10.709431 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f886f\" (UniqueName: \"kubernetes.io/projected/d3850194-9dd8-4576-9155-c7ba171f2fa6-kube-api-access-f886f\") pod \"dashboard-metrics-scraper-c5db448b4-l88h2\" (UID: \"d3850194-9dd8-4576-9155-c7ba171f2fa6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-l88h2" Jan 11 15:22:10 minikube kubelet[2300]: I0111 15:22:10.709442 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d3850194-9dd8-4576-9155-c7ba171f2fa6-tmp-volume\") pod \"dashboard-metrics-scraper-c5db448b4-l88h2\" (UID: \"d3850194-9dd8-4576-9155-c7ba171f2fa6\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-l88h2" Jan 11 15:22:10 minikube kubelet[2300]: I0111 15:22:10.709450 2300 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-952lm\" (UniqueName: \"kubernetes.io/projected/14f97382-0bf2-4433-ba95-725fbcd970da-kube-api-access-952lm\") pod \"kubernetes-dashboard-695b96c756-k42bm\" (UID: \"14f97382-0bf2-4433-ba95-725fbcd970da\") " pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-k42bm" Jan 11 15:22:23 minikube kubelet[2300]: I0111 15:22:23.361682 2300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-l88h2" podStartSLOduration=1.746873173 podStartE2EDuration="13.361631095s" podCreationTimestamp="2025-01-11 15:22:10 +0000 UTC" firstStartedPulling="2025-01-11 15:22:11.008431048 +0000 UTC m=+111.749543677" lastFinishedPulling="2025-01-11 15:22:22.623189053 +0000 UTC m=+123.364301599" observedRunningTime="2025-01-11 15:22:23.361421637 +0000 UTC m=+124.102534350" watchObservedRunningTime="2025-01-11 15:22:23.361631095 +0000 UTC m=+124.102743683" Jan 11 15:22:37 minikube kubelet[2300]: I0111 15:22:37.471379 2300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-k42bm" podStartSLOduration=1.6747048260000001 podStartE2EDuration="27.471339963s" podCreationTimestamp="2025-01-11 15:22:10 +0000 UTC" firstStartedPulling="2025-01-11 15:22:11.008436214 +0000 UTC m=+111.749548760" lastFinishedPulling="2025-01-11 15:22:36.805790712 +0000 UTC m=+137.546183897" observedRunningTime="2025-01-11 15:22:37.470444463 +0000 
UTC m=+138.210837814" watchObservedRunningTime="2025-01-11 15:22:37.471339963 +0000 UTC m=+138.211733189" ==> kubernetes-dashboard [50e64556f29b] <== 2025/01/11 15:29:56 Getting list of all replication controllers in the cluster 2025/01/11 15:29:56 [2025-01-11T15:29:56Z] Incoming HTTP/1.1 GET /api/v1/statefulset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:29:56 Getting list of all pet sets in the cluster 2025/01/11 15:29:56 [2025-01-11T15:29:56Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:29:56 [2025-01-11T15:29:56Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:29:56 [2025-01-11T15:29:56Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Incoming HTTP/1.1 GET /api/v1/daemonset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1: 2025/01/11 15:31:38 Getting list of namespaces 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Incoming HTTP/1.1 GET /api/v1/job/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:38 Getting list of all jobs in the cluster 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Incoming HTTP/1.1 GET /api/v1/pod/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:38 Getting list of all pods in the cluster 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Incoming HTTP/1.1 GET /api/v1/deployment/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:38 Getting list of all deployments in the cluster 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Incoming HTTP/1.1 GET /api/v1/cronjob/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:38 Getting list of all cron jobs in the cluster 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:38 Getting pod metrics 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Incoming HTTP/1.1 GET /api/v1/replicationcontroller/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:38 Getting list of all replication controllers in the cluster 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Incoming HTTP/1.1 GET /api/v1/replicaset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:38 Getting list of all replica sets in the cluster 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Incoming HTTP/1.1 GET /api/v1/statefulset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:38 Getting list of all pet sets in the cluster 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:38 [2025-01-11T15:31:38Z] Outcoming 
response to 127.0.0.1 with 200 status code 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Incoming HTTP/1.1 GET /api/v1/pod/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:39 Getting list of all pods in the cluster 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Incoming HTTP/1.1 GET /api/v1/deployment/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:39 Getting list of all deployments in the cluster 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1: 2025/01/11 15:31:39 Getting list of namespaces 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Incoming HTTP/1.1 GET /api/v1/job/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:39 Getting list of all jobs in the cluster 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Incoming HTTP/1.1 GET /api/v1/cronjob/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:39 Getting list of all cron jobs in the cluster 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Incoming HTTP/1.1 GET /api/v1/daemonset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:39 Getting pod metrics 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Incoming HTTP/1.1 GET /api/v1/replicaset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:39 Getting list of all replica sets in the cluster 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Incoming HTTP/1.1 GET /api/v1/statefulset/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:39 Getting list of all pet sets in the cluster 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Incoming HTTP/1.1 GET /api/v1/replicationcontroller/default?itemsPerPage=10&page=1&sortBy=d,creationTimestamp request from 127.0.0.1: 2025/01/11 15:31:39 Getting list of all replication controllers in the cluster 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Outcoming response to 127.0.0.1 with 200 status code 2025/01/11 15:31:39 [2025-01-11T15:31:39Z] Outcoming response to 127.0.0.1 with 200 status code ==> storage-provisioner [8dc11d63e53c] <== I0111 15:20:25.995047 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... I0111 15:20:25.998383 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service! I0111 15:20:25.998405 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0111 15:20:26.000447 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0111 15:20:26.000492 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_cf881b0a-7344-4207-bc08-ac9bd8337dc2! 
I0111 15:20:26.000492 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9b0f9894-45bb-4fae-b125-8221b8406a55", APIVersion:"v1", ResourceVersion:"354", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_cf881b0a-7344-4207-bc08-ac9bd8337dc2 became leader I0111 15:20:26.104469 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_cf881b0a-7344-4207-bc08-ac9bd8337dc2!