cloudadmin@lv-maas-01:~$ juju run --unit kubernetes-worker/0 -- journalctl -o cat -u snap.kubelet.daemon
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
Started Service for snap application kubelet.daemon.
cat: /var/snap/kubelet/1031/args: No such file or directory
I0626 19:42:13.523303 17004 server.go:407] Version: v1.13.7
I0626 19:42:13.523538 17004 plugins.go:103] No cloud provider specified.
W0626 19:42:13.523559 17004 server.go:552] standalone mode, no API client
W0626 19:42:13.633959 17004 server.go:464] No api server defined - no events will be sent to API server.
I0626 19:42:13.633991 17004 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
F0626 19:42:13.634931 17004 server.go:261] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename Type Size Used Priority none virtual 8388604 8388336 0]
snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a
snap.kubelet.daemon.service: Failed with result 'exit-code'.
snap.kubelet.daemon.service: Service hold-off time over, scheduling restart.
snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 1.
Stopped Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
Started Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
I0626 19:42:14.052796 18823 server.go:407] Version: v1.13.7
I0626 19:42:14.052983 18823 plugins.go:103] No cloud provider specified.
W0626 19:42:14.053000 18823 server.go:552] standalone mode, no API client
W0626 19:42:14.088469 18823 server.go:464] No api server defined - no events will be sent to API server.
I0626 19:42:14.088489 18823 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
F0626 19:42:14.089407 18823 server.go:261] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename Type Size Used Priority none virtual 8388604 8388336 0]
snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a
snap.kubelet.daemon.service: Failed with result 'exit-code'.
snap.kubelet.daemon.service: Service hold-off time over, scheduling restart.
snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 2.
Stopped Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
Started Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
I0626 19:42:14.445545 18919 server.go:407] Version: v1.13.7
I0626 19:42:14.446270 18919 plugins.go:103] No cloud provider specified.
W0626 19:42:14.446295 18919 server.go:552] standalone mode, no API client
W0626 19:42:14.495318 18919 server.go:464] No api server defined - no events will be sent to API server.
I0626 19:42:14.495345 18919 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
F0626 19:42:14.496387 18919 server.go:261] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename Type Size Used Priority none virtual 8388604 8388336 0]
snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a
snap.kubelet.daemon.service: Failed with result 'exit-code'.
snap.kubelet.daemon.service: Service hold-off time over, scheduling restart.
snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 3.
Stopped Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
Started Service for snap application kubelet.daemon.
I0626 19:42:14.813423 18993 server.go:407] Version: v1.13.7
I0626 19:42:14.813626 18993 plugins.go:103] No cloud provider specified.
W0626 19:42:14.813643 18993 server.go:552] standalone mode, no API client
W0626 19:42:14.847939 18993 server.go:464] No api server defined - no events will be sent to API server.
I0626 19:42:14.847961 18993 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
F0626 19:42:14.848804 18993 server.go:261] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename Type Size Used Priority none virtual 8388604 8388336 0]
snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a
snap.kubelet.daemon.service: Failed with result 'exit-code'.
snap.kubelet.daemon.service: Service hold-off time over, scheduling restart.
snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 4.
Stopped Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
Started Service for snap application kubelet.daemon.
I0626 19:42:15.279977 19480 server.go:407] Version: v1.13.7
I0626 19:42:15.280205 19480 plugins.go:103] No cloud provider specified.
W0626 19:42:15.280222 19480 server.go:552] standalone mode, no API client
W0626 19:42:15.333756 19480 server.go:464] No api server defined - no events will be sent to API server.
I0626 19:42:15.333777 19480 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
F0626 19:42:15.334666 19480 server.go:261] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename Type Size Used Priority none virtual 8388604 8388336 0]
snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a
snap.kubelet.daemon.service: Failed with result 'exit-code'.
snap.kubelet.daemon.service: Service hold-off time over, scheduling restart.
snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 5.
Stopped Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
Started Service for snap application kubelet.daemon.
I0626 19:42:15.813940 19568 server.go:407] Version: v1.13.7
I0626 19:42:15.814122 19568 plugins.go:103] No cloud provider specified.
W0626 19:42:15.814140 19568 server.go:552] standalone mode, no API client
W0626 19:42:15.854765 19568 server.go:464] No api server defined - no events will be sent to API server.
I0626 19:42:15.854789 19568 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
F0626 19:42:15.855623 19568 server.go:261] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename Type Size Used Priority none virtual 8388604 8388336 0]
snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a
snap.kubelet.daemon.service: Failed with result 'exit-code'.
snap.kubelet.daemon.service: Service hold-off time over, scheduling restart.
snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 6.
Stopped Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
Started Service for snap application kubelet.daemon.
I0626 19:42:16.133089 19646 server.go:407] Version: v1.13.7
I0626 19:42:16.133313 19646 plugins.go:103] No cloud provider specified.
W0626 19:42:16.133330 19646 server.go:552] standalone mode, no API client
W0626 19:42:16.190355 19646 server.go:464] No api server defined - no events will be sent to API server.
I0626 19:42:16.190378 19646 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
F0626 19:42:16.191144 19646 server.go:261] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename Type Size Used Priority none virtual 8388604 8388336 0]
snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a
snap.kubelet.daemon.service: Failed with result 'exit-code'.
snap.kubelet.daemon.service: Service hold-off time over, scheduling restart.
snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 7.
Stopped Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
Started Service for snap application kubelet.daemon.
I0626 19:42:16.556103 19779 server.go:407] Version: v1.13.7
I0626 19:42:16.556365 19779 plugins.go:103] No cloud provider specified.
W0626 19:42:16.556385 19779 server.go:552] standalone mode, no API client
W0626 19:42:16.616881 19779 server.go:464] No api server defined - no events will be sent to API server.
I0626 19:42:16.616921 19779 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
F0626 19:42:16.617865 19779 server.go:261] failed to run Kubelet: Running with swap on is not supported, please disable swap! or set --fail-swap-on flag to false. /proc/swaps contained: [Filename Type Size Used Priority none virtual 8388604 8388336 0]
snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a
snap.kubelet.daemon.service: Failed with result 'exit-code'.
snap.kubelet.daemon.service: Service hold-off time over, scheduling restart.
snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 8.
Stopped Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Start request repeated too quickly.
snap.kubelet.daemon.service: Failed with result 'exit-code'.
Failed to start Service for snap application kubelet.daemon.
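The fatal "Running with swap on is not supported" above is the kubelet's standard swap check: it refuses to start while /proc/swaps lists an active swap device unless it is started with --fail-swap-on=false. A minimal remediation sketch, assuming swap is configured on the worker itself and can safely be turned off (the command name mirrors the first line of this capture; adjust for your environment):

  juju run --unit kubernetes-worker/0 -- swapoff -a    # then comment out any swap entry in the unit's /etc/fstab so it stays off across reboots

Note that the /proc/swaps entry here is "none" of type "virtual", which often means the swap is provided by the host or hypervisor rather than a local partition; in that case it has to be disabled there, or the check relaxed instead. Relaxing the check means passing --fail-swap-on=false to the kubelet, which with Charmed Kubernetes would normally be done through the kubernetes-worker charm's kubelet-extra-args option (if the charm revision in use exposes it) rather than by editing the snap's args file by hand. The 19:50 entries below show the kubelet getting past this check, so swap appears to have been dealt with between the two sessions.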
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
Started Service for snap application kubelet.daemon.
I0626 19:50:40.958860 50011 controller.go:101] kubelet config controller: starting controller
I0626 19:50:40.959843 50011 controller.go:267] kubelet config controller: ensuring filesystem is set up correctly
I0626 19:50:40.959994 50011 fsstore.go:59] kubelet config controller: initializing config checkpoints directory "/root/cdk/kubelet/dynamic-config/store"
I0626 19:50:40.970327 50011 server.go:407] Version: v1.13.7
I0626 19:50:40.970568 50011 plugins.go:103] No cloud provider specified.
I0626 19:50:40.976038 50011 controller.go:207] kubelet config controller: local source is assigned, will not start remote config source informer
I0626 19:50:40.976207 50011 controller.go:197] kubelet config controller: starting status sync loop
I0626 19:50:40.976241 50011 status.go:145] kubelet config controller: updating Node.Status.Config
I0626 19:50:40.976496 50011 controller.go:226] kubelet config controller: starting Node informer
I0626 19:50:40.977001 50011 controller.go:231] kubelet config controller: starting Kubelet config sync loop
I0626 19:50:41.033705 50011 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
E0626 19:50:41.034008 50011 status.go:155] kubelet config controller: could not get Node "juju-c4ad65-7", will not sync status, error: nodes "juju-c4ad65-7" not found
I0626 19:50:41.049221 50011 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: []
I0626 19:50:41.049258 50011 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
I0626 19:50:41.049377 50011 container_manager_linux.go:272] Creating device plugin manager: true
I0626 19:50:41.049517 50011 state_mem.go:36] [cpumanager] initializing new in-memory state store
W0626 19:50:41.258565 50011 server.go:740] write /proc/self/oom_score_adj: permission denied
I0626 19:50:41.276165 50011 kubelet.go:306] Watching apiserver
I0626 19:50:41.308168 50011 client.go:75] Connecting to docker on unix:///var/run/docker.sock
I0626 19:50:41.317292 50011 client.go:104] Start docker client with request timeout=2m0s
W0626 19:50:41.319134 50011 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0626 19:50:41.319182 50011 docker_service.go:236] Hairpin mode set to "hairpin-veth"
W0626 19:50:41.321320 50011 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
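The conntrack warning that ends this startup phase is unrelated to the crashes that follow: the kubelet (and kube-proxy) call the conntrack userspace tool to clean up stale connection-tracking entries, and it simply is not installed on this Ubuntu 18.04 worker. Installing the package is usually enough to clear it, for example (a sketch, assuming normal apt access on the unit):

  juju run --unit kubernetes-worker/0 -- apt-get install -y conntrack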
I0626 19:50:41.321560 50011 docker_service.go:251] Docker cri networking managed by cni
I0626 19:50:41.348320 50011 docker_service.go:256] Docker Info: &{ID:EX2C:WFGA:EJNX:77IH:DO4G:YDZY:MST3:56PA:7XU5:32DM:6YSV:YDGQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:vfs DriverStatus:[] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-06-26T19:50:41.322558591Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-52-generic OperatingSystem:Ubuntu 18.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00074a2a0 NCPU:4 MemTotal:4096000000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:juju-c4ad65-7 Labels:[] ExperimentalBuild:false ServerVersion:18.09.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:N/A Expected:N/A} InitCommit:{ID:v0.18.0 Expected:fec3683b971d9c3ef73f284f176672c44b448662} SecurityOptions:[name=apparmor name=seccomp,profile=default]}
I0626 19:50:41.348412 50011 docker_service.go:269] Setting cgroupDriver to cgroupfs
I0626 19:50:41.392566 50011 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.09.5, apiVersion: 1.39.0
W0626 19:50:41.404246 50011 probe.go:271] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0626 19:50:41.409599 50011 server.go:999] Started kubelet
I0626 19:50:41.409721 50011 server.go:137] Starting to listen on 0.0.0.0:10250
E0626 19:50:41.429576 50011 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
I0626 19:50:41.430539 50011 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0626 19:50:41.430586 50011 status_manager.go:152] Starting to sync pod status with apiserver
I0626 19:50:41.430610 50011 kubelet.go:1829] Starting kubelet main sync loop.
I0626 19:50:41.430626 50011 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
I0626 19:50:41.437670 50011 server.go:333] Adding debug handlers to kubelet server.
I0626 19:50:41.437692 50011 volume_manager.go:248] Starting Kubelet Volume Manager
I0626 19:50:41.437715 50011 desired_state_of_world_populator.go:130] Desired state populator starts to run
W0626 19:50:41.502511 50011 manager.go:349] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory
I0626 19:50:41.530722 50011 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet]
E0626 19:50:41.540201 50011 kubelet.go:2266] node "juju-c4ad65-7" not found
I0626 19:50:41.540408 50011 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
I0626 19:50:41.547183 50011 kubelet_node_status.go:72] Attempting to register node juju-c4ad65-7
I0626 19:50:41.561517 50011 kubelet_node_status.go:75] Successfully registered node juju-c4ad65-7
I0626 19:50:41.566026 50011 watch.go:89] kubelet config controller: initial Node watch event
E0626 19:50:41.569607 50011 container_manager_linux.go:98] Unable to ensure the docker processes run in the desired containers: errors moving "docker" pid: failed to apply oom score -999 to PID 15815: write /proc/15815/oom_score_adj: permission denied
I0626 19:50:41.641738 50011 cpu_manager.go:155] [cpumanager] starting with none policy
I0626 19:50:41.641773 50011 cpu_manager.go:156] [cpumanager] reconciling every 10s
I0626 19:50:41.641786 50011 policy_none.go:42] [cpumanager] none policy: Start
F0626 19:50:41.642421 50011 kubelet.go:1384] Failed to start ContainerManager [open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied, open /proc/sys/vm/overcommit_memory: permission denied]
snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a
snap.kubelet.daemon.service: Failed with result 'exit-code'.
snap.kubelet.daemon.service: Service hold-off time over, scheduling restart.
snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 1.
Stopped Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
Started Service for snap application kubelet.daemon.
I0626 19:50:42.027631 50329 controller.go:101] kubelet config controller: starting controller
I0626 19:50:42.028451 50329 controller.go:267] kubelet config controller: ensuring filesystem is set up correctly
I0626 19:50:42.028483 50329 fsstore.go:59] kubelet config controller: initializing config checkpoints directory "/root/cdk/kubelet/dynamic-config/store"
I0626 19:50:42.037132 50329 server.go:407] Version: v1.13.7
I0626 19:50:42.037378 50329 plugins.go:103] No cloud provider specified.
I0626 19:50:42.040133 50329 controller.go:207] kubelet config controller: local source is assigned, will not start remote config source informer
I0626 19:50:42.040240 50329 controller.go:226] kubelet config controller: starting Node informer
I0626 19:50:42.040299 50329 controller.go:197] kubelet config controller: starting status sync loop
I0626 19:50:42.040338 50329 status.go:145] kubelet config controller: updating Node.Status.Config
I0626 19:50:42.040783 50329 controller.go:231] kubelet config controller: starting Kubelet config sync loop
I0626 19:50:42.052174 50329 watch.go:89] kubelet config controller: initial Node watch event
I0626 19:50:42.092924 50329 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to / I0626 19:50:42.093301 50329 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: [] I0626 19:50:42.093324 50329 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms} I0626 19:50:42.093444 50329 container_manager_linux.go:272] Creating device plugin manager: true I0626 19:50:42.093482 50329 state_mem.go:36] [cpumanager] initializing new in-memory state store I0626 19:50:42.093634 50329 state_mem.go:84] [cpumanager] updated default cpuset: "" I0626 19:50:42.093653 50329 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]" W0626 19:50:42.294070 50329 server.go:740] write /proc/self/oom_score_adj: permission denied I0626 19:50:42.294151 50329 kubelet.go:306] Watching apiserver I0626 19:50:42.296808 50329 client.go:75] Connecting to docker on unix:///var/run/docker.sock I0626 19:50:42.296838 50329 client.go:104] Start docker client with request timeout=2m0s W0626 19:50:42.298407 50329 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth" I0626 19:50:42.298432 50329 docker_service.go:236] Hairpin mode set to "hairpin-veth" W0626 19:50:42.302927 50329 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup. 
I0626 19:50:42.303398 50329 docker_service.go:251] Docker cri networking managed by cni I0626 19:50:42.320115 50329 docker_service.go:256] Docker Info: &{ID:EX2C:WFGA:EJNX:77IH:DO4G:YDZY:MST3:56PA:7XU5:32DM:6YSV:YDGQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:vfs DriverStatus:[] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-06-26T19:50:42.304548784Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-52-generic OperatingSystem:Ubuntu 18.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000268070 NCPU:4 MemTotal:4096000000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:juju-c4ad65-7 Labels:[] ExperimentalBuild:false ServerVersion:18.09.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:N/A Expected:N/A} InitCommit:{ID:v0.18.0 Expected:fec3683b971d9c3ef73f284f176672c44b448662} SecurityOptions:[name=apparmor name=seccomp,profile=default]} I0626 19:50:42.320207 50329 docker_service.go:269] Setting cgroupDriver to cgroupfs I0626 19:50:42.334502 50329 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.09.5, apiVersion: 1.39.0 I0626 19:50:42.335894 50329 server.go:999] Started kubelet E0626 19:50:42.336145 50329 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache I0626 19:50:42.336324 50329 server.go:137] Starting to listen on 0.0.0.0:10250 I0626 19:50:42.336532 50329 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer I0626 19:50:42.336560 50329 status_manager.go:152] Starting to sync pod status with apiserver I0626 19:50:42.336581 50329 kubelet.go:1829] Starting kubelet main sync loop. I0626 19:50:42.336608 50329 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful] I0626 19:50:42.336909 50329 volume_manager.go:248] Starting Kubelet Volume Manager I0626 19:50:42.336983 50329 server.go:333] Adding debug handlers to kubelet server. 
I0626 19:50:42.337723 50329 desired_state_of_world_populator.go:130] Desired state populator starts to run W0626 19:50:42.358639 50329 manager.go:349] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory I0626 19:50:42.436714 50329 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet] I0626 19:50:42.437174 50329 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach I0626 19:50:42.442470 50329 kubelet_node_status.go:72] Attempting to register node juju-c4ad65-7 I0626 19:50:42.456216 50329 kubelet_node_status.go:114] Node juju-c4ad65-7 was previously registered I0626 19:50:42.456242 50329 kubelet_node_status.go:75] Successfully registered node juju-c4ad65-7 I0626 19:50:42.486644 50329 cpu_manager.go:155] [cpumanager] starting with none policy I0626 19:50:42.486672 50329 cpu_manager.go:156] [cpumanager] reconciling every 10s I0626 19:50:42.486686 50329 policy_none.go:42] [cpumanager] none policy: Start F0626 19:50:42.487272 50329 kubelet.go:1384] Failed to start ContainerManager [open /proc/sys/vm/overcommit_memory: permission denied, open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied] snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a snap.kubelet.daemon.service: Failed with result 'exit-code'. snap.kubelet.daemon.service: Service hold-off time over, scheduling restart. snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 2. Stopped Service for snap application kubelet.daemon. snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted Started Service for snap application kubelet.daemon. I0626 19:50:42.786122 50515 controller.go:101] kubelet config controller: starting controller I0626 19:50:42.787221 50515 controller.go:267] kubelet config controller: ensuring filesystem is set up correctly I0626 19:50:42.787440 50515 fsstore.go:59] kubelet config controller: initializing config checkpoints directory "/root/cdk/kubelet/dynamic-config/store" I0626 19:50:42.798725 50515 server.go:407] Version: v1.13.7 I0626 19:50:42.799107 50515 plugins.go:103] No cloud provider specified. I0626 19:50:42.801718 50515 controller.go:207] kubelet config controller: local source is assigned, will not start remote config source informer I0626 19:50:42.801834 50515 controller.go:197] kubelet config controller: starting status sync loop I0626 19:50:42.801986 50515 status.go:145] kubelet config controller: updating Node.Status.Config I0626 19:50:42.802370 50515 controller.go:226] kubelet config controller: starting Node informer I0626 19:50:42.802672 50515 controller.go:231] kubelet config controller: starting Kubelet config sync loop I0626 19:50:42.816712 50515 watch.go:89] kubelet config controller: initial Node watch event I0626 19:50:42.843971 50515 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to / I0626 19:50:42.844350 50515 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: [] I0626 19:50:42.844372 50515 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms} I0626 19:50:42.844495 50515 container_manager_linux.go:272] Creating device plugin manager: true I0626 19:50:42.844534 50515 state_mem.go:36] [cpumanager] initializing new in-memory state store I0626 19:50:42.844656 50515 state_mem.go:84] [cpumanager] updated default cpuset: "" I0626 19:50:42.844670 50515 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]" W0626 19:50:43.045102 50515 server.go:740] write /proc/self/oom_score_adj: permission denied I0626 19:50:43.045197 50515 kubelet.go:306] Watching apiserver I0626 19:50:43.047762 50515 client.go:75] Connecting to docker on unix:///var/run/docker.sock I0626 19:50:43.047792 50515 client.go:104] Start docker client with request timeout=2m0s W0626 19:50:43.049299 50515 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth" I0626 19:50:43.049338 50515 docker_service.go:236] Hairpin mode set to "hairpin-veth" W0626 19:50:43.051663 50515 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup. 
I0626 19:50:43.051868 50515 docker_service.go:251] Docker cri networking managed by cni I0626 19:50:43.066059 50515 docker_service.go:256] Docker Info: &{ID:EX2C:WFGA:EJNX:77IH:DO4G:YDZY:MST3:56PA:7XU5:32DM:6YSV:YDGQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:vfs DriverStatus:[] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-06-26T19:50:43.052981911Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-52-generic OperatingSystem:Ubuntu 18.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0006bb1f0 NCPU:4 MemTotal:4096000000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:juju-c4ad65-7 Labels:[] ExperimentalBuild:false ServerVersion:18.09.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:N/A Expected:N/A} InitCommit:{ID:v0.18.0 Expected:fec3683b971d9c3ef73f284f176672c44b448662} SecurityOptions:[name=apparmor name=seccomp,profile=default]} I0626 19:50:43.066192 50515 docker_service.go:269] Setting cgroupDriver to cgroupfs I0626 19:50:43.087002 50515 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.09.5, apiVersion: 1.39.0 I0626 19:50:43.089168 50515 server.go:999] Started kubelet I0626 19:50:43.089378 50515 server.go:137] Starting to listen on 0.0.0.0:10250 E0626 19:50:43.089290 50515 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache I0626 19:50:43.089919 50515 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer I0626 19:50:43.090065 50515 status_manager.go:152] Starting to sync pod status with apiserver I0626 19:50:43.090176 50515 kubelet.go:1829] Starting kubelet main sync loop. I0626 19:50:43.090292 50515 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful] I0626 19:50:43.090212 50515 server.go:333] Adding debug handlers to kubelet server. 
I0626 19:50:43.090100 50515 desired_state_of_world_populator.go:130] Desired state populator starts to run I0626 19:50:43.090089 50515 volume_manager.go:248] Starting Kubelet Volume Manager W0626 19:50:43.111064 50515 manager.go:349] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory I0626 19:50:43.190332 50515 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach I0626 19:50:43.192851 50515 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet] I0626 19:50:43.193445 50515 kubelet_node_status.go:72] Attempting to register node juju-c4ad65-7 I0626 19:50:43.213745 50515 kubelet_node_status.go:114] Node juju-c4ad65-7 was previously registered I0626 19:50:43.213773 50515 kubelet_node_status.go:75] Successfully registered node juju-c4ad65-7 I0626 19:50:43.234122 50515 cpu_manager.go:155] [cpumanager] starting with none policy I0626 19:50:43.234152 50515 cpu_manager.go:156] [cpumanager] reconciling every 10s I0626 19:50:43.234165 50515 policy_none.go:42] [cpumanager] none policy: Start F0626 19:50:43.234737 50515 kubelet.go:1384] Failed to start ContainerManager [open /proc/sys/vm/overcommit_memory: permission denied, open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied] snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a snap.kubelet.daemon.service: Failed with result 'exit-code'. snap.kubelet.daemon.service: Service hold-off time over, scheduling restart. snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 3. Stopped Service for snap application kubelet.daemon. snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted Started Service for snap application kubelet.daemon. I0626 19:50:43.516888 50682 controller.go:101] kubelet config controller: starting controller I0626 19:50:43.517138 50682 controller.go:267] kubelet config controller: ensuring filesystem is set up correctly I0626 19:50:43.517160 50682 fsstore.go:59] kubelet config controller: initializing config checkpoints directory "/root/cdk/kubelet/dynamic-config/store" I0626 19:50:43.526425 50682 server.go:407] Version: v1.13.7 I0626 19:50:43.526580 50682 plugins.go:103] No cloud provider specified. I0626 19:50:43.534017 50682 controller.go:207] kubelet config controller: local source is assigned, will not start remote config source informer I0626 19:50:43.534138 50682 controller.go:197] kubelet config controller: starting status sync loop I0626 19:50:43.534173 50682 status.go:145] kubelet config controller: updating Node.Status.Config I0626 19:50:43.534522 50682 controller.go:226] kubelet config controller: starting Node informer I0626 19:50:43.534853 50682 controller.go:231] kubelet config controller: starting Kubelet config sync loop I0626 19:50:43.549505 50682 watch.go:89] kubelet config controller: initial Node watch event I0626 19:50:43.573129 50682 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to / I0626 19:50:43.573559 50682 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: [] I0626 19:50:43.573581 50682 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms} I0626 19:50:43.573699 50682 container_manager_linux.go:272] Creating device plugin manager: true I0626 19:50:43.573737 50682 state_mem.go:36] [cpumanager] initializing new in-memory state store I0626 19:50:43.573880 50682 state_mem.go:84] [cpumanager] updated default cpuset: "" I0626 19:50:43.573898 50682 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]" W0626 19:50:43.774292 50682 server.go:740] write /proc/self/oom_score_adj: permission denied I0626 19:50:43.774385 50682 kubelet.go:306] Watching apiserver I0626 19:50:43.777321 50682 client.go:75] Connecting to docker on unix:///var/run/docker.sock I0626 19:50:43.777364 50682 client.go:104] Start docker client with request timeout=2m0s W0626 19:50:43.780100 50682 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth" I0626 19:50:43.780138 50682 docker_service.go:236] Hairpin mode set to "hairpin-veth" W0626 19:50:43.786677 50682 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup. 
I0626 19:50:43.786869 50682 docker_service.go:251] Docker cri networking managed by cni I0626 19:50:43.799701 50682 docker_service.go:256] Docker Info: &{ID:EX2C:WFGA:EJNX:77IH:DO4G:YDZY:MST3:56PA:7XU5:32DM:6YSV:YDGQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:vfs DriverStatus:[] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-06-26T19:50:43.788007315Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-52-generic OperatingSystem:Ubuntu 18.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000818230 NCPU:4 MemTotal:4096000000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:juju-c4ad65-7 Labels:[] ExperimentalBuild:false ServerVersion:18.09.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:N/A Expected:N/A} InitCommit:{ID:v0.18.0 Expected:fec3683b971d9c3ef73f284f176672c44b448662} SecurityOptions:[name=apparmor name=seccomp,profile=default]} I0626 19:50:43.799835 50682 docker_service.go:269] Setting cgroupDriver to cgroupfs I0626 19:50:43.813417 50682 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.09.5, apiVersion: 1.39.0 I0626 19:50:43.814651 50682 server.go:999] Started kubelet E0626 19:50:43.814688 50682 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache I0626 19:50:43.814817 50682 server.go:137] Starting to listen on 0.0.0.0:10250 I0626 19:50:43.815307 50682 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer I0626 19:50:43.815361 50682 status_manager.go:152] Starting to sync pod status with apiserver I0626 19:50:43.815373 50682 kubelet.go:1829] Starting kubelet main sync loop. I0626 19:50:43.815385 50682 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful] I0626 19:50:43.815490 50682 volume_manager.go:248] Starting Kubelet Volume Manager I0626 19:50:43.815535 50682 server.go:333] Adding debug handlers to kubelet server. 
I0626 19:50:43.815585 50682 desired_state_of_world_populator.go:130] Desired state populator starts to run W0626 19:50:43.833952 50682 manager.go:349] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory I0626 19:50:43.915540 50682 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet] I0626 19:50:43.915619 50682 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach I0626 19:50:43.917976 50682 kubelet_node_status.go:72] Attempting to register node juju-c4ad65-7 I0626 19:50:43.929859 50682 kubelet_node_status.go:114] Node juju-c4ad65-7 was previously registered I0626 19:50:43.929890 50682 kubelet_node_status.go:75] Successfully registered node juju-c4ad65-7 I0626 19:50:43.950205 50682 cpu_manager.go:155] [cpumanager] starting with none policy I0626 19:50:43.950230 50682 cpu_manager.go:156] [cpumanager] reconciling every 10s I0626 19:50:43.950241 50682 policy_none.go:42] [cpumanager] none policy: Start F0626 19:50:43.950835 50682 kubelet.go:1384] Failed to start ContainerManager [open /proc/sys/vm/overcommit_memory: permission denied, open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied] snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a snap.kubelet.daemon.service: Failed with result 'exit-code'. snap.kubelet.daemon.service: Service hold-off time over, scheduling restart. snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 4. Stopped Service for snap application kubelet.daemon. snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted Started Service for snap application kubelet.daemon. I0626 19:50:44.294873 50817 controller.go:101] kubelet config controller: starting controller I0626 19:50:44.295791 50817 controller.go:267] kubelet config controller: ensuring filesystem is set up correctly I0626 19:50:44.295931 50817 fsstore.go:59] kubelet config controller: initializing config checkpoints directory "/root/cdk/kubelet/dynamic-config/store" I0626 19:50:44.306399 50817 server.go:407] Version: v1.13.7 I0626 19:50:44.306700 50817 plugins.go:103] No cloud provider specified. I0626 19:50:44.309274 50817 controller.go:207] kubelet config controller: local source is assigned, will not start remote config source informer I0626 19:50:44.310180 50817 controller.go:197] kubelet config controller: starting status sync loop I0626 19:50:44.310444 50817 status.go:145] kubelet config controller: updating Node.Status.Config I0626 19:50:44.310380 50817 controller.go:231] kubelet config controller: starting Kubelet config sync loop I0626 19:50:44.310399 50817 controller.go:226] kubelet config controller: starting Node informer I0626 19:50:44.328068 50817 watch.go:89] kubelet config controller: initial Node watch event I0626 19:50:44.361186 50817 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to / I0626 19:50:44.361667 50817 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: [] I0626 19:50:44.361690 50817 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms} I0626 19:50:44.361828 50817 container_manager_linux.go:272] Creating device plugin manager: true I0626 19:50:44.361866 50817 state_mem.go:36] [cpumanager] initializing new in-memory state store I0626 19:50:44.362016 50817 state_mem.go:84] [cpumanager] updated default cpuset: "" I0626 19:50:44.362031 50817 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]" W0626 19:50:44.562521 50817 server.go:740] write /proc/self/oom_score_adj: permission denied I0626 19:50:44.562643 50817 kubelet.go:306] Watching apiserver I0626 19:50:44.565275 50817 client.go:75] Connecting to docker on unix:///var/run/docker.sock I0626 19:50:44.565304 50817 client.go:104] Start docker client with request timeout=2m0s W0626 19:50:44.566904 50817 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth" I0626 19:50:44.566937 50817 docker_service.go:236] Hairpin mode set to "hairpin-veth" W0626 19:50:44.569575 50817 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup. 
I0626 19:50:44.569804 50817 docker_service.go:251] Docker cri networking managed by cni I0626 19:50:44.588154 50817 docker_service.go:256] Docker Info: &{ID:EX2C:WFGA:EJNX:77IH:DO4G:YDZY:MST3:56PA:7XU5:32DM:6YSV:YDGQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:vfs DriverStatus:[] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-06-26T19:50:44.571001574Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-52-generic OperatingSystem:Ubuntu 18.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000376690 NCPU:4 MemTotal:4096000000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:juju-c4ad65-7 Labels:[] ExperimentalBuild:false ServerVersion:18.09.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:N/A Expected:N/A} InitCommit:{ID:v0.18.0 Expected:fec3683b971d9c3ef73f284f176672c44b448662} SecurityOptions:[name=apparmor name=seccomp,profile=default]} I0626 19:50:44.588322 50817 docker_service.go:269] Setting cgroupDriver to cgroupfs I0626 19:50:44.604399 50817 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.09.5, apiVersion: 1.39.0 I0626 19:50:44.605742 50817 server.go:999] Started kubelet E0626 19:50:44.605791 50817 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache I0626 19:50:44.605828 50817 server.go:137] Starting to listen on 0.0.0.0:10250 I0626 19:50:44.606434 50817 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer I0626 19:50:44.606467 50817 status_manager.go:152] Starting to sync pod status with apiserver I0626 19:50:44.606480 50817 kubelet.go:1829] Starting kubelet main sync loop. I0626 19:50:44.606498 50817 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful] I0626 19:50:44.606557 50817 server.go:333] Adding debug handlers to kubelet server. 
I0626 19:50:44.606719 50817 volume_manager.go:248] Starting Kubelet Volume Manager I0626 19:50:44.606753 50817 desired_state_of_world_populator.go:130] Desired state populator starts to run W0626 19:50:44.634317 50817 manager.go:349] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory I0626 19:50:44.706681 50817 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet] I0626 19:50:44.706696 50817 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach I0626 19:50:44.709689 50817 kubelet_node_status.go:72] Attempting to register node juju-c4ad65-7 I0626 19:50:44.739424 50817 kubelet_node_status.go:114] Node juju-c4ad65-7 was previously registered I0626 19:50:44.739463 50817 kubelet_node_status.go:75] Successfully registered node juju-c4ad65-7 I0626 19:50:44.766693 50817 cpu_manager.go:155] [cpumanager] starting with none policy I0626 19:50:44.766721 50817 cpu_manager.go:156] [cpumanager] reconciling every 10s I0626 19:50:44.766734 50817 policy_none.go:42] [cpumanager] none policy: Start F0626 19:50:44.767320 50817 kubelet.go:1384] Failed to start ContainerManager [open /proc/sys/vm/overcommit_memory: permission denied, open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied] snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a snap.kubelet.daemon.service: Failed with result 'exit-code'. snap.kubelet.daemon.service: Service hold-off time over, scheduling restart. snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 5. Stopped Service for snap application kubelet.daemon. snap.kubelet.daemon.service: Start request repeated too quickly. snap.kubelet.daemon.service: Failed with result 'exit-code'. Failed to start Service for snap application kubelet.daemon. snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted Started Service for snap application kubelet.daemon. I0626 19:54:43.347944 55420 controller.go:101] kubelet config controller: starting controller I0626 19:54:43.348949 55420 controller.go:267] kubelet config controller: ensuring filesystem is set up correctly I0626 19:54:43.349191 55420 fsstore.go:59] kubelet config controller: initializing config checkpoints directory "/root/cdk/kubelet/dynamic-config/store" I0626 19:54:43.358710 55420 server.go:407] Version: v1.13.7 I0626 19:54:43.359067 55420 plugins.go:103] No cloud provider specified. I0626 19:54:43.361875 55420 controller.go:207] kubelet config controller: local source is assigned, will not start remote config source informer I0626 19:54:43.361984 55420 controller.go:197] kubelet config controller: starting status sync loop I0626 19:54:43.362008 55420 status.go:145] kubelet config controller: updating Node.Status.Config I0626 19:54:43.362326 55420 controller.go:226] kubelet config controller: starting Node informer I0626 19:54:43.362573 55420 controller.go:231] kubelet config controller: starting Kubelet config sync loop I0626 19:54:43.376754 55420 watch.go:89] kubelet config controller: initial Node watch event I0626 19:54:43.403436 55420 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to / I0626 19:54:43.403886 55420 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: [] I0626 19:54:43.403911 55420 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms} I0626 19:54:43.404074 55420 container_manager_linux.go:272] Creating device plugin manager: true I0626 19:54:43.404116 55420 state_mem.go:36] [cpumanager] initializing new in-memory state store I0626 19:54:43.404379 55420 state_mem.go:84] [cpumanager] updated default cpuset: "" I0626 19:54:43.404399 55420 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]" W0626 19:54:43.604792 55420 server.go:740] write /proc/self/oom_score_adj: permission denied I0626 19:54:43.604871 55420 kubelet.go:306] Watching apiserver I0626 19:54:43.607748 55420 client.go:75] Connecting to docker on unix:///var/run/docker.sock I0626 19:54:43.607779 55420 client.go:104] Start docker client with request timeout=2m0s W0626 19:54:43.609698 55420 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth" I0626 19:54:43.609724 55420 docker_service.go:236] Hairpin mode set to "hairpin-veth" W0626 19:54:43.614674 55420 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup. 
I0626 19:54:43.614892 55420 docker_service.go:251] Docker cri networking managed by cni I0626 19:54:43.631836 55420 docker_service.go:256] Docker Info: &{ID:EX2C:WFGA:EJNX:77IH:DO4G:YDZY:MST3:56PA:7XU5:32DM:6YSV:YDGQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:vfs DriverStatus:[] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-06-26T19:54:43.616403105Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-52-generic OperatingSystem:Ubuntu 18.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00062d180 NCPU:4 MemTotal:4096000000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:juju-c4ad65-7 Labels:[] ExperimentalBuild:false ServerVersion:18.09.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:N/A Expected:N/A} InitCommit:{ID:v0.18.0 Expected:fec3683b971d9c3ef73f284f176672c44b448662} SecurityOptions:[name=apparmor name=seccomp,profile=default]} I0626 19:54:43.631940 55420 docker_service.go:269] Setting cgroupDriver to cgroupfs I0626 19:54:43.656421 55420 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.09.5, apiVersion: 1.39.0 I0626 19:54:43.658075 55420 server.go:999] Started kubelet I0626 19:54:43.658293 55420 server.go:137] Starting to listen on 0.0.0.0:10250 E0626 19:54:43.658324 55420 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache I0626 19:54:43.659159 55420 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer I0626 19:54:43.661352 55420 status_manager.go:152] Starting to sync pod status with apiserver I0626 19:54:43.661368 55420 kubelet.go:1829] Starting kubelet main sync loop. I0626 19:54:43.661388 55420 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful] I0626 19:54:43.661393 55420 volume_manager.go:248] Starting Kubelet Volume Manager I0626 19:54:43.661403 55420 desired_state_of_world_populator.go:130] Desired state populator starts to run I0626 19:54:43.662953 55420 server.go:333] Adding debug handlers to kubelet server. 
W0626 19:54:43.691019 55420 manager.go:349] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory
I0626 19:54:43.761499 55420 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet]
I0626 19:54:43.762989 55420 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
I0626 19:54:43.765226 55420 kubelet_node_status.go:72] Attempting to register node juju-c4ad65-7
I0626 19:54:43.794648 55420 kubelet_node_status.go:114] Node juju-c4ad65-7 was previously registered
I0626 19:54:43.794703 55420 kubelet_node_status.go:75] Successfully registered node juju-c4ad65-7
I0626 19:54:43.800118 55420 setters.go:520] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-06-26 19:54:43.800104422 +0000 UTC m=+0.555962444 LastTransitionTime:2019-06-26 19:54:43.800104422 +0000 UTC m=+0.555962444 Reason:KubeletNotReady Message:container runtime status check may not have completed yet,Missing node capacity for resources: ephemeral-storage}
I0626 19:54:43.826521 55420 cpu_manager.go:155] [cpumanager] starting with none policy
I0626 19:54:43.826543 55420 cpu_manager.go:156] [cpumanager] reconciling every 10s
I0626 19:54:43.826564 55420 policy_none.go:42] [cpumanager] none policy: Start
F0626 19:54:43.827259 55420 kubelet.go:1384] Failed to start ContainerManager [open /proc/sys/vm/overcommit_memory: permission denied, open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied]
snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a
snap.kubelet.daemon.service: Failed with result 'exit-code'.
snap.kubelet.daemon.service: Service hold-off time over, scheduling restart.
snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 1.
Stopped Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
Started Service for snap application kubelet.daemon.
I0626 19:54:44.282129 55737 controller.go:101] kubelet config controller: starting controller
I0626 19:54:44.282884 55737 controller.go:267] kubelet config controller: ensuring filesystem is set up correctly
I0626 19:54:44.282923 55737 fsstore.go:59] kubelet config controller: initializing config checkpoints directory "/root/cdk/kubelet/dynamic-config/store"
I0626 19:54:44.292296 55737 server.go:407] Version: v1.13.7
I0626 19:54:44.292684 55737 plugins.go:103] No cloud provider specified.
I0626 19:54:44.295641 55737 controller.go:207] kubelet config controller: local source is assigned, will not start remote config source informer
I0626 19:54:44.295946 55737 controller.go:197] kubelet config controller: starting status sync loop
I0626 19:54:44.295972 55737 status.go:145] kubelet config controller: updating Node.Status.Config
I0626 19:54:44.296661 55737 controller.go:226] kubelet config controller: starting Node informer
I0626 19:54:44.296960 55737 controller.go:231] kubelet config controller: starting Kubelet config sync loop
I0626 19:54:44.318985 55737 watch.go:89] kubelet config controller: initial Node watch event
I0626 19:54:44.342581 55737 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to / I0626 19:54:44.343061 55737 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: [] I0626 19:54:44.343084 55737 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms} I0626 19:54:44.343231 55737 container_manager_linux.go:272] Creating device plugin manager: true I0626 19:54:44.343275 55737 state_mem.go:36] [cpumanager] initializing new in-memory state store I0626 19:54:44.343413 55737 state_mem.go:84] [cpumanager] updated default cpuset: "" I0626 19:54:44.343429 55737 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]" W0626 19:54:44.543857 55737 server.go:740] write /proc/self/oom_score_adj: permission denied I0626 19:54:44.543965 55737 kubelet.go:306] Watching apiserver I0626 19:54:44.546237 55737 client.go:75] Connecting to docker on unix:///var/run/docker.sock I0626 19:54:44.546268 55737 client.go:104] Start docker client with request timeout=2m0s W0626 19:54:44.547796 55737 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth" I0626 19:54:44.547821 55737 docker_service.go:236] Hairpin mode set to "hairpin-veth" W0626 19:54:44.549998 55737 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup. 
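The F-level "Failed to start ContainerManager" line above is the real blocker now: kubelet tries to write vm.overcommit_memory, kernel.panic and kernel.panic_on_oops at startup and gets permission denied. The surrounding "write /proc/self/oom_score_adj: permission denied", "open /dev/kmsg: no such file or directory" and "Failed to reset devices.list: Operation not permitted" messages fail the same way, and together they look like a kubelet that is not allowed to touch those kernel interfaces, which is what an unprivileged container (rather than a full machine or VM) typically produces. A quick confirmation from the same session could be (sketch, standard tools only):

juju run --unit kubernetes-worker/0 -- systemd-detect-virt --container
juju run --unit kubernetes-worker/0 -- ls -l /proc/sys/vm/overcommit_memory /proc/sys/kernel/panic /proc/sys/kernel/panic_on_oops /dev/kmsg

systemd-detect-virt prints the container technology (for example lxc) when run inside a container and none otherwise.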
I0626 19:54:44.550205 55737 docker_service.go:251] Docker cri networking managed by cni W0626 19:54:44.552381 55737 reflector.go:256] k8s.io/kubernetes/pkg/kubelet/kubelet.go:444: watch of *v1.Service ended with: too old resource version: 286 (398) I0626 19:54:44.563835 55737 docker_service.go:256] Docker Info: &{ID:EX2C:WFGA:EJNX:77IH:DO4G:YDZY:MST3:56PA:7XU5:32DM:6YSV:YDGQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:vfs DriverStatus:[] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-06-26T19:54:44.55137009Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-52-generic OperatingSystem:Ubuntu 18.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0007923f0 NCPU:4 MemTotal:4096000000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:juju-c4ad65-7 Labels:[] ExperimentalBuild:false ServerVersion:18.09.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:N/A Expected:N/A} InitCommit:{ID:v0.18.0 Expected:fec3683b971d9c3ef73f284f176672c44b448662} SecurityOptions:[name=apparmor name=seccomp,profile=default]} I0626 19:54:44.563933 55737 docker_service.go:269] Setting cgroupDriver to cgroupfs I0626 19:54:44.580091 55737 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.09.5, apiVersion: 1.39.0 I0626 19:54:44.582047 55737 server.go:999] Started kubelet E0626 19:54:44.582941 55737 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache I0626 19:54:44.584404 55737 server.go:137] Starting to listen on 0.0.0.0:10250 I0626 19:54:44.585594 55737 server.go:333] Adding debug handlers to kubelet server. I0626 19:54:44.586464 55737 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer I0626 19:54:44.586504 55737 status_manager.go:152] Starting to sync pod status with apiserver I0626 19:54:44.586547 55737 kubelet.go:1829] Starting kubelet main sync loop. 
I0626 19:54:44.586563 55737 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful] I0626 19:54:44.586879 55737 volume_manager.go:248] Starting Kubelet Volume Manager I0626 19:54:44.587898 55737 desired_state_of_world_populator.go:130] Desired state populator starts to run W0626 19:54:44.621053 55737 manager.go:349] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory I0626 19:54:44.686647 55737 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet] I0626 19:54:44.687036 55737 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach I0626 19:54:44.689389 55737 kubelet_node_status.go:72] Attempting to register node juju-c4ad65-7 I0626 19:54:44.730915 55737 kubelet_node_status.go:114] Node juju-c4ad65-7 was previously registered I0626 19:54:44.730949 55737 kubelet_node_status.go:75] Successfully registered node juju-c4ad65-7 I0626 19:54:44.745447 55737 cpu_manager.go:155] [cpumanager] starting with none policy I0626 19:54:44.745468 55737 cpu_manager.go:156] [cpumanager] reconciling every 10s I0626 19:54:44.745481 55737 policy_none.go:42] [cpumanager] none policy: Start F0626 19:54:44.746038 55737 kubelet.go:1384] Failed to start ContainerManager [open /proc/sys/vm/overcommit_memory: permission denied, open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied] snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a snap.kubelet.daemon.service: Failed with result 'exit-code'. snap.kubelet.daemon.service: Service hold-off time over, scheduling restart. snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 2. Stopped Service for snap application kubelet.daemon. snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted Started Service for snap application kubelet.daemon. I0626 19:54:45.043052 55914 controller.go:101] kubelet config controller: starting controller I0626 19:54:45.043263 55914 controller.go:267] kubelet config controller: ensuring filesystem is set up correctly I0626 19:54:45.043285 55914 fsstore.go:59] kubelet config controller: initializing config checkpoints directory "/root/cdk/kubelet/dynamic-config/store" I0626 19:54:45.052366 55914 server.go:407] Version: v1.13.7 I0626 19:54:45.052615 55914 plugins.go:103] No cloud provider specified. I0626 19:54:45.057032 55914 controller.go:207] kubelet config controller: local source is assigned, will not start remote config source informer I0626 19:54:45.057990 55914 controller.go:226] kubelet config controller: starting Node informer I0626 19:54:45.058350 55914 controller.go:197] kubelet config controller: starting status sync loop I0626 19:54:45.058368 55914 status.go:145] kubelet config controller: updating Node.Status.Config I0626 19:54:45.058565 55914 controller.go:231] kubelet config controller: starting Kubelet config sync loop I0626 19:54:45.073864 55914 watch.go:89] kubelet config controller: initial Node watch event I0626 19:54:45.108035 55914 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to / I0626 19:54:45.108435 55914 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: [] I0626 19:54:45.108457 55914 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms} I0626 19:54:45.108580 55914 container_manager_linux.go:272] Creating device plugin manager: true I0626 19:54:45.108619 55914 state_mem.go:36] [cpumanager] initializing new in-memory state store I0626 19:54:45.108735 55914 state_mem.go:84] [cpumanager] updated default cpuset: "" I0626 19:54:45.108750 55914 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]" W0626 19:54:45.309197 55914 server.go:740] write /proc/self/oom_score_adj: permission denied I0626 19:54:45.309296 55914 kubelet.go:306] Watching apiserver I0626 19:54:45.312122 55914 client.go:75] Connecting to docker on unix:///var/run/docker.sock I0626 19:54:45.312152 55914 client.go:104] Start docker client with request timeout=2m0s W0626 19:54:45.313905 55914 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth" I0626 19:54:45.313935 55914 docker_service.go:236] Hairpin mode set to "hairpin-veth" W0626 19:54:45.316428 55914 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup. 
I0626 19:54:45.316669 55914 docker_service.go:251] Docker cri networking managed by cni I0626 19:54:45.330197 55914 docker_service.go:256] Docker Info: &{ID:EX2C:WFGA:EJNX:77IH:DO4G:YDZY:MST3:56PA:7XU5:32DM:6YSV:YDGQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:vfs DriverStatus:[] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-06-26T19:54:45.317817124Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-52-generic OperatingSystem:Ubuntu 18.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0006c88c0 NCPU:4 MemTotal:4096000000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:juju-c4ad65-7 Labels:[] ExperimentalBuild:false ServerVersion:18.09.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:N/A Expected:N/A} InitCommit:{ID:v0.18.0 Expected:fec3683b971d9c3ef73f284f176672c44b448662} SecurityOptions:[name=apparmor name=seccomp,profile=default]} I0626 19:54:45.330301 55914 docker_service.go:269] Setting cgroupDriver to cgroupfs I0626 19:54:45.349890 55914 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.09.5, apiVersion: 1.39.0 I0626 19:54:45.351200 55914 server.go:999] Started kubelet I0626 19:54:45.351435 55914 server.go:137] Starting to listen on 0.0.0.0:10250 I0626 19:54:45.352015 55914 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer I0626 19:54:45.352047 55914 status_manager.go:152] Starting to sync pod status with apiserver I0626 19:54:45.352060 55914 kubelet.go:1829] Starting kubelet main sync loop. I0626 19:54:45.352073 55914 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful] E0626 19:54:45.352140 55914 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache I0626 19:54:45.352535 55914 volume_manager.go:248] Starting Kubelet Volume Manager I0626 19:54:45.352679 55914 desired_state_of_world_populator.go:130] Desired state populator starts to run I0626 19:54:45.352917 55914 server.go:333] Adding debug handlers to kubelet server. 
W0626 19:54:45.375352 55914 manager.go:349] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory I0626 19:54:45.452607 55914 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach I0626 19:54:45.452978 55914 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet] I0626 19:54:45.455146 55914 kubelet_node_status.go:72] Attempting to register node juju-c4ad65-7 I0626 19:54:45.473873 55914 kubelet_node_status.go:114] Node juju-c4ad65-7 was previously registered I0626 19:54:45.473913 55914 kubelet_node_status.go:75] Successfully registered node juju-c4ad65-7 I0626 19:54:45.499511 55914 cpu_manager.go:155] [cpumanager] starting with none policy I0626 19:54:45.499540 55914 cpu_manager.go:156] [cpumanager] reconciling every 10s I0626 19:54:45.499551 55914 policy_none.go:42] [cpumanager] none policy: Start F0626 19:54:45.500143 55914 kubelet.go:1384] Failed to start ContainerManager [open /proc/sys/vm/overcommit_memory: permission denied, open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied] snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a snap.kubelet.daemon.service: Failed with result 'exit-code'. snap.kubelet.daemon.service: Service hold-off time over, scheduling restart. snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 3. Stopped Service for snap application kubelet.daemon. snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted Started Service for snap application kubelet.daemon. I0626 19:54:45.776898 56055 controller.go:101] kubelet config controller: starting controller I0626 19:54:45.777459 56055 controller.go:267] kubelet config controller: ensuring filesystem is set up correctly I0626 19:54:45.777488 56055 fsstore.go:59] kubelet config controller: initializing config checkpoints directory "/root/cdk/kubelet/dynamic-config/store" I0626 19:54:45.787663 56055 server.go:407] Version: v1.13.7 I0626 19:54:45.788014 56055 plugins.go:103] No cloud provider specified. I0626 19:54:45.790491 56055 controller.go:207] kubelet config controller: local source is assigned, will not start remote config source informer I0626 19:54:45.790622 56055 controller.go:197] kubelet config controller: starting status sync loop I0626 19:54:45.790653 56055 status.go:145] kubelet config controller: updating Node.Status.Config I0626 19:54:45.791019 56055 controller.go:226] kubelet config controller: starting Node informer I0626 19:54:45.791292 56055 controller.go:231] kubelet config controller: starting Kubelet config sync loop I0626 19:54:45.809120 56055 watch.go:89] kubelet config controller: initial Node watch event I0626 19:54:45.842951 56055 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to / I0626 19:54:45.843619 56055 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: [] I0626 19:54:45.843657 56055 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms} I0626 19:54:45.843813 56055 container_manager_linux.go:272] Creating device plugin manager: true I0626 19:54:45.843863 56055 state_mem.go:36] [cpumanager] initializing new in-memory state store I0626 19:54:45.844019 56055 state_mem.go:84] [cpumanager] updated default cpuset: "" I0626 19:54:45.844039 56055 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]" W0626 19:54:46.044425 56055 server.go:740] write /proc/self/oom_score_adj: permission denied I0626 19:54:46.044506 56055 kubelet.go:306] Watching apiserver I0626 19:54:46.047464 56055 client.go:75] Connecting to docker on unix:///var/run/docker.sock I0626 19:54:46.047494 56055 client.go:104] Start docker client with request timeout=2m0s W0626 19:54:46.049849 56055 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth" I0626 19:54:46.050081 56055 docker_service.go:236] Hairpin mode set to "hairpin-veth" W0626 19:54:46.060047 56055 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup. 
I0626 19:54:46.060572 56055 docker_service.go:251] Docker cri networking managed by cni I0626 19:54:46.073858 56055 docker_service.go:256] Docker Info: &{ID:EX2C:WFGA:EJNX:77IH:DO4G:YDZY:MST3:56PA:7XU5:32DM:6YSV:YDGQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:vfs DriverStatus:[] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-06-26T19:54:46.061887275Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-52-generic OperatingSystem:Ubuntu 18.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00059a000 NCPU:4 MemTotal:4096000000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:juju-c4ad65-7 Labels:[] ExperimentalBuild:false ServerVersion:18.09.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:N/A Expected:N/A} InitCommit:{ID:v0.18.0 Expected:fec3683b971d9c3ef73f284f176672c44b448662} SecurityOptions:[name=apparmor name=seccomp,profile=default]} I0626 19:54:46.073993 56055 docker_service.go:269] Setting cgroupDriver to cgroupfs I0626 19:54:46.087795 56055 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.09.5, apiVersion: 1.39.0 I0626 19:54:46.089263 56055 server.go:999] Started kubelet E0626 19:54:46.089806 56055 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache I0626 19:54:46.090832 56055 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer I0626 19:54:46.091052 56055 status_manager.go:152] Starting to sync pod status with apiserver I0626 19:54:46.091072 56055 kubelet.go:1829] Starting kubelet main sync loop. I0626 19:54:46.091097 56055 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful] I0626 19:54:46.091536 56055 volume_manager.go:248] Starting Kubelet Volume Manager I0626 19:54:46.091793 56055 desired_state_of_world_populator.go:130] Desired state populator starts to run I0626 19:54:46.092252 56055 server.go:137] Starting to listen on 0.0.0.0:10250 I0626 19:54:46.097337 56055 server.go:333] Adding debug handlers to kubelet server. 
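Because every restart dumps the same few hundred lines, it can be easier to watch only the fatal lines; klog fatals start with F followed by the date, so grepping the collected output locally is enough (sketch):

juju run --unit kubernetes-worker/0 -- journalctl -o cat -u snap.kubelet.daemon --no-pager -n 500 | grep -E '^F[0-9]'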
W0626 19:54:46.115879 56055 manager.go:349] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory I0626 19:54:46.191215 56055 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet] I0626 19:54:46.192846 56055 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach I0626 19:54:46.195452 56055 kubelet_node_status.go:72] Attempting to register node juju-c4ad65-7 I0626 19:54:46.207186 56055 kubelet_node_status.go:114] Node juju-c4ad65-7 was previously registered I0626 19:54:46.207212 56055 kubelet_node_status.go:75] Successfully registered node juju-c4ad65-7 I0626 19:54:46.242627 56055 cpu_manager.go:155] [cpumanager] starting with none policy I0626 19:54:46.242652 56055 cpu_manager.go:156] [cpumanager] reconciling every 10s I0626 19:54:46.242663 56055 policy_none.go:42] [cpumanager] none policy: Start F0626 19:54:46.243258 56055 kubelet.go:1384] Failed to start ContainerManager [open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied, open /proc/sys/vm/overcommit_memory: permission denied] snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a snap.kubelet.daemon.service: Failed with result 'exit-code'. snap.kubelet.daemon.service: Service hold-off time over, scheduling restart. snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 4. Stopped Service for snap application kubelet.daemon. snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted Started Service for snap application kubelet.daemon. I0626 19:54:46.526363 56238 controller.go:101] kubelet config controller: starting controller I0626 19:54:46.526803 56238 controller.go:267] kubelet config controller: ensuring filesystem is set up correctly I0626 19:54:46.526829 56238 fsstore.go:59] kubelet config controller: initializing config checkpoints directory "/root/cdk/kubelet/dynamic-config/store" I0626 19:54:46.536453 56238 server.go:407] Version: v1.13.7 I0626 19:54:46.536726 56238 plugins.go:103] No cloud provider specified. I0626 19:54:46.541609 56238 controller.go:207] kubelet config controller: local source is assigned, will not start remote config source informer I0626 19:54:46.541866 56238 controller.go:197] kubelet config controller: starting status sync loop I0626 19:54:46.541991 56238 status.go:145] kubelet config controller: updating Node.Status.Config I0626 19:54:46.541995 56238 controller.go:231] kubelet config controller: starting Kubelet config sync loop I0626 19:54:46.541866 56238 controller.go:226] kubelet config controller: starting Node informer I0626 19:54:46.558132 56238 watch.go:89] kubelet config controller: initial Node watch event I0626 19:54:46.582021 56238 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to / I0626 19:54:46.582435 56238 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: [] I0626 19:54:46.582457 56238 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms} I0626 19:54:46.582784 56238 container_manager_linux.go:272] Creating device plugin manager: true I0626 19:54:46.582832 56238 state_mem.go:36] [cpumanager] initializing new in-memory state store I0626 19:54:46.582991 56238 state_mem.go:84] [cpumanager] updated default cpuset: "" I0626 19:54:46.583139 56238 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]" W0626 19:54:46.783619 56238 server.go:740] write /proc/self/oom_score_adj: permission denied I0626 19:54:46.783719 56238 kubelet.go:306] Watching apiserver I0626 19:54:46.786180 56238 client.go:75] Connecting to docker on unix:///var/run/docker.sock I0626 19:54:46.786218 56238 client.go:104] Start docker client with request timeout=2m0s W0626 19:54:46.787842 56238 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth" I0626 19:54:46.787878 56238 docker_service.go:236] Hairpin mode set to "hairpin-veth" W0626 19:54:46.792887 56238 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup. 
I0626 19:54:46.793153 56238 docker_service.go:251] Docker cri networking managed by cni I0626 19:54:46.807376 56238 docker_service.go:256] Docker Info: &{ID:EX2C:WFGA:EJNX:77IH:DO4G:YDZY:MST3:56PA:7XU5:32DM:6YSV:YDGQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:vfs DriverStatus:[] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-06-26T19:54:46.794387766Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-52-generic OperatingSystem:Ubuntu 18.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000130690 NCPU:4 MemTotal:4096000000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:juju-c4ad65-7 Labels:[] ExperimentalBuild:false ServerVersion:18.09.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:N/A Expected:N/A} InitCommit:{ID:v0.18.0 Expected:fec3683b971d9c3ef73f284f176672c44b448662} SecurityOptions:[name=apparmor name=seccomp,profile=default]} I0626 19:54:46.807655 56238 docker_service.go:269] Setting cgroupDriver to cgroupfs I0626 19:54:46.825982 56238 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.09.5, apiVersion: 1.39.0 I0626 19:54:46.827712 56238 server.go:999] Started kubelet I0626 19:54:46.827848 56238 server.go:137] Starting to listen on 0.0.0.0:10250 I0626 19:54:46.829400 56238 server.go:333] Adding debug handlers to kubelet server. E0626 19:54:46.827979 56238 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache I0626 19:54:46.828588 56238 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer I0626 19:54:46.830504 56238 status_manager.go:152] Starting to sync pod status with apiserver I0626 19:54:46.830522 56238 kubelet.go:1829] Starting kubelet main sync loop. 
I0626 19:54:46.830534 56238 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
I0626 19:54:46.830632 56238 volume_manager.go:248] Starting Kubelet Volume Manager
I0626 19:54:46.832105 56238 desired_state_of_world_populator.go:130] Desired state populator starts to run
W0626 19:54:46.859679 56238 manager.go:349] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory
I0626 19:54:46.931630 56238 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet]
I0626 19:54:46.931659 56238 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
I0626 19:54:46.933927 56238 kubelet_node_status.go:72] Attempting to register node juju-c4ad65-7
I0626 19:54:46.948428 56238 kubelet_node_status.go:114] Node juju-c4ad65-7 was previously registered
I0626 19:54:46.948456 56238 kubelet_node_status.go:75] Successfully registered node juju-c4ad65-7
I0626 19:54:46.990806 56238 cpu_manager.go:155] [cpumanager] starting with none policy
I0626 19:54:46.990842 56238 cpu_manager.go:156] [cpumanager] reconciling every 10s
I0626 19:54:46.990855 56238 policy_none.go:42] [cpumanager] none policy: Start
F0626 19:54:46.991461 56238 kubelet.go:1384] Failed to start ContainerManager [open /proc/sys/vm/overcommit_memory: permission denied, open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied]
snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a
snap.kubelet.daemon.service: Failed with result 'exit-code'.
snap.kubelet.daemon.service: Service hold-off time over, scheduling restart.
snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 5.
Stopped Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Start request repeated too quickly.
snap.kubelet.daemon.service: Failed with result 'exit-code'.
Failed to start Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
Started Service for snap application kubelet.daemon.
I0626 19:55:03.084203 59896 controller.go:101] kubelet config controller: starting controller
I0626 19:55:03.085906 59896 controller.go:267] kubelet config controller: ensuring filesystem is set up correctly
I0626 19:55:03.086069 59896 fsstore.go:59] kubelet config controller: initializing config checkpoints directory "/root/cdk/kubelet/dynamic-config/store"
I0626 19:55:03.097852 59896 server.go:407] Version: v1.13.7
I0626 19:55:03.098142 59896 plugins.go:103] No cloud provider specified.
I0626 19:55:03.100414 59896 controller.go:207] kubelet config controller: local source is assigned, will not start remote config source informer
I0626 19:55:03.100701 59896 controller.go:197] kubelet config controller: starting status sync loop
I0626 19:55:03.100739 59896 status.go:145] kubelet config controller: updating Node.Status.Config
I0626 19:55:03.100772 59896 controller.go:226] kubelet config controller: starting Node informer
I0626 19:55:03.101240 59896 controller.go:231] kubelet config controller: starting Kubelet config sync loop
I0626 19:55:03.112900 59896 watch.go:89] kubelet config controller: initial Node watch event
I0626 19:55:03.153040 59896 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to / I0626 19:55:03.153437 59896 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: [] I0626 19:55:03.153459 59896 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms} I0626 19:55:03.153713 59896 container_manager_linux.go:272] Creating device plugin manager: true I0626 19:55:03.153765 59896 state_mem.go:36] [cpumanager] initializing new in-memory state store I0626 19:55:03.153901 59896 state_mem.go:84] [cpumanager] updated default cpuset: "" I0626 19:55:03.154040 59896 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]" W0626 19:55:03.354706 59896 server.go:740] write /proc/self/oom_score_adj: permission denied I0626 19:55:03.354788 59896 kubelet.go:306] Watching apiserver I0626 19:55:03.358655 59896 client.go:75] Connecting to docker on unix:///var/run/docker.sock I0626 19:55:03.358702 59896 client.go:104] Start docker client with request timeout=2m0s W0626 19:55:03.361713 59896 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth" I0626 19:55:03.361754 59896 docker_service.go:236] Hairpin mode set to "hairpin-veth" W0626 19:55:03.370663 59896 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup. 
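The "Start request repeated too quickly." / "Failed to start Service for snap application kubelet.daemon." pair further up shows systemd hitting its restart rate limit after the fifth crash, which is why there is a gap of roughly 16 seconds before the next attempt; something (the charm or an operator) started the unit again at 19:55:03. If the unit ever stays down after that message, it has to be cleared and started by hand, for example (sketch, plain systemctl on the worker):

juju run --unit kubernetes-worker/0 -- systemctl status snap.kubelet.daemon.service --no-pager
juju run --unit kubernetes-worker/0 -- systemctl reset-failed snap.kubelet.daemon.service
juju run --unit kubernetes-worker/0 -- systemctl start snap.kubelet.daemon.service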
I0626 19:55:03.370860 59896 docker_service.go:251] Docker cri networking managed by cni I0626 19:55:03.387966 59896 docker_service.go:256] Docker Info: &{ID:EX2C:WFGA:EJNX:77IH:DO4G:YDZY:MST3:56PA:7XU5:32DM:6YSV:YDGQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:vfs DriverStatus:[] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-06-26T19:55:03.371957908Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-52-generic OperatingSystem:Ubuntu 18.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc00073da40 NCPU:4 MemTotal:4096000000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:juju-c4ad65-7 Labels:[] ExperimentalBuild:false ServerVersion:18.09.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:N/A Expected:N/A} InitCommit:{ID:v0.18.0 Expected:fec3683b971d9c3ef73f284f176672c44b448662} SecurityOptions:[name=apparmor name=seccomp,profile=default]} I0626 19:55:03.388062 59896 docker_service.go:269] Setting cgroupDriver to cgroupfs I0626 19:55:03.405043 59896 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.09.5, apiVersion: 1.39.0 I0626 19:55:03.413240 59896 server.go:999] Started kubelet E0626 19:55:03.413673 59896 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache I0626 19:55:03.413673 59896 server.go:137] Starting to listen on 0.0.0.0:10250 I0626 19:55:03.414424 59896 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer I0626 19:55:03.414453 59896 status_manager.go:152] Starting to sync pod status with apiserver I0626 19:55:03.414465 59896 kubelet.go:1829] Starting kubelet main sync loop. I0626 19:55:03.414628 59896 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful] I0626 19:55:03.414816 59896 volume_manager.go:248] Starting Kubelet Volume Manager I0626 19:55:03.415134 59896 desired_state_of_world_populator.go:130] Desired state populator starts to run I0626 19:55:03.415900 59896 server.go:333] Adding debug handlers to kubelet server. 
W0626 19:55:03.437196 59896 manager.go:349] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory I0626 19:55:03.514733 59896 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet] I0626 19:55:03.515013 59896 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach I0626 19:55:03.517099 59896 kubelet_node_status.go:72] Attempting to register node juju-c4ad65-7 I0626 19:55:03.536924 59896 kubelet_node_status.go:114] Node juju-c4ad65-7 was previously registered I0626 19:55:03.538032 59896 kubelet_node_status.go:75] Successfully registered node juju-c4ad65-7 I0626 19:55:03.565830 59896 cpu_manager.go:155] [cpumanager] starting with none policy I0626 19:55:03.565851 59896 cpu_manager.go:156] [cpumanager] reconciling every 10s I0626 19:55:03.565876 59896 policy_none.go:42] [cpumanager] none policy: Start F0626 19:55:03.566470 59896 kubelet.go:1384] Failed to start ContainerManager [open /proc/sys/vm/overcommit_memory: permission denied, open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied] snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a snap.kubelet.daemon.service: Failed with result 'exit-code'. snap.kubelet.daemon.service: Service hold-off time over, scheduling restart. snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 1. Stopped Service for snap application kubelet.daemon. snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted Started Service for snap application kubelet.daemon. I0626 19:55:04.031089 60214 controller.go:101] kubelet config controller: starting controller I0626 19:55:04.031603 60214 controller.go:267] kubelet config controller: ensuring filesystem is set up correctly I0626 19:55:04.031630 60214 fsstore.go:59] kubelet config controller: initializing config checkpoints directory "/root/cdk/kubelet/dynamic-config/store" I0626 19:55:04.040865 60214 server.go:407] Version: v1.13.7 I0626 19:55:04.041311 60214 plugins.go:103] No cloud provider specified. I0626 19:55:04.044798 60214 controller.go:207] kubelet config controller: local source is assigned, will not start remote config source informer I0626 19:55:04.046189 60214 controller.go:197] kubelet config controller: starting status sync loop I0626 19:55:04.046396 60214 status.go:145] kubelet config controller: updating Node.Status.Config I0626 19:55:04.047361 60214 controller.go:226] kubelet config controller: starting Node informer I0626 19:55:04.047452 60214 controller.go:231] kubelet config controller: starting Kubelet config sync loop I0626 19:55:04.069843 60214 watch.go:89] kubelet config controller: initial Node watch event I0626 19:55:04.099443 60214 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to / I0626 19:55:04.099931 60214 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: [] I0626 19:55:04.099960 60214 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms} I0626 19:55:04.100127 60214 container_manager_linux.go:272] Creating device plugin manager: true I0626 19:55:04.100186 60214 state_mem.go:36] [cpumanager] initializing new in-memory state store I0626 19:55:04.100345 60214 state_mem.go:84] [cpumanager] updated default cpuset: "" I0626 19:55:04.100637 60214 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]" W0626 19:55:04.301302 60214 server.go:740] write /proc/self/oom_score_adj: permission denied I0626 19:55:04.301368 60214 kubelet.go:306] Watching apiserver I0626 19:55:04.304233 60214 client.go:75] Connecting to docker on unix:///var/run/docker.sock I0626 19:55:04.304280 60214 client.go:104] Start docker client with request timeout=2m0s W0626 19:55:04.305833 60214 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth" I0626 19:55:04.305857 60214 docker_service.go:236] Hairpin mode set to "hairpin-veth" W0626 19:55:04.309378 60214 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup. 
I0626 19:55:04.309599 60214 docker_service.go:251] Docker cri networking managed by cni I0626 19:55:04.326317 60214 docker_service.go:256] Docker Info: &{ID:EX2C:WFGA:EJNX:77IH:DO4G:YDZY:MST3:56PA:7XU5:32DM:6YSV:YDGQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:vfs DriverStatus:[] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-06-26T19:55:04.3107259Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-52-generic OperatingSystem:Ubuntu 18.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000846380 NCPU:4 MemTotal:4096000000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:juju-c4ad65-7 Labels:[] ExperimentalBuild:false ServerVersion:18.09.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:N/A Expected:N/A} InitCommit:{ID:v0.18.0 Expected:fec3683b971d9c3ef73f284f176672c44b448662} SecurityOptions:[name=apparmor name=seccomp,profile=default]} I0626 19:55:04.326426 60214 docker_service.go:269] Setting cgroupDriver to cgroupfs I0626 19:55:04.343762 60214 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.09.5, apiVersion: 1.39.0 I0626 19:55:04.345475 60214 server.go:999] Started kubelet E0626 19:55:04.345950 60214 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache I0626 19:55:04.346681 60214 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer I0626 19:55:04.346713 60214 status_manager.go:152] Starting to sync pod status with apiserver I0626 19:55:04.346727 60214 server.go:137] Starting to listen on 0.0.0.0:10250 I0626 19:55:04.346744 60214 kubelet.go:1829] Starting kubelet main sync loop. I0626 19:55:04.346765 60214 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful] I0626 19:55:04.346864 60214 volume_manager.go:248] Starting Kubelet Volume Manager I0626 19:55:04.347831 60214 desired_state_of_world_populator.go:130] Desired state populator starts to run I0626 19:55:04.348479 60214 server.go:333] Adding debug handlers to kubelet server. 
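Each attempt does get far enough to register juju-c4ad65-7 with the API server before dying, so the node will be visible to the cluster but keep flapping to NotReady. From any machine holding the cluster kubeconfig this can be watched with (sketch):

kubectl get node juju-c4ad65-7
kubectl describe node juju-c4ad65-7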
W0626 19:55:04.374453 60214 manager.go:349] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory I0626 19:55:04.449186 60214 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach I0626 19:55:04.449198 60214 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet] I0626 19:55:04.451714 60214 kubelet_node_status.go:72] Attempting to register node juju-c4ad65-7 I0626 19:55:04.463372 60214 kubelet_node_status.go:114] Node juju-c4ad65-7 was previously registered I0626 19:55:04.463577 60214 kubelet_node_status.go:75] Successfully registered node juju-c4ad65-7 I0626 19:55:04.498780 60214 cpu_manager.go:155] [cpumanager] starting with none policy I0626 19:55:04.498804 60214 cpu_manager.go:156] [cpumanager] reconciling every 10s I0626 19:55:04.498815 60214 policy_none.go:42] [cpumanager] none policy: Start F0626 19:55:04.499411 60214 kubelet.go:1384] Failed to start ContainerManager [open /proc/sys/vm/overcommit_memory: permission denied, open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied] snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a snap.kubelet.daemon.service: Failed with result 'exit-code'. snap.kubelet.daemon.service: Service hold-off time over, scheduling restart. snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 2. Stopped Service for snap application kubelet.daemon. snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted Started Service for snap application kubelet.daemon. I0626 19:55:04.784460 60389 controller.go:101] kubelet config controller: starting controller I0626 19:55:04.784942 60389 controller.go:267] kubelet config controller: ensuring filesystem is set up correctly I0626 19:55:04.784978 60389 fsstore.go:59] kubelet config controller: initializing config checkpoints directory "/root/cdk/kubelet/dynamic-config/store" I0626 19:55:04.794061 60389 server.go:407] Version: v1.13.7 I0626 19:55:04.794363 60389 plugins.go:103] No cloud provider specified. I0626 19:55:04.796813 60389 controller.go:207] kubelet config controller: local source is assigned, will not start remote config source informer I0626 19:55:04.796974 60389 controller.go:226] kubelet config controller: starting Node informer I0626 19:55:04.797132 60389 controller.go:231] kubelet config controller: starting Kubelet config sync loop I0626 19:55:04.796995 60389 controller.go:197] kubelet config controller: starting status sync loop I0626 19:55:04.797188 60389 status.go:145] kubelet config controller: updating Node.Status.Config I0626 19:55:04.809634 60389 watch.go:89] kubelet config controller: initial Node watch event I0626 19:55:04.850341 60389 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /
I0626 19:55:04.850956 60389 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: []
I0626 19:55:04.850993 60389 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
I0626 19:55:04.851176 60389 container_manager_linux.go:272] Creating device plugin manager: true
I0626 19:55:04.851236 60389 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0626 19:55:04.851449 60389 state_mem.go:84] [cpumanager] updated default cpuset: ""
I0626 19:55:04.851473 60389 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
W0626 19:55:05.052033 60389 server.go:740] write /proc/self/oom_score_adj: permission denied
I0626 19:55:05.052139 60389 kubelet.go:306] Watching apiserver
I0626 19:55:05.057100 60389 client.go:75] Connecting to docker on unix:///var/run/docker.sock
I0626 19:55:05.057152 60389 client.go:104] Start docker client with request timeout=2m0s
W0626 19:55:05.059761 60389 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0626 19:55:05.059814 60389 docker_service.go:236] Hairpin mode set to "hairpin-veth"
W0626 19:55:05.064615 60389 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
I0626 19:55:05.065194 60389 docker_service.go:251] Docker cri networking managed by cni
I0626 19:55:05.085161 60389 docker_service.go:256] Docker Info: &{ID:EX2C:WFGA:EJNX:77IH:DO4G:YDZY:MST3:56PA:7XU5:32DM:6YSV:YDGQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:vfs DriverStatus:[] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-06-26T19:55:05.066915102Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-52-generic OperatingSystem:Ubuntu 18.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000689b90 NCPU:4 MemTotal:4096000000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:juju-c4ad65-7 Labels:[] ExperimentalBuild:false ServerVersion:18.09.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:N/A Expected:N/A} InitCommit:{ID:v0.18.0 Expected:fec3683b971d9c3ef73f284f176672c44b448662} SecurityOptions:[name=apparmor name=seccomp,profile=default]}
I0626 19:55:05.085298 60389 docker_service.go:269] Setting cgroupDriver to cgroupfs
I0626 19:55:05.111858 60389 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.09.5, apiVersion: 1.39.0
I0626 19:55:05.113531 60389 server.go:999] Started kubelet
E0626 19:55:05.113711 60389 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
I0626 19:55:05.113752 60389 server.go:137] Starting to listen on 0.0.0.0:10250
I0626 19:55:05.114485 60389 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0626 19:55:05.114515 60389 status_manager.go:152] Starting to sync pod status with apiserver
I0626 19:55:05.114542 60389 kubelet.go:1829] Starting kubelet main sync loop.
I0626 19:55:05.114561 60389 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
I0626 19:55:05.114782 60389 server.go:333] Adding debug handlers to kubelet server.
I0626 19:55:05.114805 60389 desired_state_of_world_populator.go:130] Desired state populator starts to run
I0626 19:55:05.114793 60389 volume_manager.go:248] Starting Kubelet Volume Manager
W0626 19:55:05.136216 60389 manager.go:349] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory
I0626 19:55:05.216988 60389 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
I0626 19:55:05.216982 60389 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet]
I0626 19:55:05.219132 60389 kubelet_node_status.go:72] Attempting to register node juju-c4ad65-7
I0626 19:55:05.234640 60389 kubelet_node_status.go:114] Node juju-c4ad65-7 was previously registered
I0626 19:55:05.234680 60389 kubelet_node_status.go:75] Successfully registered node juju-c4ad65-7
I0626 19:55:05.260294 60389 cpu_manager.go:155] [cpumanager] starting with none policy
I0626 19:55:05.260318 60389 cpu_manager.go:156] [cpumanager] reconciling every 10s
I0626 19:55:05.260331 60389 policy_none.go:42] [cpumanager] none policy: Start
F0626 19:55:05.260906 60389 kubelet.go:1384] Failed to start ContainerManager [open /proc/sys/vm/overcommit_memory: permission denied, open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied]
snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a
snap.kubelet.daemon.service: Failed with result 'exit-code'.
snap.kubelet.daemon.service: Service hold-off time over, scheduling restart.
snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 3.
Stopped Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
Started Service for snap application kubelet.daemon.
I0626 19:55:05.539365 60473 controller.go:101] kubelet config controller: starting controller
I0626 19:55:05.541163 60473 controller.go:267] kubelet config controller: ensuring filesystem is set up correctly
I0626 19:55:05.541425 60473 fsstore.go:59] kubelet config controller: initializing config checkpoints directory "/root/cdk/kubelet/dynamic-config/store"
I0626 19:55:05.552260 60473 server.go:407] Version: v1.13.7
I0626 19:55:05.552437 60473 plugins.go:103] No cloud provider specified.
I0626 19:55:05.554971 60473 controller.go:207] kubelet config controller: local source is assigned, will not start remote config source informer
I0626 19:55:05.555099 60473 controller.go:197] kubelet config controller: starting status sync loop
I0626 19:55:05.555119 60473 status.go:145] kubelet config controller: updating Node.Status.Config
I0626 19:55:05.555557 60473 controller.go:226] kubelet config controller: starting Node informer
I0626 19:55:05.555609 60473 controller.go:231] kubelet config controller: starting Kubelet config sync loop
I0626 19:55:05.567075 60473 watch.go:89] kubelet config controller: initial Node watch event
I0626 19:55:05.607570 60473 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I0626 19:55:05.607960 60473 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: []
I0626 19:55:05.607989 60473 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
I0626 19:55:05.608104 60473 container_manager_linux.go:272] Creating device plugin manager: true
I0626 19:55:05.608146 60473 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0626 19:55:05.608277 60473 state_mem.go:84] [cpumanager] updated default cpuset: ""
I0626 19:55:05.608292 60473 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
W0626 19:55:05.808750 60473 server.go:740] write /proc/self/oom_score_adj: permission denied
I0626 19:55:05.808849 60473 kubelet.go:306] Watching apiserver
I0626 19:55:05.812440 60473 client.go:75] Connecting to docker on unix:///var/run/docker.sock
I0626 19:55:05.812508 60473 client.go:104] Start docker client with request timeout=2m0s
W0626 19:55:05.816036 60473 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0626 19:55:05.816092 60473 docker_service.go:236] Hairpin mode set to "hairpin-veth"
W0626 19:55:05.819937 60473 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
I0626 19:55:05.820215 60473 docker_service.go:251] Docker cri networking managed by cni
I0626 19:55:05.838058 60473 docker_service.go:256] Docker Info: &{ID:EX2C:WFGA:EJNX:77IH:DO4G:YDZY:MST3:56PA:7XU5:32DM:6YSV:YDGQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:vfs DriverStatus:[] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-06-26T19:55:05.821532057Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-52-generic OperatingSystem:Ubuntu 18.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000683d50 NCPU:4 MemTotal:4096000000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:juju-c4ad65-7 Labels:[] ExperimentalBuild:false ServerVersion:18.09.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:N/A Expected:N/A} InitCommit:{ID:v0.18.0 Expected:fec3683b971d9c3ef73f284f176672c44b448662} SecurityOptions:[name=apparmor name=seccomp,profile=default]}
I0626 19:55:05.838199 60473 docker_service.go:269] Setting cgroupDriver to cgroupfs
I0626 19:55:05.866158 60473 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.09.5, apiVersion: 1.39.0
I0626 19:55:05.867385 60473 server.go:999] Started kubelet
I0626 19:55:05.867687 60473 server.go:137] Starting to listen on 0.0.0.0:10250
I0626 19:55:05.868140 60473 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0626 19:55:05.868167 60473 status_manager.go:152] Starting to sync pod status with apiserver
I0626 19:55:05.868201 60473 kubelet.go:1829] Starting kubelet main sync loop.
I0626 19:55:05.868217 60473 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
E0626 19:55:05.868390 60473 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
I0626 19:55:05.868424 60473 server.go:333] Adding debug handlers to kubelet server.
I0626 19:55:05.868527 60473 volume_manager.go:248] Starting Kubelet Volume Manager
I0626 19:55:05.870827 60473 desired_state_of_world_populator.go:130] Desired state populator starts to run
W0626 19:55:05.889733 60473 manager.go:349] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory
I0626 19:55:05.968434 60473 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet]
I0626 19:55:05.968703 60473 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
I0626 19:55:05.971246 60473 kubelet_node_status.go:72] Attempting to register node juju-c4ad65-7
I0626 19:55:05.991940 60473 kubelet_node_status.go:114] Node juju-c4ad65-7 was previously registered
I0626 19:55:05.991975 60473 kubelet_node_status.go:75] Successfully registered node juju-c4ad65-7
I0626 19:55:06.003526 60473 cpu_manager.go:155] [cpumanager] starting with none policy
I0626 19:55:06.003549 60473 cpu_manager.go:156] [cpumanager] reconciling every 10s
I0626 19:55:06.003561 60473 policy_none.go:42] [cpumanager] none policy: Start
F0626 19:55:06.004203 60473 kubelet.go:1384] Failed to start ContainerManager [open /proc/sys/vm/overcommit_memory: permission denied, open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied]
snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a
snap.kubelet.daemon.service: Failed with result 'exit-code'.
snap.kubelet.daemon.service: Service hold-off time over, scheduling restart.
snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 4.
Stopped Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Failed to reset devices.list: Operation not permitted
Started Service for snap application kubelet.daemon.
I0626 19:55:06.276396 60563 controller.go:101] kubelet config controller: starting controller
I0626 19:55:06.276682 60563 controller.go:267] kubelet config controller: ensuring filesystem is set up correctly
I0626 19:55:06.276705 60563 fsstore.go:59] kubelet config controller: initializing config checkpoints directory "/root/cdk/kubelet/dynamic-config/store"
I0626 19:55:06.286405 60563 server.go:407] Version: v1.13.7
I0626 19:55:06.286702 60563 plugins.go:103] No cloud provider specified.
I0626 19:55:06.291237 60563 controller.go:207] kubelet config controller: local source is assigned, will not start remote config source informer
I0626 19:55:06.291460 60563 controller.go:197] kubelet config controller: starting status sync loop
I0626 19:55:06.291493 60563 status.go:145] kubelet config controller: updating Node.Status.Config
I0626 19:55:06.291545 60563 controller.go:226] kubelet config controller: starting Node informer
I0626 19:55:06.291918 60563 controller.go:231] kubelet config controller: starting Kubelet config sync loop
I0626 19:55:06.313662 60563 watch.go:89] kubelet config controller: initial Node watch event
I0626 19:55:06.346649 60563 server.go:666] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
I0626 19:55:06.347300 60563 container_manager_linux.go:248] container manager verified user specified cgroup-root exists: []
I0626 19:55:06.347345 60563 container_manager_linux.go:253] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms}
I0626 19:55:06.347557 60563 container_manager_linux.go:272] Creating device plugin manager: true
I0626 19:55:06.347636 60563 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0626 19:55:06.347862 60563 state_mem.go:84] [cpumanager] updated default cpuset: ""
I0626 19:55:06.347890 60563 state_mem.go:92] [cpumanager] updated cpuset assignments: "map[]"
W0626 19:55:06.548444 60563 server.go:740] write /proc/self/oom_score_adj: permission denied
I0626 19:55:06.548532 60563 kubelet.go:306] Watching apiserver
I0626 19:55:06.553412 60563 client.go:75] Connecting to docker on unix:///var/run/docker.sock
I0626 19:55:06.553468 60563 client.go:104] Start docker client with request timeout=2m0s
W0626 19:55:06.557359 60563 docker_service.go:540] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
I0626 19:55:06.557812 60563 docker_service.go:236] Hairpin mode set to "hairpin-veth"
W0626 19:55:06.561196 60563 hostport_manager.go:68] The binary conntrack is not installed, this can cause failures in network connection cleanup.
I0626 19:55:06.561400 60563 docker_service.go:251] Docker cri networking managed by cni
I0626 19:55:06.576535 60563 docker_service.go:256] Docker Info: &{ID:EX2C:WFGA:EJNX:77IH:DO4G:YDZY:MST3:56PA:7XU5:32DM:6YSV:YDGQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:vfs DriverStatus:[] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6tables:false Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-06-26T19:55:06.562517032Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.15.0-52-generic OperatingSystem:Ubuntu 18.04.2 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0007fe620 NCPU:4 MemTotal:4096000000 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:juju-c4ad65-7 Labels:[] ExperimentalBuild:false ServerVersion:18.09.5 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID: Expected:} RuncCommit:{ID:N/A Expected:N/A} InitCommit:{ID:v0.18.0 Expected:fec3683b971d9c3ef73f284f176672c44b448662} SecurityOptions:[name=apparmor name=seccomp,profile=default]}
I0626 19:55:06.576665 60563 docker_service.go:269] Setting cgroupDriver to cgroupfs
I0626 19:55:06.597546 60563 kuberuntime_manager.go:198] Container runtime docker initialized, version: 18.09.5, apiVersion: 1.39.0
I0626 19:55:06.600015 60563 server.go:999] Started kubelet
E0626 19:55:06.600160 60563 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
I0626 19:55:06.600528 60563 server.go:137] Starting to listen on 0.0.0.0:10250
I0626 19:55:06.601194 60563 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0626 19:55:06.601264 60563 status_manager.go:152] Starting to sync pod status with apiserver
I0626 19:55:06.601285 60563 kubelet.go:1829] Starting kubelet main sync loop.
I0626 19:55:06.601318 60563 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
I0626 19:55:06.601695 60563 server.go:333] Adding debug handlers to kubelet server.
I0626 19:55:06.601798 60563 volume_manager.go:248] Starting Kubelet Volume Manager
I0626 19:55:06.602702 60563 desired_state_of_world_populator.go:130] Desired state populator starts to run
W0626 19:55:06.626412 60563 manager.go:349] Could not configure a source for OOM detection, disabling OOM events: open /dev/kmsg: no such file or directory
I0626 19:55:06.701535 60563 kubelet.go:1846] skipping pod synchronization - [container runtime status check may not have completed yet]
I0626 19:55:06.701966 60563 kubelet_node_status.go:279] Setting node annotation to enable volume controller attach/detach
I0626 19:55:06.704340 60563 kubelet_node_status.go:72] Attempting to register node juju-c4ad65-7
I0626 19:55:06.724898 60563 kubelet_node_status.go:114] Node juju-c4ad65-7 was previously registered
I0626 19:55:06.724924 60563 kubelet_node_status.go:75] Successfully registered node juju-c4ad65-7
I0626 19:55:06.744637 60563 cpu_manager.go:155] [cpumanager] starting with none policy
I0626 19:55:06.744658 60563 cpu_manager.go:156] [cpumanager] reconciling every 10s
I0626 19:55:06.744669 60563 policy_none.go:42] [cpumanager] none policy: Start
F0626 19:55:06.745273 60563 kubelet.go:1384] Failed to start ContainerManager [open /proc/sys/vm/overcommit_memory: permission denied, open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied]
snap.kubelet.daemon.service: Main process exited, code=exited, status=255/n/a
snap.kubelet.daemon.service: Failed with result 'exit-code'.
snap.kubelet.daemon.service: Service hold-off time over, scheduling restart.
snap.kubelet.daemon.service: Scheduled restart job, restart counter is at 5.
Stopped Service for snap application kubelet.daemon.
snap.kubelet.daemon.service: Start request repeated too quickly.
snap.kubelet.daemon.service: Failed with result 'exit-code'.
Failed to start Service for snap application kubelet.daemon.
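At this point systemd has given up: the unit hit its restart limit within the hold-off window ("Start request repeated too quickly"), so the kubelet stays down until something starts it again. Note that the fatal line has also changed since the earlier swap failures: the kubelet now exits because it cannot write the sysctls it sets at startup (/proc/sys/vm/overcommit_memory, /proc/sys/kernel/panic, /proc/sys/kernel/panic_on_oops). Combined with the "write /proc/self/oom_score_adj: permission denied" and "open /dev/kmsg: no such file or directory" warnings, this is the pattern produced when the worker runs inside an unprivileged container whose /proc and /sys are read-only to the guest.

The following is a minimal sketch of how one might confirm and work around that, assuming the worker really is a Juju-created LXD container on this host and that running it with relaxed confinement is acceptable. The instance name juju-c4ad65-7 is taken from the node name in the log and may not match the actual LXD instance name; the profile values are illustrative, not taken from this deployment.

# On the LXD host: inspect the container's current confinement
lxc config show juju-c4ad65-7 --expanded | grep -E 'security\.|raw\.lxc|linux\.kernel_modules'

# Relax confinement so the kubelet can write its sysctls (example values;
# trim these to whatever your security policy allows)
lxc config set juju-c4ad65-7 security.privileged true
lxc config set juju-c4ad65-7 security.nesting true
lxc config set juju-c4ad65-7 linux.kernel_modules ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
lxc config set juju-c4ad65-7 raw.lxc $'lxc.apparmor.profile=unconfined\nlxc.mount.auto=proc:rw sys:rw\nlxc.cgroup.devices.allow=a\nlxc.cap.drop='
lxc restart juju-c4ad65-7

# Back on the Juju client: clear the "start request repeated too quickly"
# state, start the kubelet again, and confirm it stays up
juju run --unit kubernetes-worker/0 -- systemctl reset-failed snap.kubelet.daemon
juju run --unit kubernetes-worker/0 -- systemctl start snap.kubelet.daemon
juju run --unit kubernetes-worker/0 -- systemctl is-active snap.kubelet.daemon

If relaxing the container's confinement is not an option, placing the kubernetes-worker unit on a KVM machine or a bare-metal MAAS node avoids the problem entirely, since the kubelet then has a writable /proc/sys of its own.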