2021-03-15T10:54:11.674 controller-0 kernel: info [ 49.314242] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN)
2021-03-15T10:54:11.674 controller-0 kernel: info [ 52.076557] systemd[1]: Detected virtualization kvm.
2021-03-15T10:54:11.674 controller-0 kernel: info [ 52.699101] systemd[1]: Detected architecture x86-64.
2021-03-15T10:54:11.674 controller-0 kernel: info [ 53.519794] systemd[1]: Running in initial RAM disk.
2021-03-15T10:54:11.674 controller-0 kernel: info [ 55.135773] systemd[1]: Set hostname to .
2021-03-15T10:54:11.674 controller-0 kernel: info [ 64.785911] systemd[1]: Reached target Local File Systems.
2021-03-15T10:54:11.674 controller-0 kernel: info [ 65.588732] systemd[1]: Reached target Timers.
2021-03-15T10:54:11.674 controller-0 kernel: info [ 67.139438] systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
2021-03-15T10:54:11.674 controller-0 kernel: info [ 69.491159] systemd[1]: Reached target Paths.
2021-03-15T10:54:11.674 controller-0 kernel: info [ 70.548251] systemd[1]: Reached target Swap.
2021-03-15T10:54:11.674 controller-0 kernel: info [ 71.491530] systemd[1]: Created slice Root Slice.
2021-03-15T10:54:11.674 controller-0 kernel: info [ 72.438238] systemd[1]: Listening on udev Kernel Socket.
2021-03-15T10:54:11.674 controller-0 kernel: info [ 74.010307] systemd[1]: Created slice System Slice.
2021-03-15T10:54:11.674 controller-0 kernel: info [ 75.499445] systemd[1]: Listening on Journal Socket.
2021-03-15T10:54:11.674 controller-0 kernel: info [ 76.393806] systemd[1]: Starting Create list of required static device nodes for the current kernel...
2021-03-15T10:54:11.674 controller-0 kernel: info [ 77.438172] systemd[1]: Starting Journal Service...
2021-03-15T10:54:11.674 controller-0 kernel: info [ 77.453479] systemd[1]: Starting Load Kernel Modules...
2021-03-15T10:54:11.674 controller-0 kernel: info [ 80.214607] systemd[1]: Starting dracut cmdline hook...
2021-03-15T10:54:11.674 controller-0 kernel: info [ 80.279443] systemd[1]: Listening on udev Control Socket.
2021-03-15T10:54:11.674 controller-0 kernel: info [ 82.132106] systemd[1]: Reached target Sockets.
2021-03-15T10:54:11.674 controller-0 kernel: info [ 82.251102] systemd[1]: Reached target Slices.
2021-03-15T10:54:11.674 controller-0 kernel: info [ 83.940868] systemd[1]: Started Journal Service.
2021-03-15T10:54:11.675 controller-0 kernel: info [ 129.689261] systemd[1]: Inserted module 'ip_tables'
2021-03-15T10:54:11.730 controller-0 network[832]: info Bringing up loopback interface: [ OK ]
2021-03-15T10:54:16.654 controller-0 network[832]: info Bringing up interface enp0s3: [ OK ]
2021-03-15T10:54:21.038 controller-0 network[832]: info Bringing up interface enp0s8: Determining if ip address 192.168.204.2 is already in use for device enp0s8...
2021-03-15T10:54:25.242 controller-0 network[832]: info RTNETLINK answers: File exists
2021-03-15T10:54:25.403 controller-0 network[832]: info Determining if ip address 192.168.206.2 is already in use for device enp0s8...
2021-03-15T10:54:30.029 controller-0 network[832]: info [ OK ]
2021-03-15T10:54:30.621 controller-0 network[832]: info Bringing up interface eth1000: [ OK ]
2021-03-15T10:54:30.712 controller-0 network[832]: info Bringing up interface eth1001: [ OK ]
2021-03-15T10:54:30.732 controller-0 systemd[1]: info Started LSB: Bring up/down networking.
2021-03-15T10:54:31.337 controller-0 systemd[1]: info Reached target Network.
2021-03-15T10:54:31.644 controller-0 systemd[1]: info Starting StarlingX Filesystem Common...
2021-03-15T10:54:31.664 controller-0 systemd[1]: info Starting containerd container runtime...
2021-03-15T10:54:31.681 controller-0 systemd[1]: info Starting Dynamic System Tuning Daemon...
2021-03-15T10:54:31.682 controller-0 nfscommon[1466]: info creating NFS state directory: done
2021-03-15T10:54:31.691 controller-0 systemd[1]: info Starting Open-iSCSI...
2021-03-15T10:54:31.697 controller-0 systemd[1]: info Starting LLDP daemon...
2021-03-15T10:54:31.000 controller-0 iscsid: warning iSCSI logger with pid=1481 started!
2021-03-15T10:54:31.704 controller-0 systemd[1]: info Starting Crash Dump Manager...
2021-03-15T10:54:31.000 controller-0 rpc.statd[1487]: notice Version 1.3.0 starting
2021-03-15T10:54:31.709 controller-0 systemd[1]: info Reached target Network is Online.
2021-03-15T10:54:31.000 controller-0 sm-notify[1488]: notice Version 1.3.0 starting
2021-03-15T10:54:31.715 controller-0 systemd[1]: info Starting StarlingX Patching...
2021-03-15T10:54:31.732 controller-0 systemd[1]: info Started memcached daemon.
2021-03-15T10:54:31.741 controller-0 systemd[1]: info Starting Set time via NTP...
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: notice ntpd 4.2.6p5@1.2349-o Fri Jan 29 00:22:00 UTC 2021 (1)
2021-03-15T10:54:31.746 controller-0 systemd[1]: info Starting InfluxDB open-source, distributed, time series database...
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: notice proto: precision = 0.031 usec
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: info 0.0.0.0 c01d 0d kern kernel time sync enabled
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: debug ntp_io: estimated max descriptors: 1024, initial socket boundary: 16
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: info Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: info Listen and drop on 1 v6wildcard :: UDP 123
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: info Listen normally on 2 lo 127.0.0.1 UDP 123
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: info Listen normally on 3 enp0s3 10.20.2.4 UDP 123
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: info Listen normally on 4 enp0s8 192.168.204.2 UDP 123
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: info Listen normally on 5 enp0s8:5 192.168.206.2 UDP 123
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: info Listen normally on 6 lo ::1 UDP 123
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: info Listen normally on 7 enp0s8 fe80::a00:27ff:fe6b:9bdf UDP 123
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: info Listen normally on 8 eth1000 fe80::a00:27ff:fe4a:bc10 UDP 123
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: info Listen normally on 9 enp0s3 fe80::a00:27ff:fef0:7c46 UDP 123
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: info Listening on routing socket on fd #26 for interface updates
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: info 0.0.0.0 c016 06 restart
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: info 0.0.0.0 c012 02 freq_set kernel 0.000 PPM
2021-03-15T10:54:31.000 controller-0 ntpd[1511]: info 0.0.0.0 c011 01 freq_not_set
2021-03-15T10:54:31.755 controller-0 nfscommon[1466]: info starting statd: done
2021-03-15T10:54:31.756 controller-0 systemd[1]: info Starting Notify NFS peers of a restart...
2021-03-15T10:54:31.758 controller-0 nfscommon[1466]: info mount: rpc_pipefs is already mounted or /var/lib/nfs/rpc_pipefs busy
2021-03-15T10:54:31.000 controller-0 sm-notify[1516]: notice Version 1.3.0 starting
2021-03-15T10:54:31.000 controller-0 sm-notify[1516]: notice Already notifying clients; Exiting!
2021-03-15T10:54:31.762 controller-0 nfscommon[1466]: info starting idmapd: done
2021-03-15T10:54:31.765 controller-0 systemd[1]: info Started StarlingX Filesystem Common.
2021-03-15T10:54:31.769 controller-0 systemd[1]: info Started Open-iSCSI.
2021-03-15T10:54:31.777 controller-0 systemd[1]: info Started Crash Dump Manager.
2021-03-15T10:54:31.792 controller-0 systemd[1]: info Started Notify NFS peers of a restart.
2021-03-15T10:54:31.806 controller-0 systemd[1]: info Started containerd container runtime.
2021-03-15T10:54:31.000 controller-0 lldpd[1548]: info /etc/localtime copied to chroot
2021-03-15T10:54:31.814 controller-0 systemd[1]: info Starting Docker Application Container Engine...
2021-03-15T10:54:31.000 controller-0 lldpd[1548]: info protocol LLDP enabled
2021-03-15T10:54:31.000 controller-0 lldpd[1548]: info protocol CDPv1 disabled
2021-03-15T10:54:31.000 controller-0 lldpd[1548]: info protocol CDPv2 disabled
2021-03-15T10:54:31.000 controller-0 lldpd[1548]: info protocol SONMP disabled
2021-03-15T10:54:31.000 controller-0 lldpd[1548]: info protocol EDP disabled
2021-03-15T10:54:31.000 controller-0 lldpd[1548]: info protocol FDP disabled
2021-03-15T10:54:31.000 controller-0 lldpd[1548]: info libevent 2.0.21-stable initialized with epoll method
2021-03-15T10:54:31.822 controller-0 systemd[1]: info Started OpenSSH server daemon.
2021-03-15T10:54:31.839 controller-0 systemd[1]: info Starting Logout off all iSCSI sessions on shutdown...
2021-03-15T10:54:31.844 controller-0 systemd[1]: info Starting Activation of LVM2 logical volumes...
2021-03-15T10:54:31.851 controller-0 systemd[1]: info Starting StarlingX Cloud Filesystem Auto-mounter...
2021-03-15T10:54:31.000 controller-0 lldpcli[1545]: info system name set to new value controller-0:vbox
2021-03-15T10:54:31.000 controller-0 lldpcli[1545]: info transmit delay set to new value
2021-03-15T10:54:31.000 controller-0 lldpcli[1545]: info transmit hold set to new value 4
2021-03-15T10:54:31.000 controller-0 lldpcli[1545]: info iface-pattern set to new value *,!br*,!ovs*,!tap*,!cali*,!tunl*,!docker*
2021-03-15T10:54:31.000 controller-0 lldpcli[1545]: info lldpd should resume operations
2021-03-15T10:54:31.856 controller-0 systemd[1]: info Starting StarlingX Filesystem Server...
2021-03-15T10:54:31.859 controller-0 systemd[1]: info Started LLDP daemon.
2021-03-15T10:54:31.875 controller-0 systemd[1]: info Started Logout off all iSCSI sessions on shutdown.
2021-03-15T10:54:31.881 controller-0 nfsserver[1568]: info exportfs: 192.168.204.3:/etc/platform: Function not implemented
2021-03-15T10:54:31.881 controller-0 systemd[1]: info Started StarlingX Cloud Filesystem Auto-mounter.
2021-03-15T10:54:31.899 controller-0 systemd[1]: info Starting StarlingX Filesystem Initialization...
2021-03-15T10:54:31.899 controller-0 lvm[1579]: info 1 logical volume(s) in volume group "nova-local" now active
2021-03-15T10:54:31.904 controller-0 lvm[1579]: info 12 logical volume(s) in volume group "cgts-vg" now active
2021-03-15T10:54:31.909 controller-0 systemd[1]: info Started Activation of LVM2 logical volumes.
2021-03-15T10:54:31.922 controller-0 systemd[1]: info Started InfluxDB open-source, distributed, time series database.
2021-03-15T10:54:31.923 controller-0 sshd[1551]: info Starting sshd: [ OK ]
2021-03-15T10:54:31.936 controller-0 systemd[1]: info Started StarlingX Filesystem Initialization.
2021-03-15T10:54:31.962 controller-0 systemd[1]: info Reached target Remote File Systems (Pre).
2021-03-15T10:54:31.973 controller-0 systemd[1]: info Reached target Remote File Systems.
2021-03-15T10:54:31.988 controller-0 systemd[1]: info Starting Crash recovery kernel arming...
2021-03-15T10:54:31.995 controller-0 systemd[1]: info Starting Permit User Sessions...
2021-03-15T10:54:32.003 controller-0 systemd[1]: info Started Permit User Sessions.
2021-03-15T10:54:32.011 controller-0 systemd[1]: info Started Dynamic System Tuning Daemon.
2021-03-15T10:54:32.000 controller-0 nfsdcltrack[1641]: err Failed to init database: -13
2021-03-15T10:54:32.031 controller-0 nfsserver[1568]: info starting 8 nfsd kernel threads: done
2021-03-15T10:54:32.071 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.070804832Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2021-03-15T10:54:32.071 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.070863295Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2021-03-15T10:54:32.071 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.070910684Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2021-03-15T10:54:32.071 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.070916823Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2021-03-15T10:54:32.071 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.071232951Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 }]" module=grpc
2021-03-15T10:54:32.071 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.071267433Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2021-03-15T10:54:32.071 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.071304574Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4206d43d0, CONNECTING" module=grpc
2021-03-15T10:54:32.071 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.071528693Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 }]" module=grpc
2021-03-15T10:54:32.071 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.071559369Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2021-03-15T10:54:32.071 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.071599489Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420878180, CONNECTING" module=grpc
2021-03-15T10:54:32.071 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.071711000Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420878180, READY" module=grpc
2021-03-15T10:54:32.075 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.074990837Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
2021-03-15T10:54:32.076 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.076833281Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4206d43d0, READY" module=grpc
2021-03-15T10:54:32.085 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.085902438Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
2021-03-15T10:54:32.086 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.086204758Z" level=warning msg="Your kernel does not support cgroup rt period"
2021-03-15T10:54:32.086 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.086250123Z" level=warning msg="Your kernel does not support cgroup rt runtime"
2021-03-15T10:54:32.086 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.086274070Z" level=warning msg="Your kernel does not support cgroup blkio weight"
2021-03-15T10:54:32.086 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.086306415Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
2021-03-15T10:54:32.087 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.087278935Z" level=info msg="Loading containers: start."
2021-03-15T10:54:32.000 controller-0 rpc.mountd[1692]: notice Version 1.3.0 starting
2021-03-15T10:54:32.089 controller-0 nfsserver[1568]: info starting mountd: done
2021-03-15T10:54:32.092 controller-0 systemd[1]: info Started StarlingX Filesystem Server.
2021-03-15T10:54:32.420 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.420662070Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
2021-03-15T10:54:32.465 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.465176467Z" level=info msg="Loading containers: done."
2021-03-15T10:54:32.476 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.476382536Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version 1.0.0-rc10\nspec: 1.0.1-dev\n"
2021-03-15T10:54:32.485 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.485488689Z" level=info msg="Docker daemon" commit=481bc77 graphdriver(s)=overlay2 version=18.09.6
2021-03-15T10:54:32.485 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.485590911Z" level=info msg="Daemon has completed initialization"
2021-03-15T10:54:32.491 controller-0 dockerd[1544]: info time="2021-03-15T10:54:32.491343068Z" level=info msg="API listen on /var/run/docker.sock"
2021-03-15T10:54:32.493 controller-0 systemd[1]: info Started Docker Application Container Engine.
2021-03-15T10:54:32.000 controller-0 iscsid: err iSCSI daemon with pid=1483 started!
2021-03-15T10:54:33.000 controller-0 ntpd[1511]: info Listen normally on 10 docker0 172.17.0.1 UDP 123
2021-03-15T10:54:33.000 controller-0 ntpd[1511]: info Listen normally on 11 eth1001 fe80::a00:27ff:fe2e:586d UDP 123
2021-03-15T10:54:33.000 controller-0 ntpd[1511]: debug new interface(s) found: waking up resolver
2021-03-15T10:54:33.925 controller-0 kdumpctl[1624]: info kexec: loaded kdump kernel
2021-03-15T10:54:33.925 controller-0 kdumpctl[1624]: info Starting kdump: [OK]
2021-03-15T10:54:33.925 controller-0 systemd[1]: info Started Crash recovery kernel arming.
2021-03-15T10:54:34.707 controller-0 sw-patch[1495]: info Checking for software updates...
2021-03-15T10:54:34.713 controller-0 sw-patch[1495]: info Nothing to install.
2021-03-15T10:54:34.713 controller-0 systemd[1]: info Started StarlingX Patching.
2021-03-15T10:54:34.714 controller-0 systemd[1]: info Starting StarlingX System Inventory Agent...
2021-03-15T10:54:34.731 controller-0 sysinv-agent[2103]: info Setting up config for sysinv-agent: Installing virtio_net driver: OK
2021-03-15T10:54:34.733 controller-0 sysinv-agent[2103]: info Starting sysinv-agent: OK
2021-03-15T10:54:34.747 controller-0 systemd[1]: info Started KVM Timer Advance Setup.
2021-03-15T10:54:34.755 controller-0 systemd[1]: info Starting StarlingX Patching Controller...
2021-03-15T10:54:34.775 controller-0 systemd[1]: info Started Fault Management REST API Service.
2021-03-15T10:54:34.782 controller-0 systemd[1]: info Starting StarlingX Affine Platform...
2021-03-15T10:54:34.788 controller-0 systemd[1]: info Starting StarlingX Log Management...
2021-03-15T10:54:34.797 controller-0 systemd[1]: info Started StarlingX System Inventory Agent.
2021-03-15T10:54:34.806 controller-0 systemd[1]: notice kvm_timer_advance_setup.service: main process exited, code=exited, status=1/FAILURE
2021-03-15T10:54:34.808 controller-0 fm-api[2122]: info OK
2021-03-15T10:54:34.808 controller-0 systemd[1]: info Starting StarlingX PCI Interrupt Affinity Agent...
2021-03-15T10:54:34.813 controller-0 systemd[1]: info Started controllerconfig service.
2021-03-15T10:54:34.818 controller-0 systemd[1]: info Starting StarlingX FPGA Agent...
2021-03-15T10:54:34.823 controller-0 pci-irq-affinity-agent[2149]: info Setting up config for pci-irq-affinity-agent: Starting pci-irq-affinity-agent: OK
2021-03-15T10:54:34.828 controller-0 controller_config[2151]: info Configuring controller node...
2021-03-15T10:54:34.831 controller-0 systemd[1]: info Started StarlingX PCI Interrupt Affinity Agent.
2021-03-15T10:54:34.896 controller-0 controller_config[2151]: info Checking connectivity to controller-platform-nfs for up to 70 seconds over interface 192.168.204.2
2021-03-15T10:54:34.921 controller-0 sysinv-fpga-agent[2158]: info Waiting for sysinv config fileStarting sysinv-fpga-agent: OK
2021-03-15T10:54:34.842 controller-0 systemd[1]: info Started StarlingX FPGA Agent.
2021-03-15T10:54:34.921 controller-0 sysinv-fpga-agent[2218]: info Stopping sysinv-fpga-agent: OK
2021-03-15T10:54:34.849 controller-0 systemd[1]: info Starting StarlingX conf watcher...
2021-03-15T10:54:34.874 controller-0 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked.
2021-03-15T10:54:34.875 controller-0 systemd[1]: info Stopping StarlingX FPGA Agent...
2021-03-15T10:54:34.943 controller-0 systemd[1]: info Stopped StarlingX FPGA Agent.
2021-03-15T10:54:34.984 controller-0 systemd[1]: info Starting StarlingX FPGA Agent...
2021-03-15T10:54:35.000 controller-0 sysinv-fpga-agent[2283]: info Waiting for sysinv config fileStarting sysinv-fpga-agent: OK
2021-03-15T10:54:35.003 controller-0 systemd[1]: info Started StarlingX FPGA Agent.
2021-03-15T10:54:35.024 controller-0 systemd[1]: info Started StarlingX conf watcher.
2021-03-15T10:54:35.075 controller-0 logmgmt[2131]: info Starting logmgmt...done.
2021-03-15T10:54:35.075 controller-0 systemd[1]: info Can't open PID file /var/run/logmgmt.pid (yet?) after start: No such file or directory
2021-03-15T10:54:35.081 controller-0 affine-platform.sh[2127]: info Starting affine-platform.sh: /etc/init.d/affine-platform.sh[1]: Affining all PCI/MSI irqs(0 9 10 11 16 17 18 19 21) with cpus (0-1)
2021-03-15T10:54:35.075 controller-0 systemd[1]: info Started StarlingX Log Management.
2021-03-15T10:54:35.086 controller-0 systemd[1]: info Starting Titanium Cloud libvirt QEMU cleanup...
2021-03-15T10:54:35.096 controller-0 affine-platform.sh[2127]: info [ OK ]
2021-03-15T10:54:35.101 controller-0 systemd[1]: info Starting General StarlingX config gate...
2021-03-15T10:54:35.105 controller-0 systemd[1]: info Started StarlingX Affine Platform.
2021-03-15T10:54:35.110 controller-0 systemd[1]: info Started StarlingX Patching Controller.
2021-03-15T10:54:35.114 controller-0 systemd[1]: info Started Titanium Cloud libvirt QEMU cleanup.
2021-03-15T10:54:35.121 controller-0 systemd[1]: info Starting StarlingX Patching Controller Daemon...
2021-03-15T10:54:35.126 controller-0 systemd[1]: info Starting StarlingX Patching Agent...
2021-03-15T10:54:35.129 controller-0 sw-patch-controller-daemon[2351]: info Starting sw-patch-controller-daemon...done.
2021-03-15T10:54:35.132 controller-0 sw-patch-agent[2353]: info Starting sw-patch-agent...done.
2021-03-15T10:54:35.135 controller-0 systemd[1]: info Started StarlingX Affine Tasks.
2021-03-15T10:54:35.148 controller-0 systemd[1]: info Started StarlingX Patching Controller Daemon.
2021-03-15T10:54:35.165 controller-0 systemd[1]: info Started StarlingX Patching Agent.
2021-03-15T10:54:35.213 controller-0 systemd[1]: info Reloading System Logger Daemon.
2021-03-15T10:54:35.221 controller-0 systemd[1]: info Reloaded System Logger Daemon.
2021-03-15T10:54:35.000 controller-0 affine-tasks.sh(2360): info : Starting.
2021-03-15T10:54:35.000 controller-0 affine-tasks.sh(2360): info : Affine all tasks, CPUS: 0-3; online=0-3 (0xf), isol=, nonisol=0-3 (0xf)
2021-03-15T10:54:35.000 controller-0 affine-tasks.sh(2360): info : Affined 50 processes to all cores.
2021-03-15T10:54:35.985 controller-0 controller_config[2151]: info /etc/init.d/controller_config: Running puppet manifest apply
2021-03-15T10:54:36.205 controller-0 controller_config[2151]: info Applying puppet controller manifest...
2021-03-15T10:54:42.000 controller-0 ovs-vsctl: err ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
2021-03-15T10:54:42.000 controller-0 ntpd[1511]: notice ntpd: no servers found
2021-03-15T10:54:42.749 controller-0 ntpd[1511]: info ntpd: no servers found
2021-03-15T10:54:42.750 controller-0 systemd[1]: notice ntpdate.service: main process exited, code=exited, status=1/FAILURE
2021-03-15T10:54:42.750 controller-0 systemd[1]: err Failed to start Set time via NTP.
2021-03-15T10:54:42.763 controller-0 systemd[1]: notice Unit ntpdate.service entered failed state.
2021-03-15T10:54:42.763 controller-0 systemd[1]: warning ntpdate.service failed.
2021-03-15T10:54:42.764 controller-0 systemd[1]: info Reached target System Time Synchronized.
2021-03-15T10:54:42.772 controller-0 systemd[1]: info Started daily update of the root trust anchor for DNSSEC.
2021-03-15T10:54:42.777 controller-0 systemd[1]: info Reached target Timers.
2021-03-15T10:54:42.784 controller-0 systemd[1]: info Started Command Scheduler.
2021-03-15T10:54:42.792 controller-0 systemd[1]: info Starting Network Time Service...
2021-03-15T10:54:42.000 controller-0 ntpd[4000]: notice ntpd 4.2.6p5@1.2349-o Fri Jan 29 00:22:00 UTC 2021 (1)
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: notice proto: precision = 0.032 usec
2021-03-15T10:54:42.802 controller-0 systemd[1]: info Started Network Time Service.
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info 0.0.0.0 c01d 0d kern kernel time sync enabled
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: debug ntp_io: estimated max descriptors: 1024, initial socket boundary: 16
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info Listen and drop on 1 v6wildcard :: UDP 123
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info Listen normally on 2 lo 127.0.0.1 UDP 123
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info Listen normally on 3 enp0s3 10.20.2.4 UDP 123
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info Listen normally on 4 enp0s8 192.168.204.2 UDP 123
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info Listen normally on 5 enp0s8:5 192.168.206.2 UDP 123
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info Listen normally on 6 docker0 172.17.0.1 UDP 123
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info Listen normally on 7 lo ::1 UDP 123
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info Listen normally on 8 enp0s8 fe80::a00:27ff:fe6b:9bdf UDP 123
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info Listen normally on 9 eth1000 fe80::a00:27ff:fe4a:bc10 UDP 123
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info Listen normally on 10 eth1001 fe80::a00:27ff:fe2e:586d UDP 123
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info Listen normally on 11 enp0s3 fe80::a00:27ff:fef0:7c46 UDP 123
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info Listening on routing socket on fd #28 for interface updates
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info 0.0.0.0 c016 06 restart
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info 0.0.0.0 c012 02 freq_set kernel 0.000 PPM
2021-03-15T10:54:42.000 controller-0 ntpd[4004]: info 0.0.0.0 c011 01 freq_not_set
2021-03-15T10:55:52.812 controller-0 systemd[1]: info Reloading.
2021-03-15T10:55:52.897 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:01.600 controller-0 systemd[1]: info Reloading System Logger Daemon.
2021-03-15T10:56:01.627 controller-0 systemd[1]: info Reloaded System Logger Daemon.
2021-03-15T10:56:10.804 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:13.985 controller-0 systemd[1]: info Got automount request for /proc/sys/fs/binfmt_misc, triggered by 76511 (sysctl)
2021-03-15T10:56:13.992 controller-0 systemd[1]: info Mounting Arbitrary Executable File Formats File System...
2021-03-15T10:56:14.070 controller-0 systemd[1]: info Mounted Arbitrary Executable File Formats File System.
2021-03-15T10:56:18.315 controller-0 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked.
2021-03-15T10:56:18.316 controller-0 systemd[1]: info Stopping Docker Application Container Engine...
2021-03-15T10:56:18.316 controller-0 dockerd[1544]: info time="2021-03-15T10:56:18.316232581Z" level=info msg="Processing signal 'terminated'"
2021-03-15T10:56:18.369 controller-0 systemd[1]: info Stopped Docker Application Container Engine.
2021-03-15T10:56:18.389 controller-0 systemd[1]: info Closed Docker Socket for the API.
2021-03-15T10:56:18.461 controller-0 systemd[1]: info Stopping Docker Socket for the API.
2021-03-15T10:56:18.466 controller-0 systemd[1]: info Starting Docker Socket for the API.
2021-03-15T10:56:18.567 controller-0 systemd[1]: info Stopping containerd container runtime...
2021-03-15T10:56:18.583 controller-0 systemd[1]: info Listening on Docker Socket for the API.
2021-03-15T10:56:18.604 controller-0 systemd[1]: info Stopped containerd container runtime.
2021-03-15T10:56:18.614 controller-0 systemd[1]: info Starting containerd container runtime...
2021-03-15T10:56:18.650 controller-0 systemd[1]: info Started containerd container runtime.
2021-03-15T10:56:18.653 controller-0 systemd[1]: info Starting Docker Application Container Engine...
2021-03-15T10:56:18.676 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:18.723 controller-0 dockerd[77210]: info time="2021-03-15T10:56:18.723104757Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2021-03-15T10:56:18.723 controller-0 dockerd[77210]: info time="2021-03-15T10:56:18.723137561Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2021-03-15T10:56:18.723 controller-0 dockerd[77210]: info time="2021-03-15T10:56:18.723162446Z" level=info msg="parsed scheme: \"unix\"" module=grpc
2021-03-15T10:56:18.723 controller-0 dockerd[77210]: info time="2021-03-15T10:56:18.723167573Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
2021-03-15T10:56:18.723 controller-0 dockerd[77210]: info time="2021-03-15T10:56:18.723541924Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 }]" module=grpc
2021-03-15T10:56:18.723 controller-0 dockerd[77210]: info time="2021-03-15T10:56:18.723563146Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2021-03-15T10:56:18.723 controller-0 dockerd[77210]: info time="2021-03-15T10:56:18.723589893Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420740180, CONNECTING" module=grpc
2021-03-15T10:56:18.723 controller-0 dockerd[77210]: info time="2021-03-15T10:56:18.723700164Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420740180, READY" module=grpc
2021-03-15T10:56:18.723 controller-0 dockerd[77210]: info time="2021-03-15T10:56:18.723842963Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 }]" module=grpc
2021-03-15T10:56:18.723 controller-0 dockerd[77210]: info time="2021-03-15T10:56:18.723872554Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
2021-03-15T10:56:18.723 controller-0 dockerd[77210]: info time="2021-03-15T10:56:18.723904613Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4207f84a0, CONNECTING" module=grpc
2021-03-15T10:56:18.724 controller-0 dockerd[77210]: info time="2021-03-15T10:56:18.724243128Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4207f84a0, READY" module=grpc
2021-03-15T10:56:18.729 controller-0 dockerd[77210]: info time="2021-03-15T10:56:18.728668050Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
2021-03-15T10:56:18.748 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:19.233 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.232961635Z" level=info msg="Graph migration to content-addressability took 0.00 seconds"
2021-03-15T10:56:19.234 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.233676489Z" level=warning msg="Your kernel does not support cgroup rt period"
2021-03-15T10:56:19.234 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.233796727Z" level=warning msg="Your kernel does not support cgroup rt runtime"
2021-03-15T10:56:19.234 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.233823345Z" level=warning msg="Your kernel does not support cgroup blkio weight"
2021-03-15T10:56:19.234 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.233843297Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
2021-03-15T10:56:19.237 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.234866359Z" level=info msg="Loading containers: start."
2021-03-15T10:56:19.364 controller-0 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked.
2021-03-15T10:56:19.364 controller-0 systemd[1]: info Stopping containerd container runtime...
2021-03-15T10:56:19.366 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.366279404Z" level=error msg="failed to get event" error="rpc error: code = Unavailable desc = transport is closing" module=libcontainerd namespace=moby
2021-03-15T10:56:19.366 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.366412837Z" level=error msg="failed to get event" error="rpc error: code = Unavailable desc = transport is closing" module=libcontainerd namespace=moby
2021-03-15T10:56:19.366 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.366420816Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4207f84a0, TRANSIENT_FAILURE" module=grpc
2021-03-15T10:56:19.366 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.366446998Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4207f84a0, CONNECTING" module=grpc
2021-03-15T10:56:19.366 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.366447852Z" level=error msg="failed to get event" error="rpc error: code = Unavailable desc = transport is closing" module=libcontainerd namespace=moby
2021-03-15T10:56:19.366 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.366483825Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
2021-03-15T10:56:19.366 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.366487095Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420740180, TRANSIENT_FAILURE" module=grpc
2021-03-15T10:56:19.366 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.366278090Z" level=error msg="failed to get event" error="rpc error: code = Unavailable desc = transport is closing" module=libcontainerd namespace=plugins.moby
2021-03-15T10:56:19.366 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.366497447Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420740180, CONNECTING" module=grpc
2021-03-15T10:56:19.410 controller-0 systemd[1]: info Stopped containerd container runtime.
2021-03-15T10:56:19.414 controller-0 systemd[1]: info Starting containerd container runtime...
2021-03-15T10:56:19.422 controller-0 systemd[1]: info Started containerd container runtime.
2021-03-15T10:56:19.426 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.426153144Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
2021-03-15T10:56:19.437 controller-0 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked.
2021-03-15T10:56:19.438 controller-0 systemd[1]: info Stopping containerd container runtime...
2021-03-15T10:56:19.444 controller-0 systemd[1]: info Stopped containerd container runtime.
2021-03-15T10:56:19.448 controller-0 systemd[1]: info Starting containerd container runtime...
2021-03-15T10:56:19.456 controller-0 systemd[1]: info Started containerd container runtime.
2021-03-15T10:56:19.591 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.591554336Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4207f84a0, READY" module=grpc
2021-03-15T10:56:19.591 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.591702208Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420740180, READY" module=grpc
2021-03-15T10:56:19.599 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.599717069Z" level=info msg="Loading containers: done."
2021-03-15T10:56:19.606 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.605915462Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version 1.0.0-rc10\nspec: 1.0.1-dev\n"
2021-03-15T10:56:19.618 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:19.630 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.630323643Z" level=info msg="Docker daemon" commit=481bc77 graphdriver(s)=overlay2 version=18.09.6
2021-03-15T10:56:19.630 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.630483297Z" level=info msg="Daemon has completed initialization"
2021-03-15T10:56:19.638 controller-0 dockerd[77210]: info time="2021-03-15T10:56:19.635850663Z" level=info msg="API listen on /var/run/docker.sock"
2021-03-15T10:56:19.738 controller-0 systemd[1]: info Started Docker Application Container Engine.
2021-03-15T10:56:19.753 controller-0 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked.
2021-03-15T10:56:19.778 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:19.834 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:20.380 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:20.476 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:20.539 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:20.638 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:20.000 controller-0 ntpd[4004]: notice ntpd exiting on signal 15
2021-03-15T10:56:20.741 controller-0 systemd[1]: info Stopping Network Time Service...
2021-03-15T10:56:20.769 controller-0 systemd[1]: info Stopped Network Time Service.
2021-03-15T10:56:23.516 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:23.656 controller-0 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked.
2021-03-15T10:56:23.665 controller-0 systemd[1]: info Starting Name Service Cache Daemon...
2021-03-15T10:56:23.000 controller-0 nscd: notice 79108 monitoring file `/etc/passwd` (1)
2021-03-15T10:56:23.000 controller-0 nscd: notice 79108 monitoring directory `/etc` (2)
2021-03-15T10:56:23.000 controller-0 nscd: notice 79108 monitoring file `/etc/group` (3)
2021-03-15T10:56:23.000 controller-0 nscd: notice 79108 monitoring directory `/etc` (2)
2021-03-15T10:56:23.000 controller-0 nscd: notice 79108 monitoring file `/etc/hosts` (4)
2021-03-15T10:56:23.000 controller-0 nscd: notice 79108 monitoring directory `/etc` (2)
2021-03-15T10:56:23.000 controller-0 nscd: notice 79108 monitoring file `/etc/resolv.conf` (5)
2021-03-15T10:56:23.000 controller-0 nscd: notice 79108 monitoring directory `/etc` (2)
2021-03-15T10:56:23.000 controller-0 nscd: notice 79108 monitoring file `/etc/services` (6)
2021-03-15T10:56:23.000 controller-0 nscd: notice 79108 monitoring directory `/etc` (2)
2021-03-15T10:56:23.000 controller-0 nscd: notice 79108 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory
2021-03-15T10:56:23.000 controller-0 nscd: notice 79108 stat failed for file `/etc/netgroup'; will try again later: No such file or directory
2021-03-15T10:56:23.710 controller-0 systemd[1]: info Started Name Service Cache Daemon.
2021-03-15T10:56:23.787 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:23.963 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:24.077 controller-0 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked.
2021-03-15T10:56:24.086 controller-0 systemd[1]: info Starting Naming services LDAP client daemon....
2021-03-15T10:56:24.000 controller-0 nslcd[79195]: info version 0.8.13 starting
2021-03-15T10:56:24.126 controller-0 systemd[1]: info Permission denied while opening PID file or unsafe symlink chain: /var/run/nslcd/nslcd.pid
2021-03-15T10:56:24.126 controller-0 systemd[1]: info Started Naming services LDAP client daemon..
2021-03-15T10:56:24.156 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:24.317 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:24.558 controller-0 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked.
2021-03-15T10:56:24.569 controller-0 systemd[1]: info Starting OpenLDAP Server Daemon...
2021-03-15T10:56:29.000 controller-0 nslcd[79195]: info accepting connections
2021-03-15T10:56:29.367 controller-0 check-config.sh[79274]: info Checking configuration file failed:
2021-03-15T10:56:29.369 controller-0 check-config.sh[79274]: info 604f3d5d could not stat config file "/etc/openldap/slapd.conf": Permission denied (13)
2021-03-15T10:56:29.369 controller-0 check-config.sh[79274]: info slaptest: bad configuration file!
2021-03-15T10:56:29.416 controller-0 openldap[79319]: info Starting SLAPD: ● nscd.service - Name Service Cache Daemon
2021-03-15T10:56:29.416 controller-0 openldap[79319]: info Loaded: loaded (/usr/lib/systemd/system/nscd.service; disabled; vendor preset: disabled)
2021-03-15T10:56:29.416 controller-0 openldap[79319]: info Active: active (running) since Mon 2021-03-15 10:56:23 UTC; 5s ago
2021-03-15T10:56:29.416 controller-0 openldap[79319]: info Main PID: 79108 (nscd)
2021-03-15T10:56:29.416 controller-0 openldap[79319]: info Tasks: 11
2021-03-15T10:56:29.416 controller-0 openldap[79319]: info Memory: 1.6M
2021-03-15T10:56:29.416 controller-0 openldap[79319]: info CGroup: /system.slice/nscd.service
2021-03-15T10:56:29.416 controller-0 openldap[79319]: info └─79108 /usr/sbin/nscd
2021-03-15T10:56:29.416 controller-0 openldap[79319]: info .
2021-03-15T10:56:29.417 controller-0 systemd[1]: info Started OpenLDAP Server Daemon.
2021-03-15T10:56:29.456 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:30.260 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:36.456 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:36.548 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:36.629 controller-0 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked.
2021-03-15T10:56:36.638 controller-0 systemd[1]: info Starting Set time via NTP...
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: notice ntpd 4.2.6p5@1.2349-o Fri Jan 29 00:22:00 UTC 2021 (1)
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: notice proto: precision = 0.031 usec
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info 0.0.0.0 c01d 0d kern kernel time sync enabled
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: debug ntp_io: estimated max descriptors: 1024, initial socket boundary: 16
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info Listen and drop on 1 v6wildcard :: UDP 123
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info Listen normally on 2 lo 127.0.0.1 UDP 123
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info Listen normally on 3 enp0s3 10.20.2.4 UDP 123
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info Listen normally on 4 enp0s8 192.168.204.2 UDP 123
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info Listen normally on 5 enp0s8:5 192.168.206.2 UDP 123
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info Listen normally on 6 docker0 172.17.0.1 UDP 123
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info Listen normally on 7 lo ::1 UDP 123
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info Listen normally on 8 enp0s8 fe80::a00:27ff:fe6b:9bdf UDP 123
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info Listen normally on 9 eth1000 fe80::a00:27ff:fe4a:bc10 UDP 123
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info Listen normally on 10 eth1001 fe80::a00:27ff:fe2e:586d UDP 123
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info Listen normally on 11 enp0s3 fe80::a00:27ff:fef0:7c46 UDP 123
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info Listening on routing socket on fd #28 for interface updates
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info 0.0.0.0 c016 06 restart
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info 0.0.0.0 c012 02 freq_set kernel 0.000 PPM
2021-03-15T10:56:36.000 controller-0 ntpd[81369]: info 0.0.0.0 c011 01 freq_not_set
2021-03-15T10:56:42.000 controller-0 nscd: notice 79108 checking for monitored file `/etc/netgroup': No such file or directory
2021-03-15T10:56:47.000 controller-0 ntpd[81369]: notice ntpd: no servers found
2021-03-15T10:56:47.643 controller-0 ntpd[81369]: info ntpd: no servers found
2021-03-15T10:56:47.644 controller-0 systemd[1]: notice ntpdate.service: main process exited, code=exited, status=1/FAILURE
2021-03-15T10:56:47.646 controller-0 systemd[1]: err Failed to start Set time via NTP.
2021-03-15T10:56:47.669 controller-0 systemd[1]: notice Unit ntpdate.service entered failed state.
2021-03-15T10:56:47.669 controller-0 systemd[1]: warning ntpdate.service failed.
2021-03-15T10:56:47.719 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:47.784 controller-0 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked.
2021-03-15T10:56:47.792 controller-0 systemd[1]: info Starting Network Time Service...
2021-03-15T10:56:47.000 controller-0 ntpd[82286]: notice ntpd 4.2.6p5@1.2349-o Fri Jan 29 00:22:00 UTC 2021 (1)
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: notice proto: precision = 0.030 usec
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info 0.0.0.0 c01d 0d kern kernel time sync enabled
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: debug ntp_io: estimated max descriptors: 1024, initial socket boundary: 16
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info Listen and drop on 1 v6wildcard :: UDP 123
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info Listen normally on 2 lo 127.0.0.1 UDP 123
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info Listen normally on 3 enp0s3 10.20.2.4 UDP 123
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info Listen normally on 4 enp0s8 192.168.204.2 UDP 123
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info Listen normally on 5 enp0s8:5 192.168.206.2 UDP 123
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info Listen normally on 6 docker0 172.17.0.1 UDP 123
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info Listen normally on 7 lo ::1 UDP 123
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info Listen normally on 8 enp0s8 fe80::a00:27ff:fe6b:9bdf UDP 123
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info Listen normally on 9 eth1000 fe80::a00:27ff:fe4a:bc10 UDP 123
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info Listen normally on 10 eth1001 fe80::a00:27ff:fe2e:586d UDP 123
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info Listen normally on 11 enp0s3 fe80::a00:27ff:fef0:7c46 UDP 123
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info Listening on routing socket on fd #28 for interface updates
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info 0.0.0.0 c016 06 restart
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info 0.0.0.0 c012 02 freq_set kernel 0.000 PPM
2021-03-15T10:56:47.000 controller-0 ntpd[82287]: info 0.0.0.0 c011 01 freq_not_set
2021-03-15T10:56:47.827 controller-0 systemd[1]: info Started Network Time Service.
2021-03-15T10:56:47.876 controller-0 systemd[1]: info Reloading.
2021-03-15T10:56:57.873 controller-0 controller_config[2151]: info [DONE]
2021-03-15T10:56:58.531 controller-0 systemd[1]: info Started General StarlingX config gate.
2021-03-15T10:56:58.562 controller-0 systemd[1]: info Starting Kubernetes Kubelet Server...
2021-03-15T10:56:58.571 controller-0 systemd[1]: info Starting StarlingX Maintenance Guest Heartbeat Monitor Server...
2021-03-15T10:56:58.000 controller-0 root: info /usr/bin/kubelet-cgroup-setup.sh(83313): Creating: /sys/fs/cgroup/pids/k8s-infra
2021-03-15T10:56:58.000 controller-0 root: info /usr/bin/kubelet-cgroup-setup.sh(83313): Creating: /sys/fs/cgroup/hugetlb/k8s-infra
2021-03-15T10:56:58.588 controller-0 systemd[1]: info Starting StarlingX Maintenance Logger...
2021-03-15T10:56:58.000 controller-0 root: info /usr/bin/kubelet-cgroup-setup.sh(83313): Nothing to do, already configured: /sys/fs/cgroup/cpuset/k8s-infra.
2021-03-15T10:56:58.599 controller-0 systemd[1]: info Starting StarlingX Maintenance Heartbeat Client...
2021-03-15T10:56:58.611 controller-0 systemd[1]: info Starting Collectd statistics daemon and extension services...
2021-03-15T10:56:58.617 controller-0 collectd[83341]: info plugin_load: plugin "network" successfully loaded.
2021-03-15T10:56:58.618 controller-0 collectd[83341]: info plugin_load: plugin "python" successfully loaded.
2021-03-15T10:56:58.626 controller-0 systemd[1]: info Starting StarlingX Maintenance Alarm Handler Client...
2021-03-15T10:56:58.636 controller-0 guestServer[83316]: info Starting guestServer: OK
2021-03-15T10:56:58.636 controller-0 mtclog[83324]: info Starting mtclogd: OK
2021-03-15T10:56:58.643 controller-0 systemd[1]: info Starting Starling-X Maintenance Link Monitor...
2021-03-15T10:56:58.644 controller-0 hbsClient[83334]: info Starting hbsClient: OK
2021-03-15T10:56:58.647 controller-0 systemd[1]: info Starting Service Management Watchdog...
2021-03-15T10:56:58.655 controller-0 systemd[1]: info Starting StarlingX Maintenance Filesystem Monitor...
2021-03-15T10:56:58.668 controller-0 systemd[1]: info Starting StarlingX Maintenance Goenable Ready...
2021-03-15T10:56:58.678 controller-0 sm-watchdog[83365]: info Starting sm-watchdog: OK
2021-03-15T10:56:58.698 controller-0 goenabled[83377]: info Goenabled Ready: [ OK ]
2021-03-15T10:56:58.700 controller-0 mtcalarm[83352]: info Starting mtcalarmd: OK
2021-03-15T10:56:58.702 controller-0 lmon[83363]: info Starting lmond: OK
2021-03-15T10:56:58.714 controller-0 systemd[1]: info Started StarlingX Maintenance Guest Heartbeat Monitor Server.
2021-03-15T10:56:58.718 controller-0 systemd[1]: info Started StarlingX Maintenance Logger.
2021-03-15T10:56:58.724 controller-0 systemd[1]: info Started StarlingX Maintenance Heartbeat Client.
2021-03-15T10:56:58.729 controller-0 fsmon[83367]: info Starting fsmond: OK
2021-03-15T10:56:58.729 controller-0 systemd[1]: info Started StarlingX Maintenance Alarm Handler Client.
2021-03-15T10:56:58.733 controller-0 systemd[1]: info Started Starling-X Maintenance Link Monitor.
2021-03-15T10:56:58.736 controller-0 systemd[1]: info Started Service Management Watchdog.
2021-03-15T10:56:58.740 controller-0 systemd[1]: info Started StarlingX Maintenance Filesystem Monitor.
2021-03-15T10:56:58.744 controller-0 systemd[1]: info Started StarlingX Maintenance Goenable Ready.
2021-03-15T10:56:58.749 controller-0 systemd[1]: info Started Kubernetes Kubelet Server.
2021-03-15T10:56:58.755 controller-0 systemd[1]: info Started Kubernetes Pods Recovery Service.
2021-03-15T10:56:58.760 controller-0 systemd[1]: info Started workerconfig service.
2021-03-15T10:56:58.764 controller-0 systemd[1]: info Starting STX worker config gate...
2021-03-15T10:56:58.768 controller-0 systemd[1]: info Starting Service Management Unit...
2021-03-15T10:56:58.000 controller-0 k8s-pod-recovery(83413): info : Starting.
2021-03-15T10:56:58.774 controller-0 systemd[1]: info Starting StarlingX Maintenance Command Handler Client...
2021-03-15T10:56:58.778 controller-0 systemd[1]: info Starting StarlingX Maintenance Heartbeat Agent...
2021-03-15T10:56:58.782 controller-0 worker_config[83417]: info Configuring worker node...
2021-03-15T10:56:58.000 controller-0 k8s-pod-recovery(83413): info : Waiting for systemd to finish booting...
2021-03-15T10:56:58.817 controller-0 worker_config[83417]: info Checking connectivity to controller-platform-nfs for up to 900 seconds over interface 192.168.204.2
2021-03-15T10:56:58.852 controller-0 mtcClient[83429]: info Starting mtcClient: OK
2021-03-15T10:56:58.856 controller-0 systemd[1]: info Started StarlingX Maintenance Command Handler Client.
2021-03-15T10:56:58.864 controller-0 kubelet[83402]: info Flag --cpu-manager-policy has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
2021-03-15T10:56:58.864 controller-0 kubelet[83402]: info Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
2021-03-15T10:56:58.866 controller-0 kubelet[83402]: info Flag --cpu-manager-policy has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
2021-03-15T10:56:58.866 controller-0 kubelet[83402]: info Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
2021-03-15T10:56:58.872 controller-0 systemd[1]: info Started Kubernetes systemd probe.
2021-03-15T10:56:58.881 controller-0 kubelet[83402]: info I0315 10:56:58.881298 83402 server.go:417] Version: v1.18.1
2021-03-15T10:56:58.881 controller-0 kubelet[83402]: info I0315 10:56:58.881633 83402 plugins.go:100] No cloud provider specified.
2021-03-15T10:56:58.881 controller-0 kubelet[83402]: info I0315 10:56:58.881677 83402 server.go:837] Client rotation is on, will bootstrap in background
2021-03-15T10:56:58.906 controller-0 kubelet[83402]: info I0315 10:56:58.893261 83402 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
2021-03-15T10:56:58.912 controller-0 dockerd[77210]: info time="2021-03-15T10:56:58.910559424Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version 1.0.0-rc10\nspec: 1.0.1-dev\n"
2021-03-15T10:56:58.945 controller-0 dockerd[77210]: info time="2021-03-15T10:56:58.945266044Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version 1.0.0-rc10\nspec: 1.0.1-dev\n"
2021-03-15T10:56:58.946 controller-0 hbsAgent[83433]: info Starting hbsAgent: OK
2021-03-15T10:56:58.952 controller-0 systemd[1]: info Started StarlingX Maintenance Heartbeat Agent.
2021-03-15T10:56:58.969 controller-0 kubelet[83402]: info I0315 10:56:58.966613 83402 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: [k8s-infra]
2021-03-15T10:56:58.969 controller-0 kubelet[83402]: info I0315 10:56:58.966632 83402 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/k8s-infra CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[memory:{i:{value:4823449600 scale:0} d:{Dec:} s: Format:BinarySI}] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
2021-03-15T10:56:58.969 controller-0 kubelet[83402]: info I0315 10:56:58.967430 83402 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
2021-03-15T10:56:58.969 controller-0 kubelet[83402]: info I0315 10:56:58.967439 83402 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
2021-03-15T10:56:58.969 controller-0 kubelet[83402]: info I0315 10:56:58.967444 83402 container_manager_linux.go:306] Creating device plugin manager: true
2021-03-15T10:56:58.969 controller-0 kubelet[83402]: info I0315 10:56:58.967821 83402 remote_runtime.go:59] parsed scheme: ""
2021-03-15T10:56:58.969 controller-0 kubelet[83402]: info I0315 10:56:58.967831 83402 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
2021-03-15T10:56:58.969 controller-0 kubelet[83402]: info I0315 10:56:58.967859 83402 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/containerd/containerd.sock 0 }] }
2021-03-15T10:56:58.969 controller-0 kubelet[83402]: info I0315 10:56:58.967866 83402 clientconn.go:933] ClientConn switching balancer to "pick_first"
2021-03-15T10:56:58.969 controller-0 kubelet[83402]: info I0315 10:56:58.967895 83402 remote_image.go:50] parsed scheme: ""
2021-03-15T10:56:58.969 controller-0 kubelet[83402]: info I0315 10:56:58.967900 83402 remote_image.go:50] scheme "" not registered, fallback to default scheme
2021-03-15T10:56:58.969 controller-0 kubelet[83402]: info I0315 10:56:58.967906 83402 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/containerd/containerd.sock 0 }] }
2021-03-15T10:56:58.969 controller-0 kubelet[83402]: info I0315 10:56:58.967909 83402 clientconn.go:933] ClientConn switching balancer to "pick_first"
2021-03-15T10:56:58.969 controller-0 kubelet[83402]: info I0315 10:56:58.968188 83402 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
2021-03-15T10:56:58.969 controller-0 kubelet[83402]: info I0315 10:56:58.968209 83402 kubelet.go:317] Watching apiserver
2021-03-15T10:56:59.000 controller-0 nscd: notice 79108 monitored file `/etc/hosts` was written to
2021-03-15T10:56:59.061 controller-0 worker_config[83417]: info /etc/init.d/worker_config: Running puppet manifest apply
2021-03-15T10:56:59.143 controller-0 worker_config[83417]: info Applying puppet worker manifest...
2021-03-15T10:56:59.587 controller-0 collectd[83341]: info plugin_load: plugin "threshold" successfully loaded.
2021-03-15T10:56:59.588 controller-0 collectd[83341]: info plugin_load: plugin "df" successfully loaded.
2021-03-15T10:56:59.590 controller-0 collectd[83341]: info platform cpu usage plugin debug=False, verbose=True
2021-03-15T10:56:59.593 controller-0 collectd[83341]: info platform memory usage: debug=False, verbose=True
2021-03-15T10:56:59.595 controller-0 collectd[83341]: info interface plugin configured by config file [http://localhost:2122/mtce/lmon]
2021-03-15T10:56:59.616 controller-0 collectd[83341]: info Systemd detected, trying to signal readyness.
2021-03-15T10:56:59.619 controller-0 systemd[1]: info Started Collectd statistics daemon and extension services.
2021-03-15T10:56:59.678 controller-0 systemd[1]: info Can't open PID file /var/run/sm.pid (yet?) after start: No such file or directory
2021-03-15T10:56:59.678 controller-0 sm[83424]: info Starting sm: OK
2021-03-15T10:56:59.678 controller-0 systemd[1]: info Started Service Management Unit.
2021-03-15T10:56:59.704 controller-0 systemd[1]: info Started Service Management Shutdown Unit.
2021-03-15T10:56:59.735 controller-0 systemd[1]: info Starting Service Management API Unit...
2021-03-15T10:56:59.770 controller-0 sm-api[83720]: info Starting sm-api: OK
2021-03-15T10:56:59.770 controller-0 systemd[1]: info Started Service Management API Unit.
2021-03-15T10:56:59.795 controller-0 systemd[1]: info Starting Service Management Event Recorder Unit...
2021-03-15T10:56:59.840 controller-0 sm-eru[83737]: info Starting sm-eru: OK
2021-03-15T10:56:59.840 controller-0 systemd[1]: info Can't open PID file /var/run/sm-eru.pid (yet?) after start: No such file or directory
2021-03-15T10:56:59.840 controller-0 systemd[1]: info Started Service Management Event Recorder Unit.
2021-03-15T10:56:59.878 controller-0 systemd[1]: info Starting StarlingX Maintenance Process Monitor...
2021-03-15T10:56:59.904 controller-0 pmon[83754]: info Starting pmond: OK
2021-03-15T10:56:59.906 controller-0 systemd[1]: info Can't open PID file /var/run/pmond.pid (yet?) after start: No such file or directory
2021-03-15T10:56:59.906 controller-0 systemd[1]: info Started StarlingX Maintenance Process Monitor.
2021-03-15T10:56:59.947 controller-0 systemd[1]: info Starting StarlingX Maintenance Host Watchdog...
2021-03-15T10:56:59.986 controller-0 hostw[83802]: info Starting hostwd: OK
2021-03-15T10:56:59.986 controller-0 systemd[1]: info Can't open PID file /var/run/hostwd.pid (yet?) after start: No such file or directory
2021-03-15T10:56:59.986 controller-0 systemd[1]: info Started StarlingX Maintenance Host Watchdog.
2021-03-15T10:57:00.737 controller-0 systemd[1]: info Reloading.
2021-03-15T10:57:02.148 controller-0 systemd[1]: info Reloading.
2021-03-15T10:57:04.395 controller-0 systemd[1]: info Stopping Name Service Cache Daemon...
2021-03-15T10:57:04.488 controller-0 kubelet[83402]: info E0315 10:57:04.487089 83402 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
2021-03-15T10:57:04.488 controller-0 kubelet[83402]: info For verbose messaging see aws.Config.CredentialsChainVerboseErrors
2021-03-15T10:57:04.509 controller-0 kubelet[83402]: info I0315 10:57:04.509293 83402 kuberuntime_manager.go:211] Container runtime containerd initialized, version: v1.3.3, apiVersion: v1alpha2
2021-03-15T10:57:04.550 controller-0 kubelet[83402]: info I0315 10:57:04.550624 83402 server.go:1125] Started kubelet
2021-03-15T10:57:04.554 controller-0 kubelet[83402]: info I0315 10:57:04.554047 83402 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
2021-03-15T10:57:04.555 controller-0 kubelet[83402]: info I0315 10:57:04.555381 83402 server.go:145] Starting to listen on 0.0.0.0:10250
2021-03-15T10:57:04.569 controller-0 kubelet[83402]: info I0315 10:57:04.555974 83402 server.go:393] Adding debug handlers to kubelet server.
2021-03-15T10:57:04.569 controller-0 kubelet[83402]: info I0315 10:57:04.557982 83402 volume_manager.go:265] Starting Kubelet Volume Manager
2021-03-15T10:57:04.569 controller-0 kubelet[83402]: info I0315 10:57:04.558351 83402 desired_state_of_world_populator.go:139] Desired state populator starts to run
2021-03-15T10:57:04.571 controller-0 kubelet[83402]: info E0315 10:57:04.571061 83402 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/docker/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
2021-03-15T10:57:04.571 controller-0 kubelet[83402]: info E0315 10:57:04.571093 83402 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
2021-03-15T10:57:04.573 controller-0 kubelet[83402]: info I0315 10:57:04.572506 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6b364b65fb82bc9935badb94ed0b0ac359c3a53078de96278eee5edb8370d9a7
2021-03-15T10:57:04.576 controller-0 systemd[1]: info Stopped Name Service Cache Daemon.
2021-03-15T10:57:04.579 controller-0 kubelet[83402]: info E0315 10:57:04.577010 83402 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/docker/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
2021-03-15T10:57:04.579 controller-0 kubelet[83402]: info E0315 10:57:04.577035 83402 kubelet.go:1301] Image garbage collection failed multiple times in a row: invalid capacity 0 on image filesystem
2021-03-15T10:57:04.657 controller-0 kubelet[83402]: info I0315 10:57:04.657104 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f91ed27a6619a3ccae4c41e2eb667dacc74a473e7e7f99c5d2d6386cb19e2aaf
2021-03-15T10:57:04.690 controller-0 dockerd[77210]: info time="2021-03-15T10:57:04.690546910Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version 1.0.0-rc10\nspec: 1.0.1-dev\n"
2021-03-15T10:57:04.690 controller-0 kubelet[83402]: info I0315 10:57:04.686772 83402 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
2021-03-15T10:57:04.690 controller-0 kubelet[83402]: info I0315 10:57:04.687089 83402 kuberuntime_manager.go:978] updating runtime config through cri with podcidr 172.16.0.0/24
2021-03-15T10:57:04.690 controller-0 kubelet[83402]: info I0315 10:57:04.690137 83402 kubelet_network.go:77] Setting Pod CIDR: -> 172.16.0.0/24
2021-03-15T10:57:04.691 controller-0 kubelet[83402]: info I0315 10:57:04.691241 83402 kubelet_node_status.go:70] Attempting to register node controller-0
2021-03-15T10:57:04.701 controller-0 kubelet[83402]: info I0315 10:57:04.701567 83402 clientconn.go:106] parsed scheme: "unix"
2021-03-15T10:57:04.702 controller-0 kubelet[83402]: info I0315 10:57:04.701621 83402 clientconn.go:106] scheme "unix" not registered, fallback to default scheme
2021-03-15T10:57:04.702 controller-0 kubelet[83402]: info I0315 10:57:04.701640 83402 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }
2021-03-15T10:57:04.702 controller-0 kubelet[83402]: info I0315 10:57:04.701647 83402 clientconn.go:933] ClientConn switching balancer to "pick_first"
2021-03-15T10:57:04.713 controller-0 kubelet[83402]: info I0315 10:57:04.712253 83402 status_manager.go:158] Starting to sync pod status with apiserver
2021-03-15T10:57:04.713 controller-0 kubelet[83402]: info I0315 10:57:04.712301 83402 kubelet.go:1821] Starting kubelet main sync loop.
2021-03-15T10:57:04.713 controller-0 kubelet[83402]: info E0315 10:57:04.712361 83402 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
2021-03-15T10:57:04.715 controller-0 kubelet[83402]: info I0315 10:57:04.713989 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: e25270eab52139e93cbf735f6d612dc1d3e30ef7d798678d41d3f27eb4ba5e6e
2021-03-15T10:57:04.806 controller-0 kubelet[83402]: info I0315 10:57:04.806027 83402 kubelet_node_status.go:112] Node controller-0 was previously registered
2021-03-15T10:57:04.807 controller-0 kubelet[83402]: info I0315 10:57:04.806744 83402 kubelet_node_status.go:73] Successfully registered node controller-0
2021-03-15T10:57:04.814 controller-0 kubelet[83402]: info E0315 10:57:04.813231 83402 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
2021-03-15T10:57:04.820 controller-0 kubelet[83402]: info I0315 10:57:04.819654 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 629039d6a683e012b9f00bf229645e894d686ab2b2b700c585fbad3bd718a2c5
2021-03-15T10:57:04.825 controller-0 kubelet[83402]: info I0315 10:57:04.825565 83402 setters.go:559] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2021-03-15 10:57:04.825548808 +0000 UTC m=+6.081892008 LastTransitionTime:2021-03-15 10:57:04.825548808 +0000 UTC m=+6.081892008 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
2021-03-15T10:57:04.875 controller-0 kubelet[83402]: info I0315 10:57:04.873756 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: d6daab5ddd23208ac0193dae8767cc7f0f328298b7b1e7245bf9f9d3ac8b7ede
2021-03-15T10:57:04.897 controller-0 kubelet[83402]: info I0315 10:57:04.897518 83402 cpu_manager.go:184] [cpumanager] starting with none policy
2021-03-15T10:57:04.897 controller-0 kubelet[83402]: info I0315 10:57:04.897533 83402 cpu_manager.go:185] [cpumanager] reconciling every 10s
2021-03-15T10:57:04.897 controller-0 kubelet[83402]: info I0315 10:57:04.897546 83402 state_mem.go:36] [cpumanager] initializing new in-memory state store
2021-03-15T10:57:04.904 controller-0 kubelet[83402]: info I0315 10:57:04.904309 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6468154fdbeed01b8a7aed6b2b40897cbb0bd6dc4bc68d5c87389f3d7d54badb
2021-03-15T10:57:04.908 controller-0 kubelet[83402]: info I0315 10:57:04.906626 83402 policy_none.go:43] [cpumanager] none policy: Start
2021-03-15T10:57:04.925 controller-0 kubelet[83402]: info I0315 10:57:04.924979 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: fdfa429164b79634e210064cf39e9cb9cf5bf4fb76f81c7e41962a29c58f1d62
2021-03-15T10:57:04.932 controller-0 kubelet[83402]: info I0315 10:57:04.932163 83402 plugin_manager.go:114] Starting Kubelet Plugin Manager
2021-03-15T10:57:04.946 controller-0 kubelet[83402]: info E0315 10:57:04.943607 83402 remote_runtime.go:295] ContainerStatus "6468154fdbeed01b8a7aed6b2b40897cbb0bd6dc4bc68d5c87389f3d7d54badb" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "6468154fdbeed01b8a7aed6b2b40897cbb0bd6dc4bc68d5c87389f3d7d54badb": does not exist
2021-03-15T10:57:04.946 controller-0 kubelet[83402]: info E0315 10:57:04.943709 83402 kuberuntime_manager.go:952] getPodContainerStatuses for pod "calico-node-bbjjj_kube-system(623835c9-005a-4584-8ad8-af75cc21e13a)" failed: rpc error: code = Unknown desc = an error occurred when try to find container "6468154fdbeed01b8a7aed6b2b40897cbb0bd6dc4bc68d5c87389f3d7d54badb": does not exist
2021-03-15T10:57:05.013 controller-0 kubelet[83402]: info I0315 10:57:05.013475 83402 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:57:05.022 controller-0 kubelet[83402]: info I0315 10:57:05.020933 83402 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:57:05.028 controller-0 kubelet[83402]: info I0315 10:57:05.024860 83402 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:57:05.028 controller-0 kubelet[83402]: info I0315 10:57:05.025975 83402 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:57:05.031 controller-0 kubelet[83402]: info I0315 10:57:05.030585 83402 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:57:05.058 controller-0 kubelet[83402]: info I0315 10:57:05.055849 83402 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:57:05.070 controller-0 kubelet[83402]: info W0315 10:57:05.063070 83402 pod_container_deletor.go:77] Container "fdfa429164b79634e210064cf39e9cb9cf5bf4fb76f81c7e41962a29c58f1d62" not found in pod's containers
2021-03-15T10:57:05.070 controller-0 kubelet[83402]: info W0315 10:57:05.063108 83402 pod_container_deletor.go:77] Container "e2c0e7aca89162814240097fdf0a666799dc6c39b5f0db4c43263e9a3b5bc611" not found in pod's containers
2021-03-15T10:57:05.070 controller-0 kubelet[83402]: info W0315 10:57:05.063133 83402 pod_container_deletor.go:77] Container "e48264665bbc2abd32fae8a68e7a097ab423c025fbbcf449e614f985a8affe18" not found in pod's containers
2021-03-15T10:57:05.070 controller-0 kubelet[83402]: info W0315 10:57:05.063146 83402 pod_container_deletor.go:77] Container
"02db1e3c2a0dd9e1ef9a7b27d10441aba3b406b2f2cbc84f4c8e9bf3ce8af8c0" not found in pod's containers 2021-03-15T10:57:05.070 controller-0 kubelet[83402]: info W0315 10:57:05.063216 83402 pod_container_deletor.go:77] Container "629039d6a683e012b9f00bf229645e894d686ab2b2b700c585fbad3bd718a2c5" not found in pod's containers 2021-03-15T10:57:05.070 controller-0 kubelet[83402]: info W0315 10:57:05.063236 83402 pod_container_deletor.go:77] Container "48c3bb4b18802cf072b1942488bc1bed9e7ee4d066d4d41f5dbc1d79b00e0f20" not found in pod's containers 2021-03-15T10:57:05.070 controller-0 kubelet[83402]: info W0315 10:57:05.063257 83402 pod_container_deletor.go:77] Container "db9913b7588565ebb4d93fc4f41993f0373ccab00c816ad12ee8f89db1ca9824" not found in pod's containers 2021-03-15T10:57:05.070 controller-0 kubelet[83402]: info I0315 10:57:05.063281 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 890c219d2e8ac6e5f61a9489493b6daaa80d9e2bf3bef09fe617b8e07fc0cbf6 2021-03-15T10:57:05.078 controller-0 kubelet[83402]: info E0315 10:57:05.072995 83402 pod_workers.go:191] Error syncing pod 623835c9-005a-4584-8ad8-af75cc21e13a ("calico-node-bbjjj_kube-system(623835c9-005a-4584-8ad8-af75cc21e13a)"), skipping: rpc error: code = Unknown desc = an error occurred when try to find container "6468154fdbeed01b8a7aed6b2b40897cbb0bd6dc4bc68d5c87389f3d7d54badb": does not exist 2021-03-15T10:57:05.108 controller-0 kubelet[83402]: info W0315 10:57:05.106147 83402 pod_container_deletor.go:77] Container "348fb49faf663ea444e445c35dda1c2209221159422ca63d771f121375f14cc3" not found in pod's containers 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113054 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-xtables-lock") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:05.113 
controller-0 kubelet[83402]: info I0315 10:57:05.113094 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/beb4cf3721fe7ab7384230d84f609a39-ca-certs") pod "kube-controller-manager-controller-0" (UID: "beb4cf3721fe7ab7384230d84f609a39") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113125 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/2318749d-2064-4647-90f0-f6c8cbf4d753-xtables-lock") pod "kube-proxy-vnhrw" (UID: "2318749d-2064-4647-90f0-f6c8cbf4d753") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113143 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-run-calico" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-var-run-calico") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113162 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin-dir" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-cni-bin-dir") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113183 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "host-local-net-dir" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-host-local-net-dir") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113213 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-node-token-mvgqm" (UniqueName: 
"kubernetes.io/secret/623835c9-005a-4584-8ad8-af75cc21e13a-calico-node-token-mvgqm") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113232 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/beb4cf3721fe7ab7384230d84f609a39-k8s-certs") pod "kube-controller-manager-controller-0" (UID: "beb4cf3721fe7ab7384230d84f609a39") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113255 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/beb4cf3721fe7ab7384230d84f609a39-kubeconfig") pod "kube-controller-manager-controller-0" (UID: "beb4cf3721fe7ab7384230d84f609a39") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113273 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/2318749d-2064-4647-90f0-f6c8cbf4d753-kube-proxy") pod "kube-proxy-vnhrw" (UID: "2318749d-2064-4647-90f0-f6c8cbf4d753") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113290 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/04abb2ef72685c7615231f0f216c924e-kubeconfig") pod "kube-scheduler-controller-0" (UID: "04abb2ef72685c7615231f0f216c924e") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113306 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-net-dir" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-cni-net-dir") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113327 83402 
reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "policysync" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-policysync") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113343 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvol-driver-host" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-flexvol-driver-host") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113360 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/eea3f7ab53a44b935832ed67b7d00029-ca-certs") pod "kube-apiserver-controller-0" (UID: "eea3f7ab53a44b935832ed67b7d00029") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113377 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "encryption-config" (UniqueName: "kubernetes.io/host-path/eea3f7ab53a44b935832ed67b7d00029-encryption-config") pod "kube-apiserver-controller-0" (UID: "eea3f7ab53a44b935832ed67b7d00029") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113397 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/eea3f7ab53a44b935832ed67b7d00029-etc-pki") pod "kube-apiserver-controller-0" (UID: "eea3f7ab53a44b935832ed67b7d00029") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113419 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/eea3f7ab53a44b935832ed67b7d00029-k8s-certs") pod "kube-apiserver-controller-0" (UID: 
"eea3f7ab53a44b935832ed67b7d00029") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113435 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/2318749d-2064-4647-90f0-f6c8cbf4d753-lib-modules") pod "kube-proxy-vnhrw" (UID: "2318749d-2064-4647-90f0-f6c8cbf4d753") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113452 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-cj6rp" (UniqueName: "kubernetes.io/secret/2318749d-2064-4647-90f0-f6c8cbf4d753-kube-proxy-token-cj6rp") pod "kube-proxy-vnhrw" (UID: "2318749d-2064-4647-90f0-f6c8cbf4d753") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113468 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-lib-modules") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113484 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib-calico" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-var-lib-calico") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113504 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/beb4cf3721fe7ab7384230d84f609a39-etc-pki") pod "kube-controller-manager-controller-0" (UID: "beb4cf3721fe7ab7384230d84f609a39") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113521 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: 
"kubernetes.io/host-path/beb4cf3721fe7ab7384230d84f609a39-flexvolume-dir") pod "kube-controller-manager-controller-0" (UID: "beb4cf3721fe7ab7384230d84f609a39") 2021-03-15T10:57:05.113 controller-0 kubelet[83402]: info I0315 10:57:05.113527 83402 reconciler.go:157] Reconciler: start to sync state 2021-03-15T10:57:05.137 controller-0 kubelet[83402]: info I0315 10:57:05.137195 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9568616d4de4270bf08bf78666f7924173f9f1112e0759864b96c0e87bda4e7a 2021-03-15T10:57:05.151 controller-0 kubelet[83402]: info I0315 10:57:05.151527 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 890c219d2e8ac6e5f61a9489493b6daaa80d9e2bf3bef09fe617b8e07fc0cbf6 2021-03-15T10:57:05.152 controller-0 kubelet[83402]: info E0315 10:57:05.152132 83402 remote_runtime.go:295] ContainerStatus "890c219d2e8ac6e5f61a9489493b6daaa80d9e2bf3bef09fe617b8e07fc0cbf6" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "890c219d2e8ac6e5f61a9489493b6daaa80d9e2bf3bef09fe617b8e07fc0cbf6": does not exist 2021-03-15T10:57:05.152 controller-0 kubelet[83402]: info I0315 10:57:05.152258 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9568616d4de4270bf08bf78666f7924173f9f1112e0759864b96c0e87bda4e7a 2021-03-15T10:57:05.153 controller-0 kubelet[83402]: info E0315 10:57:05.152885 83402 remote_runtime.go:295] ContainerStatus "9568616d4de4270bf08bf78666f7924173f9f1112e0759864b96c0e87bda4e7a" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "9568616d4de4270bf08bf78666f7924173f9f1112e0759864b96c0e87bda4e7a": does not exist 2021-03-15T10:57:05.153 controller-0 kubelet[83402]: info I0315 10:57:05.152918 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 890c219d2e8ac6e5f61a9489493b6daaa80d9e2bf3bef09fe617b8e07fc0cbf6 2021-03-15T10:57:05.153 
controller-0 kubelet[83402]: info I0315 10:57:05.153289 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9568616d4de4270bf08bf78666f7924173f9f1112e0759864b96c0e87bda4e7a 2021-03-15T10:57:05.154 controller-0 kubelet[83402]: info I0315 10:57:05.153583 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 890c219d2e8ac6e5f61a9489493b6daaa80d9e2bf3bef09fe617b8e07fc0cbf6 2021-03-15T10:57:05.155 controller-0 kubelet[83402]: info I0315 10:57:05.155609 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9568616d4de4270bf08bf78666f7924173f9f1112e0759864b96c0e87bda4e7a 2021-03-15T10:57:05.156 controller-0 kubelet[83402]: info I0315 10:57:05.156073 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 890c219d2e8ac6e5f61a9489493b6daaa80d9e2bf3bef09fe617b8e07fc0cbf6 2021-03-15T10:57:05.156 controller-0 kubelet[83402]: info I0315 10:57:05.156777 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9568616d4de4270bf08bf78666f7924173f9f1112e0759864b96c0e87bda4e7a 2021-03-15T10:57:05.158 controller-0 kubelet[83402]: info I0315 10:57:05.157946 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 890c219d2e8ac6e5f61a9489493b6daaa80d9e2bf3bef09fe617b8e07fc0cbf6 2021-03-15T10:57:05.158 controller-0 kubelet[83402]: info I0315 10:57:05.158556 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9568616d4de4270bf08bf78666f7924173f9f1112e0759864b96c0e87bda4e7a 2021-03-15T10:57:05.174 controller-0 kubelet[83402]: info E0315 10:57:05.173957 83402 kubelet.go:1663] Failed creating a mirror pod for "kube-scheduler-controller-0_kube-system(04abb2ef72685c7615231f0f216c924e)": pods "kube-scheduler-controller-0" already exists 2021-03-15T10:57:05.379 controller-0 kubelet[83402]: info E0315 10:57:05.379253 83402 kubelet.go:1663] Failed creating a mirror pod for 
"kube-apiserver-controller-0_kube-system(eea3f7ab53a44b935832ed67b7d00029)": pods "kube-apiserver-controller-0" already exists 2021-03-15T10:57:05.402 controller-0 kubelet[83402]: info E0315 10:57:05.401593 83402 remote_runtime.go:128] StopPodSandbox "f057bfcd4360dd381874100891e5c64299a3e396ea88a0e1c0cbd48299ce4423" from runtime service failed: rpc error: code = Unknown desc = failed to destroy network for sandbox "f057bfcd4360dd381874100891e5c64299a3e396ea88a0e1c0cbd48299ce4423": delegateDel: error invoking ConflistDel - "chain": conflistDel: error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: connect: connection refused 2021-03-15T10:57:05.402 controller-0 kubelet[83402]: info E0315 10:57:05.401644 83402 kuberuntime_manager.go:895] Failed to stop sandbox {"containerd" "f057bfcd4360dd381874100891e5c64299a3e396ea88a0e1c0cbd48299ce4423"} 2021-03-15T10:57:05.407 controller-0 kubelet[83402]: info E0315 10:57:05.405870 83402 remote_runtime.go:128] StopPodSandbox "0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9" from runtime service failed: rpc error: code = Unknown desc = failed to destroy network for sandbox "0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9": delegateDel: error invoking ConflistDel - "chain": conflistDel: error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: connect: connection refused 2021-03-15T10:57:05.407 controller-0 kubelet[83402]: info E0315 10:57:05.405899 83402 kuberuntime_gc.go:170] Failed to stop sandbox "0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9" before removing: rpc error: code = Unknown desc = failed to destroy network for sandbox "0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9": delegateDel: 
error invoking ConflistDel - "chain": conflistDel: error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: connect: connection refused 2021-03-15T10:57:05.482 controller-0 kubelet[83402]: info E0315 10:57:05.480937 83402 remote_runtime.go:128] StopPodSandbox "0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9" from runtime service failed: rpc error: code = Unknown desc = failed to destroy network for sandbox "0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9": delegateDel: error invoking ConflistDel - "chain": conflistDel: error in getting result from DelNetworkList: could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -X CNI-DN-3982e60f517df736662a2 --wait]: exit status 1: iptables: No chain/target/match by that name. 2021-03-15T10:57:05.482 controller-0 kubelet[83402]: info E0315 10:57:05.480970 83402 kuberuntime_gc.go:170] Failed to stop sandbox "0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9" before removing: rpc error: code = Unknown desc = failed to destroy network for sandbox "0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9": delegateDel: error invoking ConflistDel - "chain": conflistDel: error in getting result from DelNetworkList: could not teardown ipv4 dnat: running [/usr/sbin/iptables -t nat -X CNI-DN-3982e60f517df736662a2 --wait]: exit status 1: iptables: No chain/target/match by that name. 
2021-03-15T10:57:05.557 controller-0 kubelet[83402]: info E0315 10:57:05.556629 83402 remote_runtime.go:128] StopPodSandbox "0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9" from runtime service failed: rpc error: code = Unknown desc = failed to destroy network for sandbox "0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9": delegateDel: error invoking ConflistDel - "chain": conflistDel: error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: connect: connection refused
2021-03-15T10:57:05.557 controller-0 kubelet[83402]: info E0315 10:57:05.556684 83402 kuberuntime_manager.go:895] Failed to stop sandbox {"containerd" "0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9"}
2021-03-15T10:57:05.557 controller-0 kubelet[83402]: info E0315 10:57:05.556727 83402 kubelet.go:1575] error killing pod: failed to "KillPodSandbox" for "54590ffb-c482-4020-81e6-c3629037dcf5" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9\": delegateDel: error invoking ConflistDel - \"chain\": conflistDel: error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: connect: connection refused"
2021-03-15T10:57:05.557 controller-0 kubelet[83402]: info E0315 10:57:05.556738 83402 pod_workers.go:191] Error syncing pod 54590ffb-c482-4020-81e6-c3629037dcf5 ("platform-deployment-manager-0_platform-deployment-manager(54590ffb-c482-4020-81e6-c3629037dcf5)"), skipping: error killing pod: failed to "KillPodSandbox" for "54590ffb-c482-4020-81e6-c3629037dcf5" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9\": delegateDel: error invoking ConflistDel - \"chain\": conflistDel: error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: connect: connection refused"
2021-03-15T10:57:05.000 controller-0 ovs-vsctl: err ovs|00001|db_ctl_base|ERR|unix:/var/run/openvswitch/db.sock: database connection failed (No such file or directory)
2021-03-15T10:57:05.647 controller-0 kubelet[83402]: info E0315 10:57:05.647339 83402 remote_runtime.go:128] StopPodSandbox "f057bfcd4360dd381874100891e5c64299a3e396ea88a0e1c0cbd48299ce4423" from runtime service failed: rpc error: code = Unknown desc = failed to destroy network for sandbox "f057bfcd4360dd381874100891e5c64299a3e396ea88a0e1c0cbd48299ce4423": delegateDel: error invoking ConflistDel - "chain": conflistDel: error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: connect: connection refused
2021-03-15T10:57:05.647 controller-0 kubelet[83402]: info E0315 10:57:05.647365 83402 kuberuntime_gc.go:170] Failed to stop sandbox "f057bfcd4360dd381874100891e5c64299a3e396ea88a0e1c0cbd48299ce4423" before removing: rpc error: code = Unknown desc = failed to destroy network for sandbox "f057bfcd4360dd381874100891e5c64299a3e396ea88a0e1c0cbd48299ce4423": delegateDel: error invoking ConflistDel - "chain": conflistDel: error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: connect: connection refused
2021-03-15T10:57:06.336 controller-0 kubelet[83402]: info I0315 10:57:06.153806 83402 request.go:621] Throttling request took 1.078804452s, request: POST:https://192.168.206.1:6443/api/v1/namespaces/kube-system/pods
2021-03-15T10:57:06.336 controller-0 kubelet[83402]: info E0315 10:57:06.269678 83402 remote_runtime.go:128] StopPodSandbox "f057bfcd4360dd381874100891e5c64299a3e396ea88a0e1c0cbd48299ce4423" from runtime service failed: rpc error: code = Unknown desc = failed to destroy network for sandbox "f057bfcd4360dd381874100891e5c64299a3e396ea88a0e1c0cbd48299ce4423": delegateDel: error invoking ConflistDel - "chain": conflistDel: error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: connect: connection refused
2021-03-15T10:57:06.336 controller-0 kubelet[83402]: info E0315 10:57:06.269730 83402 kuberuntime_manager.go:895] Failed to stop sandbox {"containerd" "f057bfcd4360dd381874100891e5c64299a3e396ea88a0e1c0cbd48299ce4423"}
2021-03-15T10:57:06.472 controller-0 kubelet[83402]: info E0315 10:57:06.471525 83402 remote_runtime.go:128] StopPodSandbox "0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9" from runtime service failed: rpc error: code = Unknown desc = failed to destroy network for sandbox "0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9": delegateDel: error invoking ConflistDel - "chain": conflistDel: error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: connect: connection refused
2021-03-15T10:57:06.472 controller-0 kubelet[83402]: info E0315 10:57:06.471564 83402 kuberuntime_manager.go:895] Failed to stop sandbox {"containerd" "0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9"}
2021-03-15T10:57:06.472 controller-0 kubelet[83402]: info E0315 10:57:06.471609 83402 kubelet.go:1575] error killing pod: failed to "KillPodSandbox" for "54590ffb-c482-4020-81e6-c3629037dcf5" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9\": delegateDel: error invoking ConflistDel - \"chain\": conflistDel: error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: connect: connection refused"
2021-03-15T10:57:06.472 controller-0 kubelet[83402]: info E0315 10:57:06.471624 83402 pod_workers.go:191] Error syncing pod 54590ffb-c482-4020-81e6-c3629037dcf5 ("platform-deployment-manager-0_platform-deployment-manager(54590ffb-c482-4020-81e6-c3629037dcf5)"), skipping: error killing pod: failed to "KillPodSandbox" for "54590ffb-c482-4020-81e6-c3629037dcf5" with KillPodSandboxError: "rpc error: code = Unknown desc = failed to destroy network for sandbox \"0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9\": delegateDel: error invoking ConflistDel - \"chain\": conflistDel: error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[10.96.0.1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp 10.96.0.1:443: connect: connection refused"
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier mtce port 2101
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier controller:controller-0 init function
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info plugin path: /etc/collectd.d/starlingx/
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring Platform CPU usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring Memory usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring / usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /tmp usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /dev usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /dev/shm usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /var/run usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /var/log usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /var/lock usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /boot usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /scratch usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /opt/etcd usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /opt/platform usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /opt/extension usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /var/lib/rabbitmq usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /var/lib/postgresql usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /var/lib/ceph/mon usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /var/lib/docker usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /var/lib/docker-distribution usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /var/lib/kubelet usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /var/lib/nova/instances usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier monitoring /opt/backups usage
2021-03-15T10:57:06.631 controller-0 collectd[83341]: info alarm notifier setting up influxdb:collectd database
2021-03-15T10:57:06.641 controller-0 collectd[83341]: info alarm notifier influxdb:collectd database already exists
2021-03-15T10:57:06.645 controller-0 collectd[83341]: info alarm notifier influxdb:collectd retention policy already exists
2021-03-15T10:57:06.649 controller-0 collectd[83341]: info alarm notifier influxdb:collectd samples retention policy: {u'duration': u'168h0m0s', u'default': True, u'replicaN': 1, u'name': u'collectd samples'}
2021-03-15T10:57:06.649 controller-0 collectd[83341]: info alarm notifier influxdb:collectd is setup
2021-03-15T10:57:06.649 controller-0 collectd[83341]: info alarm notifier initialization completed
2021-03-15T10:57:06.649 controller-0 collectd[83341]: info Initialization complete, entering read-loop.
2021-03-15T10:57:07.093 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/623835c9-005a-4584-8ad8-af75cc21e13a/volumes/kubernetes.io~secret/calico-node-token-mvgqm.
2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info E0315 10:57:07.060555 83402 kubelet.go:1663] Failed creating a mirror pod for "kube-controller-manager-controller-0_kube-system(beb4cf3721fe7ab7384230d84f609a39)": pods "kube-controller-manager-controller-0" already exists
2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info I0315 10:57:07.088887 83402 reconciler.go:196] operationExecutor.UnmountVolume started for volume "cert" (UniqueName: "kubernetes.io/secret/54590ffb-c482-4020-81e6-c3629037dcf5-cert") pod "54590ffb-c482-4020-81e6-c3629037dcf5" (UID: "54590ffb-c482-4020-81e6-c3629037dcf5")
2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info I0315 10:57:07.088925 83402 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-zzkzx" (UniqueName: "kubernetes.io/secret/54590ffb-c482-4020-81e6-c3629037dcf5-default-token-zzkzx") pod "54590ffb-c482-4020-81e6-c3629037dcf5" (UID: "54590ffb-c482-4020-81e6-c3629037dcf5")
2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info I0315 10:57:07.088941 83402 reconciler.go:196] operationExecutor.UnmountVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/54590ffb-c482-4020-81e6-c3629037dcf5-config") pod "54590ffb-c482-4020-81e6-c3629037dcf5" (UID: "54590ffb-c482-4020-81e6-c3629037dcf5")
2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info W0315 10:57:07.089319 83402 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/54590ffb-c482-4020-81e6-c3629037dcf5/volumes/kubernetes.io~secret/cert: ClearQuota called, but quotas disabled
2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info I0315 10:57:07.089830 83402 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54590ffb-c482-4020-81e6-c3629037dcf5-cert" (OuterVolumeSpecName: "cert") pod "54590ffb-c482-4020-81e6-c3629037dcf5" (UID: "54590ffb-c482-4020-81e6-c3629037dcf5"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info W0315 10:57:07.089920 83402 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/54590ffb-c482-4020-81e6-c3629037dcf5/volumes/kubernetes.io~secret/default-token-zzkzx: ClearQuota called, but quotas disabled
2021-03-15T10:57:07.106 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/2318749d-2064-4647-90f0-f6c8cbf4d753/volumes/kubernetes.io~secret/kube-proxy-token-cj6rp.
2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info I0315 10:57:07.090087 83402 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54590ffb-c482-4020-81e6-c3629037dcf5-default-token-zzkzx" (OuterVolumeSpecName: "default-token-zzkzx") pod "54590ffb-c482-4020-81e6-c3629037dcf5" (UID: "54590ffb-c482-4020-81e6-c3629037dcf5"). InnerVolumeSpecName "default-token-zzkzx". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info W0315 10:57:07.090138 83402 empty_dir.go:453] Warning: Failed to clear quota on /var/lib/kubelet/pods/54590ffb-c482-4020-81e6-c3629037dcf5/volumes/kubernetes.io~configmap/config: ClearQuota called, but quotas disabled
2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info I0315 10:57:07.090643 83402 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54590ffb-c482-4020-81e6-c3629037dcf5-config" (OuterVolumeSpecName: "config") pod "54590ffb-c482-4020-81e6-c3629037dcf5" (UID: "54590ffb-c482-4020-81e6-c3629037dcf5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info I0315 10:57:07.190936 83402 reconciler.go:319] Volume detached for volume "cert" (UniqueName: "kubernetes.io/secret/54590ffb-c482-4020-81e6-c3629037dcf5-cert") on node "controller-0" DevicePath ""
2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info I0315 10:57:07.190969 83402 reconciler.go:319] Volume detached for volume "default-token-zzkzx" (UniqueName: "kubernetes.io/secret/54590ffb-c482-4020-81e6-c3629037dcf5-default-token-zzkzx") on node "controller-0" DevicePath ""
2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info I0315 10:57:07.190981 83402 reconciler.go:319] Volume detached for volume "config" (UniqueName: "kubernetes.io/configmap/54590ffb-c482-4020-81e6-c3629037dcf5-config") on node "controller-0" DevicePath ""
2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info I0315 10:57:07.222540 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 7bfdeff13283248bb337267ccd294cf0471dc93b2ff66a50934a8ee6ab17dc1e
2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info W0315 10:57:07.229096 83402 kuberuntime_container.go:758] No ref for container {"containerd"
"7bfdeff13283248bb337267ccd294cf0471dc93b2ff66a50934a8ee6ab17dc1e"} 2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info I0315 10:57:07.229139 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: d8610117033a808d58406927f59499a5835aeb52be2f06b466d6d077acfd6ad8 2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info W0315 10:57:07.235305 83402 kuberuntime_container.go:758] No ref for container {"containerd" "d8610117033a808d58406927f59499a5835aeb52be2f06b466d6d077acfd6ad8"} 2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info I0315 10:57:07.235355 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 10a157d50f439fb2359f8a4d5886b4069769550f6f98dde65bc7a13cd30bb8e3 2021-03-15T10:57:07.553 controller-0 kubelet[83402]: info W0315 10:57:07.242913 83402 kuberuntime_container.go:758] No ref for container {"containerd" "10a157d50f439fb2359f8a4d5886b4069769550f6f98dde65bc7a13cd30bb8e3"} 2021-03-15T10:57:08.295 controller-0 kubelet[83402]: info I0315 10:57:08.294833 83402 topology_manager.go:233] [topologymanager] Topology Admit Handler 2021-03-15T10:57:08.421 controller-0 kubelet[83402]: info I0315 10:57:08.421775 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/5976a9f2-ed39-4e2a-8de9-183538e6164b-config") pod "platform-deployment-manager-0" (UID: "5976a9f2-ed39-4e2a-8de9-183538e6164b") 2021-03-15T10:57:08.421 controller-0 kubelet[83402]: info I0315 10:57:08.421818 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cert" (UniqueName: "kubernetes.io/secret/5976a9f2-ed39-4e2a-8de9-183538e6164b-cert") pod "platform-deployment-manager-0" (UID: "5976a9f2-ed39-4e2a-8de9-183538e6164b") 2021-03-15T10:57:08.421 controller-0 kubelet[83402]: info I0315 10:57:08.421836 83402 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-zzkzx" 
(UniqueName: "kubernetes.io/secret/5976a9f2-ed39-4e2a-8de9-183538e6164b-default-token-zzkzx") pod "platform-deployment-manager-0" (UID: "5976a9f2-ed39-4e2a-8de9-183538e6164b") 2021-03-15T10:57:08.567 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/5976a9f2-ed39-4e2a-8de9-183538e6164b/volumes/kubernetes.io~secret/cert. 2021-03-15T10:57:08.780 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/5976a9f2-ed39-4e2a-8de9-183538e6164b/volumes/kubernetes.io~secret/default-token-zzkzx. 2021-03-15T10:57:11.253 controller-0 kubelet[83402]: info I0315 10:57:11.252568 83402 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f170b5806f7662cad63052c80f1eceedb43c8d09926f0f535c90c35210631e83 2021-03-15T10:57:13.000 controller-0 k8s-pod-recovery(83413): info : Waiting for systemd to finish booting... 2021-03-15T10:57:16.000 controller-0 ntpd[82287]: info Listen normally on 12 tunl0 172.16.192.64 UDP 123 2021-03-15T10:57:16.000 controller-0 ntpd[82287]: debug new interface(s) found: waking up resolver 2021-03-15T10:57:18.000 controller-0 ntpd[82287]: info Listen normally on 13 cali8c7155b8c25 fe80::ecee:eeff:feee:eeee UDP 123 2021-03-15T10:57:18.000 controller-0 ntpd[82287]: debug new interface(s) found: waking up resolver 2021-03-15T10:57:26.257 controller-0 systemd[1]: info Reloading. 2021-03-15T10:57:26.671 controller-0 systemd[1]: info Reloading. 2021-03-15T10:57:27.801 controller-0 systemd[1]: info Reloading System Logger Daemon. 2021-03-15T10:57:27.867 controller-0 systemd[1]: info Reloaded System Logger Daemon. 2021-03-15T10:57:27.956 controller-0 systemd[1]: info Reloading. 
2021-03-15T10:57:28.000 controller-0 run_docker_login(104030): info : Waiting for registry.local to resolve
2021-03-15T10:57:28.233 controller-0 dockerd[77210]: info time="2021-03-15T10:57:28.232538431Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version 1.0.0-rc10\nspec: 1.0.1-dev\n"
2021-03-15T10:57:28.000 controller-0 run_docker_login(104030): info : docker login to registry.local completed successfully
2021-03-15T10:57:28.000 controller-0 k8s-pod-recovery(83413): info : Waiting for systemd to finish booting...
2021-03-15T10:57:31.817 controller-0 systemd[1]: info Reloading.
2021-03-15T10:57:32.975 controller-0 systemd[1]: info Reloading.
2021-03-15T10:57:33.623 controller-0 systemd[1]: info Reloading.
2021-03-15T10:57:34.213 controller-0 systemd[1]: info Reloading.
2021-03-15T10:57:34.858 controller-0 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked.
2021-03-15T10:57:34.873 controller-0 systemd[1]: info Stopping Kubernetes Pods Recovery Service...
2021-03-15T10:57:34.000 controller-0 k8s-pod-recovery(105858): info : Stopping.
2021-03-15T10:57:34.927 controller-0 systemd[1]: info Stopped Kubernetes Pods Recovery Service.
2021-03-15T10:57:34.931 controller-0 systemd[1]: info Stopping Kubernetes Kubelet Server...
2021-03-15T10:57:34.985 controller-0 systemd[1]: info Stopped Kubernetes Kubelet Server.
2021-03-15T10:57:34.991 controller-0 systemd[1]: info Starting Kubernetes Kubelet Server...
2021-03-15T10:57:34.000 controller-0 root: info /usr/bin/kubelet-cgroup-setup.sh(105871): Nothing to do, already configured: /sys/fs/cgroup/pids/k8s-infra.
2021-03-15T10:57:34.000 controller-0 root: info /usr/bin/kubelet-cgroup-setup.sh(105871): Nothing to do, already configured: /sys/fs/cgroup/hugetlb/k8s-infra.
2021-03-15T10:57:34.000 controller-0 root: info /usr/bin/kubelet-cgroup-setup.sh(105871): Nothing to do, already configured: /sys/fs/cgroup/cpuset/k8s-infra.
2021-03-15T10:57:35.002 controller-0 systemd[1]: info Started Kubernetes Kubelet Server.
2021-03-15T10:57:35.007 controller-0 systemd[1]: info Started Kubernetes Pods Recovery Service.
2021-03-15T10:57:35.000 controller-0 k8s-pod-recovery(105887): info : Starting.
2021-03-15T10:57:35.000 controller-0 k8s-pod-recovery(105887): info : Waiting for systemd to finish booting...
2021-03-15T10:57:35.066 controller-0 systemd[1]: info Started Kubernetes systemd probe.
2021-03-15T10:57:35.562 controller-0 kubelet[105877]: info Flag --cpu-manager-policy has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
2021-03-15T10:57:35.562 controller-0 kubelet[105877]: info Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
2021-03-15T10:57:35.562 controller-0 kubelet[105877]: info Flag --cpu-manager-policy has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
2021-03-15T10:57:35.562 controller-0 kubelet[105877]: info Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
2021-03-15T10:57:35.562 controller-0 kubelet[105877]: info I0315 10:57:35.072092 105877 server.go:417] Version: v1.18.1
2021-03-15T10:57:35.562 controller-0 kubelet[105877]: info I0315 10:57:35.072286 105877 plugins.go:100] No cloud provider specified.
2021-03-15T10:57:35.562 controller-0 kubelet[105877]: info I0315 10:57:35.072310 105877 server.go:837] Client rotation is on, will bootstrap in background
2021-03-15T10:57:35.562 controller-0 kubelet[105877]: info I0315 10:57:35.073772 105877 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
2021-03-15T10:57:35.593 controller-0 dockerd[77210]: info time="2021-03-15T10:57:35.587062189Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version 1.0.0-rc10\nspec: 1.0.1-dev\n"
2021-03-15T10:57:35.601 controller-0 dockerd[77210]: info time="2021-03-15T10:57:35.601602144Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version 1.0.0-rc10\nspec: 1.0.1-dev\n"
2021-03-15T10:57:35.621 controller-0 kubelet[105877]: info I0315 10:57:35.619544 105877 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: [k8s-infra]
2021-03-15T10:57:35.621 controller-0 kubelet[105877]: info I0315 10:57:35.619563 105877 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/k8s-infra CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[memory:{i:{value:4823449600 scale:0} d:{Dec:} s: Format:BinarySI}] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
2021-03-15T10:57:35.621 controller-0 kubelet[105877]: info I0315 10:57:35.619645 105877 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
2021-03-15T10:57:35.621 controller-0 kubelet[105877]: info I0315 10:57:35.619651 105877 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
2021-03-15T10:57:35.621 controller-0 kubelet[105877]: info I0315 10:57:35.619655 105877 container_manager_linux.go:306] Creating device plugin manager: true
2021-03-15T10:57:35.621 controller-0 kubelet[105877]: info I0315 10:57:35.619788 105877 remote_runtime.go:59] parsed scheme: ""
2021-03-15T10:57:35.621 controller-0 kubelet[105877]: info I0315 10:57:35.619795 105877 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
2021-03-15T10:57:35.621 controller-0 kubelet[105877]: info I0315 10:57:35.619819 105877 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/containerd/containerd.sock 0 }] }
2021-03-15T10:57:35.621 controller-0 kubelet[105877]: info I0315 10:57:35.619825 105877 clientconn.go:933] ClientConn switching balancer to "pick_first"
2021-03-15T10:57:35.621 controller-0 kubelet[105877]: info I0315 10:57:35.619850 105877 remote_image.go:50] parsed scheme: ""
2021-03-15T10:57:35.621 controller-0 kubelet[105877]: info I0315 10:57:35.619855 105877 remote_image.go:50] scheme "" not registered, fallback to default scheme
2021-03-15T10:57:35.621 controller-0 kubelet[105877]: info I0315 10:57:35.619860 105877 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/containerd/containerd.sock 0 }] }
2021-03-15T10:57:35.621 controller-0 kubelet[105877]: info I0315 10:57:35.619866 105877 clientconn.go:933] ClientConn switching balancer to "pick_first"
2021-03-15T10:57:35.621 controller-0 kubelet[105877]: info I0315 10:57:35.619888 105877 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
2021-03-15T10:57:35.621 controller-0 kubelet[105877]: info I0315 10:57:35.619905 105877 kubelet.go:317] Watching apiserver
2021-03-15T10:57:36.588 controller-0 worker_config[83417]: info [DONE]
2021-03-15T10:57:37.614 controller-0 systemd[1]: info Started STX worker config gate.
2021-03-15T10:57:37.624 controller-0 systemd[1]: info Started Serial Getty on ttyS0.
2021-03-15T10:57:37.633 controller-0 systemd[1]: info Started Getty on tty1.
2021-03-15T10:57:37.637 controller-0 systemd[1]: info Reached target Login Prompts.
2021-03-15T10:57:37.654 controller-0 systemd[1]: info Starting StarlingX Maintenance Worker Goenable Ready...
2021-03-15T10:57:37.673 controller-0 goenabledWorker[106064]: info Goenabled Ready: [ OK ]
2021-03-15T10:57:37.676 controller-0 systemd[1]: info Started StarlingX Maintenance Worker Goenable Ready.
2021-03-15T10:57:37.688 controller-0 systemd[1]: info Reached target Multi-User System.
2021-03-15T10:57:37.709 controller-0 systemd[1]: info Starting Update UTMP about System Runlevel Changes...
2021-03-15T10:57:37.734 controller-0 systemd[1]: info Started Update UTMP about System Runlevel Changes.
2021-03-15T10:57:37.742 controller-0 systemd[1]: info Startup finished in 46.307s (kernel) + 1min 23.367s (initrd) + 3min 47.608s (userspace) = 5min 57.282s.
2021-03-15T10:57:39.598 controller-0 collectd[83341]: info interface plugin configuration completed
2021-03-15T10:57:39.598 controller-0 collectd[83341]: info interface plugin initialization completed
2021-03-15T10:57:39.598 controller-0 collectd[83341]: info ovs interface plugin configuration completed
2021-03-15T10:57:39.598 controller-0 collectd[83341]: info ovs interface plugin waiting for ovs-vswitchd to be running
2021-03-15T10:57:39.777 controller-0 systemd[1]: notice collectd.service: main process exited, code=killed, status=9/KILL
2021-03-15T10:57:39.779 controller-0 systemd[1]: notice Unit collectd.service entered failed state.
2021-03-15T10:57:39.779 controller-0 systemd[1]: warning collectd.service failed.
2021-03-15T10:57:39.782 controller-0 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked.
2021-03-15T10:57:39.784 controller-0 systemd[1]: info Starting Collectd statistics daemon and extension services...
2021-03-15T10:57:39.786 controller-0 collectd[106099]: info plugin_load: plugin "network" successfully loaded.
2021-03-15T10:57:39.786 controller-0 collectd[106099]: info plugin_load: plugin "python" successfully loaded.
2021-03-15T10:57:40.756 controller-0 collectd[106099]: info plugin_load: plugin "threshold" successfully loaded.
2021-03-15T10:57:40.756 controller-0 collectd[106099]: info plugin_load: plugin "df" successfully loaded.
2021-03-15T10:57:40.762 controller-0 collectd[106099]: info platform cpu usage plugin debug=False, verbose=True
2021-03-15T10:57:40.762 controller-0 collectd[106099]: info platform memory usage: debug=False, verbose=True
2021-03-15T10:57:40.762 controller-0 collectd[106099]: info interface plugin configured by config file [http://localhost:2122/mtce/lmon]
2021-03-15T10:57:40.762 controller-0 collectd[106099]: info Systemd detected, trying to signal readyness.
2021-03-15T10:57:40.762 controller-0 collectd[106099]: info remote logging server configuration completed
2021-03-15T10:57:40.763 controller-0 collectd[106099]: info remote logging server initialization completed
2021-03-15T10:57:40.763 controller-0 collectd[106099]: info ovs interface plugin configuration completed
2021-03-15T10:57:40.763 controller-0 collectd[106099]: info ovs interface plugin waiting for ovs-vswitchd to be running
2021-03-15T10:57:40.763 controller-0 collectd[106099]: info interface plugin configuration completed
2021-03-15T10:57:40.763 controller-0 collectd[106099]: info interface plugin initialization completed
2021-03-15T10:57:40.763 controller-0 collectd[106099]: info ptp plugin configuration completed
2021-03-15T10:57:40.763 controller-0 collectd[106099]: info ptp plugin failed to get Timestamping Mode
2021-03-15T10:57:40.768 controller-0 systemd[1]: info Started Collectd statistics daemon and extension services.
2021-03-15T10:57:41.267 controller-0 kubelet[105877]: info E0315 10:57:41.267275 105877 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
2021-03-15T10:57:41.267 controller-0 kubelet[105877]: info For verbose messaging see aws.Config.CredentialsChainVerboseErrors
2021-03-15T10:57:41.268 controller-0 kubelet[105877]: info I0315 10:57:41.267844 105877 kuberuntime_manager.go:211] Container runtime containerd initialized, version: v1.3.3, apiVersion: v1alpha2
2021-03-15T10:57:41.268 controller-0 kubelet[105877]: info I0315 10:57:41.268148 105877 server.go:1125] Started kubelet
2021-03-15T10:57:41.285 controller-0 kubelet[105877]: info I0315 10:57:41.280826 105877 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
2021-03-15T10:57:41.285 controller-0 kubelet[105877]: info I0315 10:57:41.281225 105877 volume_manager.go:265] Starting Kubelet Volume Manager
2021-03-15T10:57:41.285 controller-0 kubelet[105877]: info I0315 10:57:41.281499 105877 desired_state_of_world_populator.go:139] Desired state populator starts to run
2021-03-15T10:57:41.285 controller-0 kubelet[105877]: info I0315 10:57:41.281791 105877 server.go:145] Starting to listen on 0.0.0.0:10250
2021-03-15T10:57:41.285 controller-0 kubelet[105877]: info I0315 10:57:41.282288 105877 server.go:393] Adding debug handlers to kubelet server.
2021-03-15T10:57:41.285 controller-0 kubelet[105877]: info E0315 10:57:41.285782 105877 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/docker/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
2021-03-15T10:57:41.285 controller-0 kubelet[105877]: info E0315 10:57:41.285805 105877 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
2021-03-15T10:57:41.286 controller-0 kubelet[105877]: info I0315 10:57:41.286161 105877 clientconn.go:106] parsed scheme: "unix"
2021-03-15T10:57:41.286 controller-0 kubelet[105877]: info I0315 10:57:41.286178 105877 clientconn.go:106] scheme "unix" not registered, fallback to default scheme
2021-03-15T10:57:41.286 controller-0 kubelet[105877]: info I0315 10:57:41.286236 105877 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }
2021-03-15T10:57:41.286 controller-0 kubelet[105877]: info I0315 10:57:41.286243 105877 clientconn.go:933] ClientConn switching balancer to "pick_first"
2021-03-15T10:57:41.303 controller-0 kubelet[105877]: info E0315 10:57:41.300086 105877 cri_stats_provider.go:375] Failed to get the info of the filesystem with mountpoint "/var/lib/docker/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
2021-03-15T10:57:41.303 controller-0 kubelet[105877]: info E0315 10:57:41.300128 105877 kubelet.go:1301] Image garbage collection failed multiple times in a row: invalid capacity 0 on image filesystem
2021-03-15T10:57:41.591 controller-0 kubelet[105877]: info I0315 10:57:41.388127 105877 kuberuntime_manager.go:978] updating runtime config through cri with podcidr 172.16.0.0/24
2021-03-15T10:57:41.592 controller-0 kubelet[105877]: info I0315 10:57:41.592091 105877 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
2021-03-15T10:57:41.592 controller-0 kubelet[105877]: info I0315 10:57:41.592820 105877 status_manager.go:158] Starting to sync pod status with apiserver
2021-03-15T10:57:41.592 controller-0 kubelet[105877]: info I0315 10:57:41.592856 105877 kubelet.go:1821] Starting kubelet main sync loop.
2021-03-15T10:57:41.592 controller-0 kubelet[105877]: info E0315 10:57:41.592902 105877 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
2021-03-15T10:57:41.596 controller-0 kubelet[105877]: info I0315 10:57:41.594608 105877 kubelet_network.go:77] Setting Pod CIDR: -> 172.16.0.0/24
2021-03-15T10:57:41.596 controller-0 kubelet[105877]: info I0315 10:57:41.595817 105877 kubelet_node_status.go:70] Attempting to register node controller-0
2021-03-15T10:57:41.613 controller-0 dockerd[77210]: info time="2021-03-15T10:57:41.613083035Z" level=warning msg="failed to retrieve runc version: unknown output format: runc version 1.0.0-rc10\nspec: 1.0.1-dev\n"
2021-03-15T10:57:41.659 controller-0 kubelet[105877]: info I0315 10:57:41.659110 105877 kubelet_node_status.go:112] Node controller-0 was previously registered
2021-03-15T10:57:41.659 controller-0 kubelet[105877]: info I0315 10:57:41.659193 105877 kubelet_node_status.go:73] Successfully registered node controller-0
2021-03-15T10:57:41.679 controller-0 kubelet[105877]: info I0315 10:57:41.679875 105877 setters.go:559] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2021-03-15 10:57:41.679849484 +0000 UTC m=+6.675315526 LastTransitionTime:2021-03-15 10:57:41.679849484 +0000 UTC m=+6.675315526 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
2021-03-15T10:57:41.699 controller-0 kubelet[105877]: info E0315 10:57:41.699432 105877 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
2021-03-15T10:57:41.916 controller-0 kubelet[105877]: info E0315 10:57:41.913453 105877 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info ptp plugin controller-0 is virtual
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info ptp plugin initialization completed
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info NTP query plugin configuration completed
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info NTP query plugin node ready count 1 of 3
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info platform memory usage configuration completed
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info platform memory usage: init function for controller-0
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info platform memory usage: strict_memory_accounting: False
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info platform memory usage: reserve_all: False, reserved_MiB: 4600
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info platform memory usage initialization completed
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info platform cpu usage plugin configuration completed
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info platform cpu usage plugin node ready count 1 of 3
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info alarm notifier mtce port 2101
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info alarm notifier controller:controller-0 init function
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info plugin path: /etc/collectd.d/starlingx/
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info alarm notifier monitoring Platform CPU usage
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info alarm notifier monitoring Memory usage
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info alarm notifier monitoring / usage
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info alarm notifier monitoring /tmp usage
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info alarm notifier monitoring /dev usage
2021-03-15T10:57:41.918 controller-0 collectd[106099]: info alarm notifier monitoring /dev/shm usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /var/run usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /var/log usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /var/lock usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /boot usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /scratch usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /opt/etcd usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /opt/platform usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /opt/extension usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /var/lib/rabbitmq usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /var/lib/postgresql usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /var/lib/ceph/mon usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /var/lib/docker usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /var/lib/docker-distribution usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /var/lib/kubelet usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /var/lib/nova/instances usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier monitoring /opt/backups usage
2021-03-15T10:57:41.919 controller-0 collectd[106099]: info alarm notifier setting up influxdb:collectd database
2021-03-15T10:57:41.930 controller-0 kubelet[105877]: info I0315 10:57:41.928979 105877 cpu_manager.go:184] [cpumanager] starting with none policy
2021-03-15T10:57:41.930 controller-0 kubelet[105877]: info I0315 10:57:41.928995 105877 cpu_manager.go:185] [cpumanager] reconciling every 10s
2021-03-15T10:57:41.930 controller-0 kubelet[105877]: info I0315 10:57:41.929012 105877 state_mem.go:36] [cpumanager] initializing new in-memory state store
2021-03-15T10:57:41.930 controller-0 kubelet[105877]: info I0315 10:57:41.929202 105877 state_mem.go:88] [cpumanager] updated default cpuset: ""
2021-03-15T10:57:41.930 controller-0 kubelet[105877]: info I0315 10:57:41.929211 105877 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
2021-03-15T10:57:41.930 controller-0 kubelet[105877]: info I0315 10:57:41.929220 105877 policy_none.go:43] [cpumanager] none policy: Start
2021-03-15T10:57:41.932 controller-0 kubelet[105877]: info I0315 10:57:41.931271 105877 plugin_manager.go:114] Starting Kubelet Plugin Manager
2021-03-15T10:57:41.934 controller-0 collectd[106099]: info alarm notifier influxdb:collectd database already exists
2021-03-15T10:57:41.947 controller-0 collectd[106099]: info alarm notifier influxdb:collectd retention policy already exists
2021-03-15T10:57:41.968 controller-0 collectd[106099]: info alarm notifier influxdb:collectd samples retention policy: {u'duration': u'168h0m0s', u'default': True, u'replicaN': 1, u'name': u'collectd samples'}
2021-03-15T10:57:41.968 controller-0 collectd[106099]: info alarm notifier influxdb:collectd is setup
2021-03-15T10:57:41.968 controller-0 collectd[106099]: info alarm notifier initialization completed
2021-03-15T10:57:41.968 controller-0 collectd[106099]: info Initialization complete, entering read-loop.
2021-03-15T10:57:41.968 controller-0 collectd[106099]: info platform cpu usage plugin node ready count 2 of 3
2021-03-15T10:57:41.968 controller-0 collectd[106099]: info NTP query plugin node ready count 2 of 3
2021-03-15T10:57:41.968 controller-0 collectd[106099]: info interface plugin node ready count 1 of 3
2021-03-15T10:57:41.968 controller-0 collectd[106099]: info remote logging server node ready count 1 of 3
2021-03-15T10:57:41.968 controller-0 collectd[106099]: info alarm notifier configuration completed
2021-03-15T10:57:41.968 controller-0 collectd[106099]: info alarm notifier controller-0 not ready ; from:controller-0:df:dev
2021-03-15T10:57:41.968 controller-0 collectd[106099]: info alarm notifier node ready count 1 of 3
2021-03-15T10:57:41.968 controller-0 collectd[106099]: info alarm notifier controller-0 not ready ; from:controller-0:df:dev-shm
2021-03-15T10:57:41.968 controller-0 collectd[106099]: info alarm notifier node ready count 2 of 3
2021-03-15T10:57:41.968 controller-0 collectd[106099]: info alarm notifier controller-0 not ready ; from:controller-0:df:root
2021-03-15T10:57:42.321 controller-0 kubelet[105877]: info I0315 10:57:42.317210 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:57:42.321 controller-0 kubelet[105877]: info I0315 10:57:42.318351 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:57:42.321 controller-0 kubelet[105877]: info I0315 10:57:42.320866 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:57:42.323 controller-0 kubelet[105877]: info W0315 10:57:42.322025 105877 pod_container_deletor.go:77] Container "0d1d47940ae47e844df6ebd02c0b223d97cf5875872fcee82ee5d52a3eb520b9" not found in pod's containers
2021-03-15T10:57:42.323 controller-0 kubelet[105877]: info I0315 10:57:42.322100 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:57:42.330 controller-0 kubelet[105877]: info I0315 10:57:42.330132 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:57:42.333 controller-0 kubelet[105877]: info I0315 10:57:42.333120 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:57:42.339 controller-0 kubelet[105877]: info W0315 10:57:42.338456 105877 pod_container_deletor.go:77] Container "f057bfcd4360dd381874100891e5c64299a3e396ea88a0e1c0cbd48299ce4423" not found in pod's containers
2021-03-15T10:57:42.339 controller-0 kubelet[105877]: info W0315 10:57:42.338582 105877 pod_container_deletor.go:77] Container "348fb49faf663ea444e445c35dda1c2209221159422ca63d771f121375f14cc3" not found in pod's containers
2021-03-15T10:57:42.339 controller-0 kubelet[105877]: info W0315 10:57:42.338612 105877 pod_container_deletor.go:77] Container "48c3bb4b18802cf072b1942488bc1bed9e7ee4d066d4d41f5dbc1d79b00e0f20" not found in pod's containers
2021-03-15T10:57:42.339 controller-0 kubelet[105877]: info W0315 10:57:42.338638 105877 pod_container_deletor.go:77] Container "db9913b7588565ebb4d93fc4f41993f0373ccab00c816ad12ee8f89db1ca9824" not found in pod's containers
2021-03-15T10:57:42.339 controller-0 kubelet[105877]: info W0315 10:57:42.338665 105877 pod_container_deletor.go:77] Container "e48264665bbc2abd32fae8a68e7a097ab423c025fbbcf449e614f985a8affe18" not found in pod's containers
2021-03-15T10:57:42.339 controller-0 kubelet[105877]: info W0315 10:57:42.338700 105877 pod_container_deletor.go:77] Container "e2c0e7aca89162814240097fdf0a666799dc6c39b5f0db4c43263e9a3b5bc611" not found in pod's containers
2021-03-15T10:57:42.374 controller-0 kubelet[105877]: info E0315 10:57:42.374622 105877 kubelet.go:1663] Failed creating a mirror pod for "kube-controller-manager-controller-0_kube-system(beb4cf3721fe7ab7384230d84f609a39)": pods "kube-controller-manager-controller-0" already exists
2021-03-15T10:57:42.374 controller-0 kubelet[105877]: info E0315 10:57:42.374696 105877 kubelet.go:1663] Failed creating a mirror
pod for "kube-apiserver-controller-0_kube-system(eea3f7ab53a44b935832ed67b7d00029)": pods "kube-apiserver-controller-0" already exists 2021-03-15T10:57:42.393 controller-0 kubelet[105877]: info E0315 10:57:42.390662 105877 kubelet.go:1663] Failed creating a mirror pod for "kube-scheduler-controller-0_kube-system(04abb2ef72685c7615231f0f216c924e)": pods "kube-scheduler-controller-0" already exists 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.415855 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "encryption-config" (UniqueName: "kubernetes.io/host-path/eea3f7ab53a44b935832ed67b7d00029-encryption-config") pod "kube-apiserver-controller-0" (UID: "eea3f7ab53a44b935832ed67b7d00029") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.415891 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/beb4cf3721fe7ab7384230d84f609a39-flexvolume-dir") pod "kube-controller-manager-controller-0" (UID: "beb4cf3721fe7ab7384230d84f609a39") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.415908 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/2318749d-2064-4647-90f0-f6c8cbf4d753-kube-proxy") pod "kube-proxy-vnhrw" (UID: "2318749d-2064-4647-90f0-f6c8cbf4d753") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.415924 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "host-local-net-dir" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-host-local-net-dir") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.415938 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started 
for volume "var-run-calico" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-var-run-calico") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.415950 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/eea3f7ab53a44b935832ed67b7d00029-k8s-certs") pod "kube-apiserver-controller-0" (UID: "eea3f7ab53a44b935832ed67b7d00029") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.415962 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/beb4cf3721fe7ab7384230d84f609a39-k8s-certs") pod "kube-controller-manager-controller-0" (UID: "beb4cf3721fe7ab7384230d84f609a39") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.415973 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-lib-modules") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.415985 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-xtables-lock") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.415997 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-node-token-mvgqm" (UniqueName: "kubernetes.io/secret/623835c9-005a-4584-8ad8-af75cc21e13a-calico-node-token-mvgqm") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:42.416 controller-0 
kubelet[105877]: info I0315 10:57:42.416009 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/eea3f7ab53a44b935832ed67b7d00029-ca-certs") pod "kube-apiserver-controller-0" (UID: "eea3f7ab53a44b935832ed67b7d00029") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416020 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cert" (UniqueName: "kubernetes.io/secret/5976a9f2-ed39-4e2a-8de9-183538e6164b-cert") pod "platform-deployment-manager-0" (UID: "5976a9f2-ed39-4e2a-8de9-183538e6164b") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416031 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/eea3f7ab53a44b935832ed67b7d00029-etc-pki") pod "kube-apiserver-controller-0" (UID: "eea3f7ab53a44b935832ed67b7d00029") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416043 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/04abb2ef72685c7615231f0f216c924e-kubeconfig") pod "kube-scheduler-controller-0" (UID: "04abb2ef72685c7615231f0f216c924e") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416055 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-cj6rp" (UniqueName: "kubernetes.io/secret/2318749d-2064-4647-90f0-f6c8cbf4d753-kube-proxy-token-cj6rp") pod "kube-proxy-vnhrw" (UID: "2318749d-2064-4647-90f0-f6c8cbf4d753") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416067 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-zzkzx" (UniqueName: "kubernetes.io/secret/5976a9f2-ed39-4e2a-8de9-183538e6164b-default-token-zzkzx") 
pod "platform-deployment-manager-0" (UID: "5976a9f2-ed39-4e2a-8de9-183538e6164b") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416086 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-net-dir" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-cni-net-dir") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416100 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "policysync" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-policysync") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416112 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/beb4cf3721fe7ab7384230d84f609a39-etc-pki") pod "kube-controller-manager-controller-0" (UID: "beb4cf3721fe7ab7384230d84f609a39") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416123 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/2318749d-2064-4647-90f0-f6c8cbf4d753-xtables-lock") pod "kube-proxy-vnhrw" (UID: "2318749d-2064-4647-90f0-f6c8cbf4d753") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416136 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin-dir" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-cni-bin-dir") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416148 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume 
"flexvol-driver-host" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-flexvol-driver-host") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416161 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib-calico" (UniqueName: "kubernetes.io/host-path/623835c9-005a-4584-8ad8-af75cc21e13a-var-lib-calico") pod "calico-node-bbjjj" (UID: "623835c9-005a-4584-8ad8-af75cc21e13a") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416172 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/beb4cf3721fe7ab7384230d84f609a39-ca-certs") pod "kube-controller-manager-controller-0" (UID: "beb4cf3721fe7ab7384230d84f609a39") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416183 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/beb4cf3721fe7ab7384230d84f609a39-kubeconfig") pod "kube-controller-manager-controller-0" (UID: "beb4cf3721fe7ab7384230d84f609a39") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416194 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/2318749d-2064-4647-90f0-f6c8cbf4d753-lib-modules") pod "kube-proxy-vnhrw" (UID: "2318749d-2064-4647-90f0-f6c8cbf4d753") 2021-03-15T10:57:42.416 controller-0 kubelet[105877]: info I0315 10:57:42.416205 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/configmap/5976a9f2-ed39-4e2a-8de9-183538e6164b-config") pod "platform-deployment-manager-0" (UID: "5976a9f2-ed39-4e2a-8de9-183538e6164b") 2021-03-15T10:57:42.416 controller-0 
kubelet[105877]: info I0315 10:57:42.416210 105877 reconciler.go:157] Reconciler: start to sync state 2021-03-15T10:57:42.698 controller-0 collectd[106099]: info alarm notifier node ready 2021-03-15T10:57:42.700 controller-0 collectd[106099]: info alarm notifier reading: 0.01 % usage - /tmp 2021-03-15T10:57:42.700 controller-0 collectd[106099]: info alarm notifier reading: 0.09 % usage - /var/lib/kubelet 2021-03-15T10:57:42.700 controller-0 collectd[106099]: info degrade notifier controller ip: 192.168.204.1 2021-03-15T10:57:42.700 controller-0 collectd[106099]: info alarm notifier reading: 0.02 % usage - /opt/backups 2021-03-15T10:57:42.700 controller-0 collectd[106099]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-0","degrade":"clear","resource":""} 2021-03-15T10:57:42.700 controller-0 collectd[106099]: info alarm notifier reading: 26.08 % usage - /boot 2021-03-15T10:57:42.700 controller-0 collectd[106099]: info alarm notifier reading: 3.22 % usage - /var/log 2021-03-15T10:57:42.700 controller-0 collectd[106099]: info alarm notifier reading: 0.30 % usage - /scratch 2021-03-15T10:57:42.701 controller-0 collectd[106099]: info alarm notifier reading: 0.37 % usage - /var/lib/nova/instances 2021-03-15T10:57:42.701 controller-0 collectd[106099]: info alarm notifier reading: 23.94 % usage - /var/lib/docker 2021-03-15T10:57:42.735 controller-0 systemd[1]: info Reloading. 2021-03-15T10:57:50.410 controller-0 k8s-pod-recovery[105887]: info No resources found 2021-03-15T10:57:50.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 0 seconds. 
2021-03-15T10:57:50.761 controller-0 collectd[106099]: info interface plugin node ready count 2 of 3 
2021-03-15T10:57:54.872 controller-0 kubelet[105877]: info I0315 10:57:54.872758 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:57:54.886 controller-0 kubelet[105877]: info I0315 10:57:54.884808 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:57:54.912 controller-0 kubelet[105877]: info I0315 10:57:54.912720 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:57:54.919 controller-0 kubelet[105877]: info I0315 10:57:54.914151 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:57:54.924 controller-0 kubelet[105877]: info I0315 10:57:54.923992 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:57:54.927 controller-0 kubelet[105877]: info I0315 10:57:54.927741 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:57:54.928 controller-0 kubelet[105877]: info I0315 10:57:54.928678 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:57:54.929 controller-0 kubelet[105877]: info I0315 10:57:54.929566 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:57:54.975 controller-0 kubelet[105877]: info I0315 10:57:54.963927 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:57:54.975 controller-0 kubelet[105877]: info I0315 10:57:54.968493 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:57:54.975 controller-0 kubelet[105877]: info I0315 10:57:54.969098 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:57:54.975 controller-0 kubelet[105877]: info I0315 10:57:54.969634 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:57:54.982 controller-0 kubelet[105877]: info I0315 10:57:54.979236 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:57:54.995 controller-0 kubelet[105877]: info I0315 10:57:54.991609 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:57:55.004 controller-0 kubelet[105877]: info I0315 10:57:55.003328 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/9e84b7ab-5ee3-4cdc-8c60-9c4dcb900b6b-default-token-fxs22") pod "test-7484bfb64b-zk9kb" (UID: "9e84b7ab-5ee3-4cdc-8c60-9c4dcb900b6b") 
2021-03-15T10:57:55.004 controller-0 kubelet[105877]: info I0315 10:57:55.003354 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/6c3213e3-073b-4a5b-805d-19d5e6e1355d-default-token-fxs22") pod "test-7484bfb64b-j982c" (UID: "6c3213e3-073b-4a5b-805d-19d5e6e1355d") 
2021-03-15T10:57:55.004 controller-0 kubelet[105877]: info I0315 10:57:55.003372 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cnibin" (UniqueName: "kubernetes.io/host-path/d87c220c-1398-46ec-8111-2044758fb9ec-cnibin") pod "kube-sriov-cni-ds-amd64-slfnv" (UID: "d87c220c-1398-46ec-8111-2044758fb9ec") 
2021-03-15T10:57:55.004 controller-0 kubelet[105877]: info I0315 10:57:55.003388 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/333220c4-992b-4bf7-ad55-17b03e493120-default-token-fxs22") pod "test-7484bfb64b-f59jr" (UID: "333220c4-992b-4bf7-ad55-17b03e493120") 
2021-03-15T10:57:55.004 controller-0 kubelet[105877]: info I0315 10:57:55.003401 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-dvbg8" (UniqueName: "kubernetes.io/secret/d87c220c-1398-46ec-8111-2044758fb9ec-default-token-dvbg8") pod "kube-sriov-cni-ds-amd64-slfnv" (UID: "d87c220c-1398-46ec-8111-2044758fb9ec") 
2021-03-15T10:57:55.004 controller-0 kubelet[105877]: info I0315 10:57:55.003414 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/a7b32821-826d-4ad2-8a48-be9c151404bb-default-token-fxs22") pod "test-7484bfb64b-m6sl7" (UID: "a7b32821-826d-4ad2-8a48-be9c151404bb") 
2021-03-15T10:57:55.004 controller-0 kubelet[105877]: info I0315 10:57:55.003426 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/b6abc015-7398-4152-b33a-654c7f5b34b1-default-token-fxs22") pod "test-7484bfb64b-pfzz5" (UID: "b6abc015-7398-4152-b33a-654c7f5b34b1") 
2021-03-15T10:57:55.004 controller-0 kubelet[105877]: info I0315 10:57:55.003439 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/3d34b63d-f194-4cd5-92d3-5239582b24a1-default-token-fxs22") pod "test-7484bfb64b-np7bp" (UID: "3d34b63d-f194-4cd5-92d3-5239582b24a1") 
2021-03-15T10:57:55.004 controller-0 kubelet[105877]: info I0315 10:57:55.003452 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/b5faadb4-b4f2-4a95-9b8c-3b5ae4060e50-default-token-fxs22") pod "test-7484bfb64b-w8p7n" (UID: "b5faadb4-b4f2-4a95-9b8c-3b5ae4060e50") 
2021-03-15T10:57:55.004 controller-0 kubelet[105877]: info I0315 10:57:55.003465 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/4c110c70-3629-4c2b-8829-847e0705b394-default-token-fxs22") pod "test-7484bfb64b-g7mcc" (UID: "4c110c70-3629-4c2b-8829-847e0705b394") 
2021-03-15T10:57:55.004 controller-0 kubelet[105877]: info I0315 10:57:55.003478 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/a65c23f2-4544-492a-a335-9ef07bcaf114-default-token-fxs22") pod "test-7484bfb64b-tsswj" (UID: "a65c23f2-4544-492a-a335-9ef07bcaf114") 
2021-03-15T10:57:55.004 controller-0 kubelet[105877]: info I0315 10:57:55.003490 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/8740e02b-692e-43b2-ae98-881f21d8381f-default-token-fxs22") pod "test-7484bfb64b-pd8h4" (UID: "8740e02b-692e-43b2-ae98-881f21d8381f") 
2021-03-15T10:57:55.004 controller-0 kubelet[105877]: info I0315 10:57:55.003504 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/92febbca-a535-419d-914f-e57ccd412cee-default-token-fxs22") pod "test-7484bfb64b-dt6tf" (UID: "92febbca-a535-419d-914f-e57ccd412cee") 
2021-03-15T10:57:55.004 controller-0 kubelet[105877]: info I0315 10:57:55.003517 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ic-nginx-ingress-token-lckbb" (UniqueName: "kubernetes.io/secret/fbfae784-08b3-4845-b630-dbf30064f1ab-ic-nginx-ingress-token-lckbb") pod "ic-nginx-ingress-controller-jj92v" (UID: "fbfae784-08b3-4845-b630-dbf30064f1ab") 
2021-03-15T10:57:55.004 controller-0 kubelet[105877]: info I0315 10:57:55.003529 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/b48f7f15-7bad-4f71-bfd2-6ac35df9a468-default-token-fxs22") pod "test-7484bfb64b-9bghk" (UID: "b48f7f15-7bad-4f71-bfd2-6ac35df9a468") 
2021-03-15T10:57:55.026 controller-0 kubelet[105877]: info I0315 10:57:55.026023 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:57:55.104 controller-0 kubelet[105877]: info I0315 10:57:55.103818 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "multus-token-5xrsp" (UniqueName: "kubernetes.io/secret/4d86e53a-decd-45e2-87f3-c63727399959-multus-token-5xrsp") pod "kube-multus-ds-amd64-psg2c" (UID: "4d86e53a-decd-45e2-87f3-c63727399959") 
2021-03-15T10:57:55.104 controller-0 kubelet[105877]: info I0315 10:57:55.103877 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni" (UniqueName: "kubernetes.io/host-path/4d86e53a-decd-45e2-87f3-c63727399959-cni") pod "kube-multus-ds-amd64-psg2c" (UID: "4d86e53a-decd-45e2-87f3-c63727399959") 
2021-03-15T10:57:55.104 controller-0 kubelet[105877]: info I0315 10:57:55.103949 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cnibin" (UniqueName: "kubernetes.io/host-path/4d86e53a-decd-45e2-87f3-c63727399959-cnibin") pod "kube-multus-ds-amd64-psg2c" (UID: "4d86e53a-decd-45e2-87f3-c63727399959") 
2021-03-15T10:57:55.104 controller-0 kubelet[105877]: info I0315 10:57:55.103962 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "multus-cfg" (UniqueName: "kubernetes.io/configmap/4d86e53a-decd-45e2-87f3-c63727399959-multus-cfg") pod "kube-multus-ds-amd64-psg2c" (UID: "4d86e53a-decd-45e2-87f3-c63727399959") 
2021-03-15T10:57:55.139 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b5faadb4-b4f2-4a95-9b8c-3b5ae4060e50/volumes/kubernetes.io~secret/default-token-fxs22. 
2021-03-15T10:57:55.214 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/3d34b63d-f194-4cd5-92d3-5239582b24a1/volumes/kubernetes.io~secret/default-token-fxs22. 
2021-03-15T10:57:55.215 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b6abc015-7398-4152-b33a-654c7f5b34b1/volumes/kubernetes.io~secret/default-token-fxs22. 
2021-03-15T10:57:55.216 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/d87c220c-1398-46ec-8111-2044758fb9ec/volumes/kubernetes.io~secret/default-token-dvbg8. 
2021-03-15T10:57:55.217 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/333220c4-992b-4bf7-ad55-17b03e493120/volumes/kubernetes.io~secret/default-token-fxs22. 
2021-03-15T10:57:55.218 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/a7b32821-826d-4ad2-8a48-be9c151404bb/volumes/kubernetes.io~secret/default-token-fxs22. 
2021-03-15T10:57:55.218 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/9e84b7ab-5ee3-4cdc-8c60-9c4dcb900b6b/volumes/kubernetes.io~secret/default-token-fxs22. 
2021-03-15T10:57:55.219 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b48f7f15-7bad-4f71-bfd2-6ac35df9a468/volumes/kubernetes.io~secret/default-token-fxs22. 
2021-03-15T10:57:55.220 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/fbfae784-08b3-4845-b630-dbf30064f1ab/volumes/kubernetes.io~secret/ic-nginx-ingress-token-lckbb. 
2021-03-15T10:57:55.220 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/92febbca-a535-419d-914f-e57ccd412cee/volumes/kubernetes.io~secret/default-token-fxs22. 
2021-03-15T10:57:55.241 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/a65c23f2-4544-492a-a335-9ef07bcaf114/volumes/kubernetes.io~secret/default-token-fxs22. 
2021-03-15T10:57:55.242 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/8740e02b-692e-43b2-ae98-881f21d8381f/volumes/kubernetes.io~secret/default-token-fxs22. 
2021-03-15T10:57:55.245 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/6c3213e3-073b-4a5b-805d-19d5e6e1355d/volumes/kubernetes.io~secret/default-token-fxs22. 
2021-03-15T10:57:55.246 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/4c110c70-3629-4c2b-8829-847e0705b394/volumes/kubernetes.io~secret/default-token-fxs22. 
2021-03-15T10:57:55.277 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/4d86e53a-decd-45e2-87f3-c63727399959/volumes/kubernetes.io~secret/multus-token-5xrsp. 
2021-03-15T10:57:55.000 controller-0 k8s-pod-recovery(105887): info : Waiting on pod transitions to stabilize... 12 pods are not Running/Completed 
2021-03-15T10:57:56.332 controller-0 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked. 
2021-03-15T10:57:56.335 controller-0 systemd[1]: info Starting Name Service Cache Daemon... 
2021-03-15T10:57:56.000 controller-0 nscd: notice 109652 monitoring file `/etc/passwd` (1) 
2021-03-15T10:57:56.000 controller-0 nscd: notice 109652 monitoring directory `/etc` (2) 
2021-03-15T10:57:56.000 controller-0 nscd: notice 109652 monitoring file `/etc/group` (3) 
2021-03-15T10:57:56.000 controller-0 nscd: notice 109652 monitoring directory `/etc` (2) 
2021-03-15T10:57:56.000 controller-0 nscd: notice 109652 monitoring file `/etc/hosts` (4) 
2021-03-15T10:57:56.000 controller-0 nscd: notice 109652 monitoring directory `/etc` (2) 
2021-03-15T10:57:56.000 controller-0 nscd: notice 109652 monitoring file `/etc/resolv.conf` (5) 
2021-03-15T10:57:56.000 controller-0 nscd: notice 109652 monitoring directory `/etc` (2) 
2021-03-15T10:57:56.000 controller-0 nscd: notice 109652 monitoring file `/etc/services` (6) 
2021-03-15T10:57:56.000 controller-0 nscd: notice 109652 monitoring directory `/etc` (2) 
2021-03-15T10:57:56.000 controller-0 nscd: notice 109652 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory 
2021-03-15T10:57:56.000 controller-0 nscd: notice 109652 stat failed for file `/etc/netgroup'; will try again later: No such file or directory 
2021-03-15T10:57:56.358 controller-0 systemd[1]: info Started Name Service Cache Daemon. 
2021-03-15T10:58:00.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 0 seconds. 
2021-03-15T10:58:02.866 controller-0 kubelet[105877]: info I0315 10:58:02.866375 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:58:02.880 controller-0 kubelet[105877]: info I0315 10:58:02.880572 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:58:02.891 controller-0 kubelet[105877]: info I0315 10:58:02.891317 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:58:02.903 controller-0 kubelet[105877]: info I0315 10:58:02.902963 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler 
2021-03-15T10:58:02.977 controller-0 kubelet[105877]: info I0315 10:58:02.973220 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cm-cert-manager-token-968dk" (UniqueName: "kubernetes.io/secret/1f722583-5cad-43f1-bb19-15b70df7655f-cm-cert-manager-token-968dk") pod "cm-cert-manager-856678cfb7-lg9gc" (UID: "1f722583-5cad-43f1-bb19-15b70df7655f") 
2021-03-15T10:58:02.977 controller-0 kubelet[105877]: info I0315 10:58:02.973259 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/693ed33d-6b96-40f0-b174-48c158f76c01-config-volume") pod "coredns-78d9fd7cb9-t4gb7" (UID: "693ed33d-6b96-40f0-b174-48c158f76c01") 
2021-03-15T10:58:02.977 controller-0 kubelet[105877]: info I0315 10:58:02.973275 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/aa4be24a-b653-4c67-a634-c45873652571-default-token-fxs22") pod "test-7484bfb64b-48ml5" (UID: "aa4be24a-b653-4c67-a634-c45873652571") 
2021-03-15T10:58:02.977 controller-0 kubelet[105877]: info I0315 10:58:02.973290 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cm-cert-manager-webhook-token-vkvrv" (UniqueName: "kubernetes.io/secret/d2fbbeb9-ae64-415b-923b-78023f197724-cm-cert-manager-webhook-token-vkvrv") pod "cm-cert-manager-webhook-5745478cbc-tvgxb" (UID: "d2fbbeb9-ae64-415b-923b-78023f197724") 
2021-03-15T10:58:02.977 controller-0 kubelet[105877]: info I0315 10:58:02.973382 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-lnvxh" (UniqueName: "kubernetes.io/secret/693ed33d-6b96-40f0-b174-48c158f76c01-coredns-token-lnvxh") pod "coredns-78d9fd7cb9-t4gb7" (UID: "693ed33d-6b96-40f0-b174-48c158f76c01") 
2021-03-15T10:58:03.176 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/1f722583-5cad-43f1-bb19-15b70df7655f/volumes/kubernetes.io~secret/cm-cert-manager-token-968dk. 
2021-03-15T10:58:03.294 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/aa4be24a-b653-4c67-a634-c45873652571/volumes/kubernetes.io~secret/default-token-fxs22. 
2021-03-15T10:58:03.294 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/d2fbbeb9-ae64-415b-923b-78023f197724/volumes/kubernetes.io~secret/cm-cert-manager-webhook-token-vkvrv. 
2021-03-15T10:58:03.295 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/693ed33d-6b96-40f0-b174-48c158f76c01/volumes/kubernetes.io~secret/coredns-token-lnvxh. 
2021-03-15T10:58:03.834 controller-0 kubelet[105877]: info I0315 10:58:03.827130 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:03.900 controller-0 kubelet[105877]: info I0315 10:58:03.898009 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:03.941 controller-0 kubelet[105877]: info I0315 10:58:03.931019 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:03.968 controller-0 kubelet[105877]: info I0315 10:58:03.967396 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:03.973 controller-0 kubelet[105877]: info I0315 10:58:03.972770 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:03.978 controller-0 kubelet[105877]: info I0315 10:58:03.978723 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:03.982 controller-0 kubelet[105877]: info I0315 10:58:03.981672 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:03.986 controller-0 kubelet[105877]: info I0315 10:58:03.985726 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:03.991 controller-0 kubelet[105877]: info I0315 10:58:03.989117 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:03.993 controller-0 kubelet[105877]: info I0315 10:58:03.993236 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:03.996 controller-0 kubelet[105877]: info I0315 10:58:03.996029 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:03.999 controller-0 kubelet[105877]: info I0315 10:58:03.998396 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:04.003 controller-0 kubelet[105877]: info I0315 10:58:04.003076 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:04.005 controller-0 kubelet[105877]: info I0315 10:58:04.004507 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/81c8f94c-238c-4de9-89c4-beebd1ca84f8-default-token-fxs22") pod "test-7484bfb64b-s8jgr" (UID: "81c8f94c-238c-4de9-89c4-beebd1ca84f8")
2021-03-15T10:58:04.005 controller-0 kubelet[105877]: info I0315 10:58:04.004533 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/04febe4f-9802-4fea-aa77-43af9949d444-default-token-fxs22") pod "test-7484bfb64b-5xq6v" (UID: "04febe4f-9802-4fea-aa77-43af9949d444")
2021-03-15T10:58:04.005 controller-0 kubelet[105877]: info I0315 10:58:04.004549 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/f0fe047f-a2f1-4d9b-ab96-3585bd070d41-default-token-fxs22") pod "test-7484bfb64b-2wlvq" (UID: "f0fe047f-a2f1-4d9b-ab96-3585bd070d41")
2021-03-15T10:58:04.005 controller-0 kubelet[105877]: info I0315 10:58:04.004561 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/6b4f670e-41e5-4c5a-98d6-0e8b14be79d5-default-token-fxs22") pod "test-7484bfb64b-tz4bw" (UID: "6b4f670e-41e5-4c5a-98d6-0e8b14be79d5")
2021-03-15T10:58:04.005 controller-0 kubelet[105877]: info I0315 10:58:04.004578 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "rbd-provisioner-token-9gq5x" (UniqueName: "kubernetes.io/secret/acf9bf24-d2b2-4975-b4e5-9a211940e72e-rbd-provisioner-token-9gq5x") pod "rbd-provisioner-77bfb6dbb-t7vhg" (UID: "acf9bf24-d2b2-4975-b4e5-9a211940e72e")
2021-03-15T10:58:04.005 controller-0 kubelet[105877]: info I0315 10:58:04.004595 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/3862d8ce-f977-478a-bf21-0c1d5a71fb7e-default-token-fxs22") pod "test-7484bfb64b-f84tj" (UID: "3862d8ce-f977-478a-bf21-0c1d5a71fb7e")
2021-03-15T10:58:04.005 controller-0 kubelet[105877]: info I0315 10:58:04.004618 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/d5dd148c-8470-4ae4-9785-f06eea1eba7e-default-token-fxs22") pod "test-7484bfb64b-gsxr9" (UID: "d5dd148c-8470-4ae4-9785-f06eea1eba7e")
2021-03-15T10:58:04.005 controller-0 kubelet[105877]: info I0315 10:58:04.004631 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/5bd71b06-ad8b-401f-b898-45418d3a6e25-default-token-fxs22") pod "test-7484bfb64b-ghf5n" (UID: "5bd71b06-ad8b-401f-b898-45418d3a6e25")
2021-03-15T10:58:04.005 controller-0 kubelet[105877]: info I0315 10:58:04.004643 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/c19b01d9-c6cd-495b-b5c2-4d9f6e1b913e-default-token-fxs22") pod "test-7484bfb64b-l5nn9" (UID: "c19b01d9-c6cd-495b-b5c2-4d9f6e1b913e")
2021-03-15T10:58:04.005 controller-0 kubelet[105877]: info I0315 10:58:04.004657 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cephfs-provisioner-token-chncz" (UniqueName: "kubernetes.io/secret/b20daf69-c543-4a8f-8537-aca884e50d37-cephfs-provisioner-token-chncz") pod "cephfs-provisioner-54847c557b-bnhm5" (UID: "b20daf69-c543-4a8f-8537-aca884e50d37")
2021-03-15T10:58:04.005 controller-0 kubelet[105877]: info I0315 10:58:04.004670 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/db29690f-531a-421c-be01-cbd66a55302a-default-token-fxs22") pod "test-7484bfb64b-v4qlx" (UID: "db29690f-531a-421c-be01-cbd66a55302a")
2021-03-15T10:58:04.005 controller-0 kubelet[105877]: info I0315 10:58:04.004683 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/d0f1e928-bd01-4992-b3be-23d4561d0495-default-token-fxs22") pod "test-7484bfb64b-hcx72" (UID: "d0f1e928-bd01-4992-b3be-23d4561d0495")
2021-03-15T10:58:04.005 controller-0 kubelet[105877]: info I0315 10:58:04.005788 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:04.009 controller-0 kubelet[105877]: info I0315 10:58:04.009184 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:04.020 controller-0 kubelet[105877]: info I0315 10:58:04.020529 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:04.024 controller-0 kubelet[105877]: info I0315 10:58:04.023894 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:04.031 controller-0 kubelet[105877]: info I0315 10:58:04.026157 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:04.031 controller-0 kubelet[105877]: info I0315 10:58:04.030257 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:04.041 controller-0 kubelet[105877]: info I0315 10:58:04.040704 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:58:04.108 controller-0 kubelet[105877]: info I0315 10:58:04.107582 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/58ca8222-2663-47e3-a7a1-3016d3db3b42-default-token-fxs22") pod "test-7484bfb64b-m42sp" (UID: "58ca8222-2663-47e3-a7a1-3016d3db3b42")
2021-03-15T10:58:04.108 controller-0 kubelet[105877]: info I0315 10:58:04.107617 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/fd2fcbbd-5e41-4bbb-a2ac-dd48e182f457-default-token-fxs22") pod "test-7484bfb64b-5dfml" (UID: "fd2fcbbd-5e41-4bbb-a2ac-dd48e182f457")
2021-03-15T10:58:04.108 controller-0 kubelet[105877]: info I0315 10:58:04.107634 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/2ea68f46-5ae5-4ef4-b724-5b930a48b50c-default-token-fxs22") pod "test-7484bfb64b-nxblg" (UID: "2ea68f46-5ae5-4ef4-b724-5b930a48b50c")
2021-03-15T10:58:04.108 controller-0 kubelet[105877]: info I0315 10:58:04.107660 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/57588715-06a6-4300-a1d6-1d577e2794e3-default-token-fxs22") pod "test-7484bfb64b-dnshd" (UID: "57588715-06a6-4300-a1d6-1d577e2794e3")
2021-03-15T10:58:04.108 controller-0 kubelet[105877]: info I0315 10:58:04.107702 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cm-cert-manager-cainjector-token-cklbw" (UniqueName: "kubernetes.io/secret/b50a90e8-2b6c-46a6-9a6a-041d27641793-cm-cert-manager-cainjector-token-cklbw") pod "cm-cert-manager-cainjector-85849bd97-j9fg9" (UID: "b50a90e8-2b6c-46a6-9a6a-041d27641793")
2021-03-15T10:58:04.108 controller-0 kubelet[105877]: info I0315 10:58:04.107744 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/3898ae5a-02e5-4006-b712-8238e66a6df1-default-token-fxs22") pod "test-7484bfb64b-7r2tf" (UID: "3898ae5a-02e5-4006-b712-8238e66a6df1")
2021-03-15T10:58:04.108 controller-0 kubelet[105877]: info I0315 10:58:04.107765 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/166dc949-b96b-4554-84fc-260f937e0474-default-token-fxs22") pod "test-7484bfb64b-jphfn" (UID: "166dc949-b96b-4554-84fc-260f937e0474")
2021-03-15T10:58:04.108 controller-0 kubelet[105877]: info I0315 10:58:04.107777 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/15a9b75d-87ec-4f6d-8fcf-2f4ad4ff39ce-default-token-fxs22") pod "test-7484bfb64b-r9jkm" (UID: "15a9b75d-87ec-4f6d-8fcf-2f4ad4ff39ce")
2021-03-15T10:58:04.144 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/db29690f-531a-421c-be01-cbd66a55302a/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.145 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/d0f1e928-bd01-4992-b3be-23d4561d0495/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.146 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/04febe4f-9802-4fea-aa77-43af9949d444/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.147 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/6b4f670e-41e5-4c5a-98d6-0e8b14be79d5/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.147 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/f0fe047f-a2f1-4d9b-ab96-3585bd070d41/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.148 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/81c8f94c-238c-4de9-89c4-beebd1ca84f8/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.148 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/d5dd148c-8470-4ae4-9785-f06eea1eba7e/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.327 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/58ca8222-2663-47e3-a7a1-3016d3db3b42/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.328 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/166dc949-b96b-4554-84fc-260f937e0474/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.328 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/15a9b75d-87ec-4f6d-8fcf-2f4ad4ff39ce/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.329 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/3862d8ce-f977-478a-bf21-0c1d5a71fb7e/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.330 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/c19b01d9-c6cd-495b-b5c2-4d9f6e1b913e/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.330 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/5bd71b06-ad8b-401f-b898-45418d3a6e25/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.341 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/3898ae5a-02e5-4006-b712-8238e66a6df1/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.342 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/57588715-06a6-4300-a1d6-1d577e2794e3/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.343 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/2ea68f46-5ae5-4ef4-b724-5b930a48b50c/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.343 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/fd2fcbbd-5e41-4bbb-a2ac-dd48e182f457/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:58:04.344 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b20daf69-c543-4a8f-8537-aca884e50d37/volumes/kubernetes.io~secret/cephfs-provisioner-token-chncz.
2021-03-15T10:58:04.440 controller-0 kubelet[105877]: info E0315 10:58:04.440035 105877 cadvisor_stats_provider.go:400] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/k8s-infra/kubepods/pod6c3213e3-073b-4a5b-805d-19d5e6e1355d/08d3d0b420ba71edb8f3e2b8813aac96043a33184c26ddd952c36890b9f2ca62": RecentStats: unable to find data in memory cache]
2021-03-15T10:58:04.506 controller-0 kubelet[105877]: info W0315 10:58:04.505535 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111023.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111023.scope: no such file or directory
2021-03-15T10:58:04.506 controller-0 kubelet[105877]: info W0315 10:58:04.505612 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111023.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111023.scope: no such file or directory
2021-03-15T10:58:04.506 controller-0 kubelet[105877]: info W0315 10:58:04.505626 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111023.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111023.scope: no such file or directory
2021-03-15T10:58:04.506 controller-0 kubelet[105877]: info W0315 10:58:04.505639 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111023.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111023.scope: no such file or directory
2021-03-15T10:58:04.506 controller-0 kubelet[105877]: info W0315 10:58:04.505649 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111023.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111023.scope: no such file or directory
2021-03-15T10:58:04.506 controller-0 kubelet[105877]: info W0315 10:58:04.505660 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111028.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111028.scope: no such file or directory
2021-03-15T10:58:04.506 controller-0 kubelet[105877]: info W0315 10:58:04.505672 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111028.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111028.scope: no such file or directory
2021-03-15T10:58:04.506 controller-0 kubelet[105877]: info W0315 10:58:04.505683 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111028.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111028.scope: no such file or directory
2021-03-15T10:58:04.506 controller-0 kubelet[105877]: info W0315 10:58:04.505694 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111028.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111028.scope: no such file or directory
2021-03-15T10:58:04.506 controller-0 kubelet[105877]: info W0315 10:58:04.505704 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111028.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111028.scope: no such file or directory
2021-03-15T10:58:04.506 controller-0 kubelet[105877]: info W0315 10:58:04.505734 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111031.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111031.scope: no such file or directory
2021-03-15T10:58:04.506 controller-0 kubelet[105877]: info W0315 10:58:04.505746 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111031.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111031.scope: no such file or directory
2021-03-15T10:58:04.506 controller-0 kubelet[105877]: info W0315 10:58:04.505756 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111031.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111031.scope: no such file or directory
2021-03-15T10:58:04.506 controller-0 kubelet[105877]: info W0315 10:58:04.505767 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111031.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111031.scope: no such file or directory
2021-03-15T10:58:04.506 controller-0 kubelet[105877]: info W0315 10:58:04.505776 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111031.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111031.scope: no such file or directory
2021-03-15T10:58:04.531 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/acf9bf24-d2b2-4975-b4e5-9a211940e72e/volumes/kubernetes.io~secret/rbd-provisioner-token-9gq5x.
2021-03-15T10:58:04.742 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b50a90e8-2b6c-46a6-9a6a-041d27641793/volumes/kubernetes.io~secret/cm-cert-manager-cainjector-token-cklbw.
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468611 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111364.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111364.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468649 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111364.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111364.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468664 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111364.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111364.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468675 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111364.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111364.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468713 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111364.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111364.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468722 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111365.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111365.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468733 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111365.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111365.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468760 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111365.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111365.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468774 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111365.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111365.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468784 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111365.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111365.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468795 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111362.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111362.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468804 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111362.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111362.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468827 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111362.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111362.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468836 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111362.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111362.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468845 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111362.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111362.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468853 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111359.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111359.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468866 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111359.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111359.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468875 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111359.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111359.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468885 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111359.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111359.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.468895 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111359.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111359.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.469149 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111357.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111357.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.469163 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111357.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111357.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.469172 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111357.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111357.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.469186 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111357.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111357.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.469196 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111357.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111357.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.469205 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111346.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111346.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.469213 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111346.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111346.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.469222 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111346.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111346.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.469234 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111346.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111346.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.469244 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111346.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111346.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.469255 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111352.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111352.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.469265 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111352.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111352.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.469279 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111352.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111352.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.469289 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111352.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111352.scope: no such file or directory
2021-03-15T10:58:05.470 controller-0 kubelet[105877]: info W0315 10:58:05.469298 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111352.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111352.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.805960 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111413.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111413.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806006 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111413.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111413.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806020 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111413.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111413.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806031 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111413.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111413.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806044 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111413.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111413.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806053 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111408.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111408.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806064 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111408.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111408.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806080 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111408.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111408.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806091 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111408.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111408.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806100 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111408.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111408.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806112 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111405.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111405.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806121 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111405.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111405.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806131 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111405.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111405.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806142 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111405.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111405.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806153 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111405.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111405.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806163 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111379.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111379.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806174 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111379.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111379.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806183 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111379.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111379.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806194 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111379.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111379.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806202 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111379.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111379.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806226 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111377.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111377.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806236 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111377.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111377.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806246 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111377.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111377.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806255 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111377.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111377.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806265 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111377.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111377.scope: no such file or directory
2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806273 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111373.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111373.scope: no such file
or directory 2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806283 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111373.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111373.scope: no such file or directory 2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806291 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111373.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111373.scope: no such file or directory 2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806301 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111373.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111373.scope: no such file or directory 2021-03-15T10:58:05.807 controller-0 kubelet[105877]: info W0315 10:58:05.806310 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111373.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111373.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809527 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111406.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111406.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809585 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111406.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111406.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info 
W0315 10:58:05.809598 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111406.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111406.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809610 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111406.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111406.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809621 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111406.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111406.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809631 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111416.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111416.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809640 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111416.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111416.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809650 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111416.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111416.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809666 105877 watcher.go:87] Error while processing event 
("/sys/fs/cgroup/devices/system.slice/run-111416.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111416.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809677 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111416.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111416.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809686 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111415.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111415.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809696 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111415.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111415.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809705 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111415.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111415.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809848 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111415.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111415.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809866 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111415.scope": 0x40000100 == 
IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111415.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809918 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111414.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111414.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809931 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111414.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111414.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809943 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111414.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111414.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809953 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111414.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111414.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809963 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111414.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111414.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.809977 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111472.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111472.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.811446 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111472.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111472.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.811474 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111472.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111472.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.811488 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111472.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111472.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.811500 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111472.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111472.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.812379 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111684.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111684.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.812400 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111684.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111684.scope: no such file or 
directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.812415 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111684.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111684.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.812427 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111684.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111684.scope: no such file or directory 2021-03-15T10:58:05.813 controller-0 kubelet[105877]: info W0315 10:58:05.812438 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111684.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111684.scope: no such file or directory 2021-03-15T10:58:05.826 controller-0 kubelet[105877]: info W0315 10:58:05.826179 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-111987.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-111987.scope: no such file or directory 2021-03-15T10:58:05.826 controller-0 kubelet[105877]: info W0315 10:58:05.826219 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-111987.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-111987.scope: no such file or directory 2021-03-15T10:58:05.826 controller-0 kubelet[105877]: info W0315 10:58:05.826235 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-111987.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-111987.scope: no such file or directory 2021-03-15T10:58:05.826 controller-0 kubelet[105877]: info 
W0315 10:58:05.826246 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-111987.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-111987.scope: no such file or directory 2021-03-15T10:58:05.826 controller-0 kubelet[105877]: info W0315 10:58:05.826257 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-111987.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-111987.scope: no such file or directory 2021-03-15T10:58:05.000 controller-0 k8s-pod-recovery(105887): info : Waiting on pod transitions to stabilize... 29 pods are not Running/Completed 2021-03-15T10:58:06.000 controller-0 affine-tasks.sh(2360): info : kubelet is ready 2021-03-15T10:58:06.000 controller-0 affine-tasks.sh(2360): info : Update /sys/fs/cgroup/cpuset/k8s-infra, ONLINE_NODES=0, NONISOL_CPUS=0-3 2021-03-15T10:58:06.000 controller-0 affine-tasks.sh(2360): info : Affine drbd tasks, CPUS=0-3 2021-03-15T10:58:06.000 controller-0 ntpd[82287]: info Listen normally on 14 calia5cec93509a fe80::ecee:eeff:feee:eeee UDP 123 2021-03-15T10:58:06.000 controller-0 ntpd[82287]: info Listen normally on 15 cali11646e65f05 fe80::ecee:eeff:feee:eeee UDP 123 2021-03-15T10:58:06.000 controller-0 ntpd[82287]: info Listen normally on 16 cali5cf40ae32ab fe80::ecee:eeff:feee:eeee UDP 123 2021-03-15T10:58:06.000 controller-0 ntpd[82287]: info Listen normally on 17 cali36febaf9e44 fe80::ecee:eeff:feee:eeee UDP 123 2021-03-15T10:58:06.000 controller-0 ntpd[82287]: info Listen normally on 18 calib8337b53dd2 fe80::ecee:eeff:feee:eeee UDP 123 2021-03-15T10:58:06.000 controller-0 ntpd[82287]: info Listen normally on 19 calia446f3a1608 fe80::ecee:eeff:feee:eeee UDP 123 2021-03-15T10:58:06.000 controller-0 ntpd[82287]: info Listen normally on 20 cali224177977fe fe80::ecee:eeff:feee:eeee UDP 123 2021-03-15T10:58:06.000 controller-0 ntpd[82287]: info Listen 
normally on 21 calic126abb9a30 fe80::ecee:eeff:feee:eeee UDP 123 2021-03-15T10:58:06.000 controller-0 ntpd[82287]: info Listen normally on 22 calia1d49d75934 fe80::ecee:eeff:feee:eeee UDP 123 2021-03-15T10:58:06.000 controller-0 ntpd[82287]: info Listen normally on 23 calidae17860453 fe80::ecee:eeff:feee:eeee UDP 123 2021-03-15T10:58:06.000 controller-0 ntpd[82287]: info Listen normally on 24 cali5163e0839fc fe80::ecee:eeff:feee:eeee UDP 123 2021-03-15T10:58:06.000 controller-0 ntpd[82287]: info Listen normally on 25 cali09a5641c863 fe80::ecee:eeff:feee:eeee UDP 123 2021-03-15T10:58:06.000 controller-0 ntpd[82287]: debug new interface(s) found: waking up resolver 2021-03-15T10:58:10.762 controller-0 collectd[106099]: info alarm notifier reading: 0.00 % usage - /dev 2021-03-15T10:58:10.762 controller-0 collectd[106099]: info alarm notifier reading: 0.00 % usage - /dev/shm 2021-03-15T10:58:10.762 controller-0 collectd[106099]: info alarm notifier reading: 41.45 % usage - / 2021-03-15T10:58:11.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 0 seconds. 
2021-03-15T10:58:11.713 controller-0 collectd[106099]: info interface plugin found no startup alarms
2021-03-15T10:58:11.760 controller-0 collectd[106099]: info interface plugin Link Status Query Response:2:
2021-03-15T10:58:11.760 controller-0 collectd[106099]: info {u'status': u'pass', u'link_info': [{u'network': u'mgmt', u'links': [{u'state': u'Up', u'name': u'enp0s8', u'time': u'1615805818706658'}]}, {u'network': u'cluster-host', u'links': [{u'state': u'Up', u'name': u'enp0s8', u'time': u'1615805818706684'}]}, {u'network': u'oam', u'links': [{u'state': u'Up', u'name': u'enp0s3', u'time': u'1615805818706712'}]}]}
2021-03-15T10:58:11.760 controller-0 collectd[106099]: info interface plugin mgmt 100% ; link one 'enp0s8' went Up at 2021-03-15 10:56:58
2021-03-15T10:58:11.760 controller-0 collectd[106099]: info interface plugin cluster-host 100% ; link one 'enp0s8' went Up at 2021-03-15 10:56:58
2021-03-15T10:58:11.760 controller-0 collectd[106099]: info interface plugin oam 100% ; link one 'enp0s3' went Up at 2021-03-15 10:56:58
2021-03-15T10:58:12.174 controller-0 collectd[106099]: info platform memory usage: Usage: 41.3%; Reserved: 4600.0 MiB, Platform: 1901.6 MiB (Base: 1437.9, k8s-system: 463.6), k8s-addon: 0.0
2021-03-15T10:58:12.174 controller-0 collectd[106099]: info 4K memory usage: Anon: 7.0%, Anon: 1755.7 MiB, cgroup-rss: 1784.0 MiB, Avail: 23296.1 MiB, Total: 25051.8 MiB
2021-03-15T10:58:12.174 controller-0 collectd[106099]: info 4K numa memory usage: node0, Anon: 6.87%, Anon: 1755.9 MiB, Avail: 23796.1 MiB, Total: 25552.0 MiB
2021-03-15T10:58:12.174 controller-0 collectd[106099]: info alarm notifier monitoring Memory platform % usage
2021-03-15T10:58:12.174 controller-0 collectd[106099]: info alarm notifier reading: 41.34 % usage - platform
2021-03-15T10:58:12.174 controller-0 collectd[106099]: info alarm notifier monitoring Memory node0 % usage
2021-03-15T10:58:12.174 controller-0 collectd[106099]: info alarm notifier reading: 6.87 % usage - node0
2021-03-15T10:58:12.174 controller-0 collectd[106099]: info alarm notifier monitoring Memory total % usage
2021-03-15T10:58:12.174 controller-0 collectd[106099]: info alarm notifier reading: 7.01 % usage - total
2021-03-15T10:58:15.000 controller-0 nscd: notice 109652 checking for monitored file `/etc/netgroup': No such file or directory
2021-03-15T10:58:15.746 controller-0 kubelet[105877]: info E0315 10:58:15.746291 105877 cadvisor_stats_provider.go:400] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/k8s-infra/kubepods/podfd2fcbbd-5e41-4bbb-a2ac-dd48e182f457/cfeb00dbfbefddcb6a93ef3911da52dc71242699103291f021d8cc83f6eb634e": RecentStats: unable to find data in memory cache]
2021-03-15T10:58:16.000 controller-0 k8s-pod-recovery(105887): info : Waiting on pod transitions to stabilize... 22 pods are not Running/Completed
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 26 cali4589981c65d fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 27 calif27c695d51c fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 28 cali3e5b0c60e29 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 29 cali54c4e2dd08f fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 30 cali2abab3b5a0e fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 31 cali1d997cc966f fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 32 cali542020ba3a8 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 33 calidc935fc553d fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 34 cali95268ea00d7 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 35 cali8d9fa322cb2 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 36 cali1fbf706526b fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 37 cali53ebe99d223 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 38 cali0a09b68f47c fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 39 calicbb9a4acb86 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 40 cali567935869c1 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 41 cali396089a9449 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 42 cali6ba94563837 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 43 calibe18b8da647 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 44 calia1e9ef4a8f5 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 45 calic81f215f57f fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 46 cali54ce6db7ecf fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 47 cali46b03095acb fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 48 cali2aecf0c2f8c fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: info Listen normally on 49 cali6a0d91d69ad fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:58:19.000 controller-0 ntpd[82287]: debug new interface(s) found: waking up resolver
2021-03-15T10:58:21.000 controller-0 k8s-pod-recovery(105887): info : Waiting on pod transitions to stabilize... 15 pods are not Running/Completed
2021-03-15T10:58:26.000 controller-0 k8s-pod-recovery(105887): info : Waiting on pod transitions to stabilize... 4 pods are not Running/Completed
2021-03-15T10:58:32.000 controller-0 k8s-pod-recovery(105887): info : Waiting on pod transitions to stabilize... 0 pods are not Running/Completed
2021-03-15T10:58:37.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 0 seconds.
2021-03-15T10:58:40.766 controller-0 collectd[106099]: info platform cpu usage plugin found 4 cpus total; monitoring 2 cpus, cpu list: 0-1
2021-03-15T10:58:40.766 controller-0 collectd[106099]: info remote logging server node ready count 2 of 3
2021-03-15T10:58:40.795 controller-0 collectd[106099]: info platform memory usage: Usage: 37.3%; Reserved: 4600.0 MiB, Platform: 1714.7 MiB (Base: 1234.3, k8s-system: 480.3), k8s-addon: 0.0
2021-03-15T10:58:40.795 controller-0 collectd[106099]: info 4K memory usage: Anon: 6.9%, Anon: 1728.3 MiB, cgroup-rss: 1778.3 MiB, Avail: 23226.3 MiB, Total: 24954.6 MiB
2021-03-15T10:58:40.795 controller-0 collectd[106099]: info 4K numa memory usage: node0, Anon: 6.79%, Anon: 1728.3 MiB, Avail: 23706.8 MiB, Total: 25435.0 MiB
2021-03-15T10:58:40.948 controller-0 collectd[106099]: info platform cpu usage plugin initialization completed
2021-03-15T10:58:42.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 5 seconds.
2021-03-15T10:58:48.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 10 seconds.
2021-03-15T10:58:53.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 15 seconds.
2021-03-15T10:58:58.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 20 seconds.
2021-03-15T10:59:03.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 25 seconds.
2021-03-15T10:59:08.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-2wlvq
2021-03-15T10:59:09.004 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-2wlvq" deleted
2021-03-15T10:59:09.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-48ml5
2021-03-15T10:59:09.587 controller-0 kubelet[105877]: info I0315 10:59:09.584334 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:09.735 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-48ml5" deleted
2021-03-15T10:59:09.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-5dfml
2021-03-15T10:59:09.785 controller-0 kubelet[105877]: info I0315 10:59:09.785501 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/d5224b99-d11d-4605-95c6-930434d9eea2-default-token-fxs22") pod "test-7484bfb64b-zrpm9" (UID: "d5224b99-d11d-4605-95c6-930434d9eea2")
2021-03-15T10:59:09.881 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-5dfml" deleted
2021-03-15T10:59:09.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-5xq6v
2021-03-15T10:59:09.898 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/d5224b99-d11d-4605-95c6-930434d9eea2/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:10.059 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-5xq6v" deleted
2021-03-15T10:59:10.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-7r2tf
2021-03-15T10:59:10.712 controller-0 kubelet[105877]: info I0315 10:59:10.712244 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 8596c5ef867fed9c05c6fb47ddc6c3f5c2a233e49982bea5589f0d5d707607fc
2021-03-15T10:59:10.755 controller-0 kubelet[105877]: info I0315 10:59:10.753838 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 8596c5ef867fed9c05c6fb47ddc6c3f5c2a233e49982bea5589f0d5d707607fc
2021-03-15T10:59:10.755 controller-0 kubelet[105877]: info E0315 10:59:10.754828 105877 remote_runtime.go:295] ContainerStatus "8596c5ef867fed9c05c6fb47ddc6c3f5c2a233e49982bea5589f0d5d707607fc" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "8596c5ef867fed9c05c6fb47ddc6c3f5c2a233e49982bea5589f0d5d707607fc": does not exist
2021-03-15T10:59:10.755 controller-0 kubelet[105877]: info I0315 10:59:10.754886 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f361f20e2a974eb67fa4337d5089a18574e3e4de9516b4ea9169b74406bc4c81
2021-03-15T10:59:10.789 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-7r2tf" deleted
2021-03-15T10:59:10.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-9bghk
2021-03-15T10:59:10.803 controller-0 kubelet[105877]: info I0315 10:59:10.802933 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: cfeb00dbfbefddcb6a93ef3911da52dc71242699103291f021d8cc83f6eb634e
2021-03-15T10:59:10.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%45, but no knowledge of it
2021-03-15T10:59:11.119 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-9bghk" deleted
2021-03-15T10:59:11.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-dnshd
2021-03-15T10:59:11.203 controller-0 collectd[106099]: info platform cpu usage plugin Usage: 19.2% (avg per cpu); cpus: 2, Platform: 30.0% (Base: 24.6, k8s-system: 5.4), k8s-addon: 0.0
2021-03-15T10:59:11.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%25, but no knowledge of it
2021-03-15T10:59:11.224 controller-0 collectd[106099]: info alarm notifier reading: 19.22 % usage - Platform CPU
2021-03-15T10:59:11.339 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-dnshd" deleted
2021-03-15T10:59:11.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-dt6tf
2021-03-15T10:59:11.427 controller-0 collectd[106099]: info platform memory usage: Usage: 38.1%; Reserved: 4600.0 MiB, Platform: 1753.7 MiB (Base: 1272.1, k8s-system: 481.6), k8s-addon: 0.0
2021-03-15T10:59:11.427 controller-0 collectd[106099]: info 4K memory usage: Anon: 7.0%, Anon: 1739.5 MiB, cgroup-rss: 1816.5 MiB, Avail: 23217.6 MiB, Total: 24957.1 MiB
2021-03-15T10:59:11.427 controller-0 collectd[106099]: info 4K numa memory usage: node0, Anon: 6.85%, Anon: 1740.2 MiB, Avail: 23672.7 MiB, Total: 25412.9 MiB
2021-03-15T10:59:11.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%34, but no knowledge of it
2021-03-15T10:59:11.552 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-dt6tf" deleted
2021-03-15T10:59:11.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-f59jr
2021-03-15T10:59:11.747 controller-0 kubelet[105877]: info I0315 10:59:11.743011 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: e05ecbe8152194cf28c7f1ba1a9b88771e1236ff5808a47855db25dab8e9e094
2021-03-15T10:59:11.772 controller-0 kubelet[105877]: info I0315 10:59:11.772242 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: acc50296a3b747510c535896f9fc1d2f720f0965ac9c87bb5e414864d746b629
2021-03-15T10:59:11.784 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-f59jr" deleted
2021-03-15T10:59:11.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-f84tj
2021-03-15T10:59:11.798 controller-0 kubelet[105877]: info I0315 10:59:11.794638 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: acc50296a3b747510c535896f9fc1d2f720f0965ac9c87bb5e414864d746b629
2021-03-15T10:59:11.802 controller-0 kubelet[105877]: info E0315 10:59:11.802316 105877 remote_runtime.go:295] ContainerStatus "acc50296a3b747510c535896f9fc1d2f720f0965ac9c87bb5e414864d746b629" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "acc50296a3b747510c535896f9fc1d2f720f0965ac9c87bb5e414864d746b629": does not exist
2021-03-15T10:59:11.802 controller-0 kubelet[105877]: info I0315 10:59:11.802366 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 3e300d2f06f51d2565604d0e6e750b16a981706d29ce3d4ffcea6706d4c043e1
2021-03-15T10:59:11.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%29, but no knowledge of it
2021-03-15T10:59:11.901 controller-0 kubelet[105877]: info I0315 10:59:11.890721 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 3e300d2f06f51d2565604d0e6e750b16a981706d29ce3d4ffcea6706d4c043e1
2021-03-15T10:59:11.901 controller-0 kubelet[105877]: info E0315 10:59:11.891348 105877 remote_runtime.go:295] ContainerStatus "3e300d2f06f51d2565604d0e6e750b16a981706d29ce3d4ffcea6706d4c043e1" from runtime service failed: rpc error: code = Unknown
desc = an error occurred when try to find container "3e300d2f06f51d2565604d0e6e750b16a981706d29ce3d4ffcea6706d4c043e1": does not exist 2021-03-15T10:59:11.937 controller-0 kubelet[105877]: info I0315 10:59:11.937368 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/f0fe047f-a2f1-4d9b-ab96-3585bd070d41-default-token-fxs22") pod "f0fe047f-a2f1-4d9b-ab96-3585bd070d41" (UID: "f0fe047f-a2f1-4d9b-ab96-3585bd070d41") 2021-03-15T10:59:11.986 controller-0 kubelet[105877]: info I0315 10:59:11.981881 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0fe047f-a2f1-4d9b-ab96-3585bd070d41-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "f0fe047f-a2f1-4d9b-ab96-3585bd070d41" (UID: "f0fe047f-a2f1-4d9b-ab96-3585bd070d41"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue "" 2021-03-15T10:59:12.035 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-f84tj" deleted 2021-03-15T10:59:12.038 controller-0 kubelet[105877]: info I0315 10:59:12.038246 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/f0fe047f-a2f1-4d9b-ab96-3585bd070d41-default-token-fxs22") on node "controller-0" DevicePath "" 2021-03-15T10:59:12.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-g7mcc 2021-03-15T10:59:12.221 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-g7mcc" deleted 2021-03-15T10:59:12.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%33, but no knowledge of it 2021-03-15T10:59:12.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-ghf5n 2021-03-15T10:59:12.393 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-ghf5n" deleted 
2021-03-15T10:59:12.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-gsxr9
2021-03-15T10:59:12.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%17, but no knowledge of it
2021-03-15T10:59:12.566 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-gsxr9" deleted
2021-03-15T10:59:12.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-hcx72
2021-03-15T10:59:12.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%35, but no knowledge of it
2021-03-15T10:59:12.780 controller-0 kubelet[105877]: info I0315 10:59:12.780537 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 6127a6d9bbc207b631117690e93332af6a8ec68d4c3ade33fd60067fc2287822
2021-03-15T10:59:12.814 controller-0 kubelet[105877]: info I0315 10:59:12.814281 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 45a368fde14043fbdafd18bb0fdd96ec968a67d8ef7af9e1fc8b34d7fb8e4e6b
2021-03-15T10:59:12.817 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-hcx72" deleted
2021-03-15T10:59:12.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-j982c
2021-03-15T10:59:12.860 controller-0 kubelet[105877]: info I0315 10:59:12.860098 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 2e8b3cda4e7321925b1b2e648a79fc0a92ccaeed967a3b7400385e55e0fe27ed
2021-03-15T10:59:12.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%19, but no knowledge of it
2021-03-15T10:59:12.916 controller-0 kubelet[105877]: info I0315 10:59:12.916565 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: c1dd6b6a703a73f8644f605e7d5a9dfc5ec68c05bf7ecde70f2ea77cea8bcf9e
2021-03-15T10:59:12.973 controller-0 kubelet[105877]: info I0315 10:59:12.973375 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: c1dd6b6a703a73f8644f605e7d5a9dfc5ec68c05bf7ecde70f2ea77cea8bcf9e
2021-03-15T10:59:12.977 controller-0 kubelet[105877]: info E0315 10:59:12.976970 105877 remote_runtime.go:295] ContainerStatus "c1dd6b6a703a73f8644f605e7d5a9dfc5ec68c05bf7ecde70f2ea77cea8bcf9e" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "c1dd6b6a703a73f8644f605e7d5a9dfc5ec68c05bf7ecde70f2ea77cea8bcf9e": does not exist
2021-03-15T10:59:12.977 controller-0 kubelet[105877]: info I0315 10:59:12.977014 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a0c419c16864f7d29d62673b94a5d7d39ded83ac8714dc714278f1a23bc8be89
2021-03-15T10:59:13.001 controller-0 kubelet[105877]: info I0315 10:59:13.001240 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a0c419c16864f7d29d62673b94a5d7d39ded83ac8714dc714278f1a23bc8be89
2021-03-15T10:59:13.001 controller-0 kubelet[105877]: info E0315 10:59:13.001618 105877 remote_runtime.go:295] ContainerStatus "a0c419c16864f7d29d62673b94a5d7d39ded83ac8714dc714278f1a23bc8be89" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "a0c419c16864f7d29d62673b94a5d7d39ded83ac8714dc714278f1a23bc8be89": does not exist
2021-03-15T10:59:13.067 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-j982c" deleted
2021-03-15T10:59:13.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-jphfn
2021-03-15T10:59:13.621 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-jphfn" deleted
2021-03-15T10:59:13.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-l5nn9
2021-03-15T10:59:13.905 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-l5nn9" deleted
2021-03-15T10:59:13.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%15, but no knowledge of it
2021-03-15T10:59:13.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-m42sp
2021-03-15T10:59:13.993 controller-0 kubelet[105877]: info I0315 10:59:13.992009 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 946139e126b076dafc4f6c83570a6af0f0b30b2d6b1a3c623229c89112289774
2021-03-15T10:59:14.029 controller-0 kubelet[105877]: info I0315 10:59:14.027468 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/aa4be24a-b653-4c67-a634-c45873652571-default-token-fxs22") pod "aa4be24a-b653-4c67-a634-c45873652571" (UID: "aa4be24a-b653-4c67-a634-c45873652571")
2021-03-15T10:59:14.029 controller-0 kubelet[105877]: info I0315 10:59:14.027505 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/04febe4f-9802-4fea-aa77-43af9949d444-default-token-fxs22") pod "04febe4f-9802-4fea-aa77-43af9949d444" (UID: "04febe4f-9802-4fea-aa77-43af9949d444")
2021-03-15T10:59:14.029 controller-0 kubelet[105877]: info I0315 10:59:14.027535 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/fd2fcbbd-5e41-4bbb-a2ac-dd48e182f457-default-token-fxs22") pod "fd2fcbbd-5e41-4bbb-a2ac-dd48e182f457" (UID: "fd2fcbbd-5e41-4bbb-a2ac-dd48e182f457")
2021-03-15T10:59:14.029 controller-0 kubelet[105877]: info I0315 10:59:14.027547 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/3898ae5a-02e5-4006-b712-8238e66a6df1-default-token-fxs22") pod "3898ae5a-02e5-4006-b712-8238e66a6df1" (UID: "3898ae5a-02e5-4006-b712-8238e66a6df1")
2021-03-15T10:59:14.049 controller-0 kubelet[105877]: info I0315 10:59:14.048686 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa4be24a-b653-4c67-a634-c45873652571-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "aa4be24a-b653-4c67-a634-c45873652571" (UID: "aa4be24a-b653-4c67-a634-c45873652571"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:14.049 controller-0 kubelet[105877]: info I0315 10:59:14.049061 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3898ae5a-02e5-4006-b712-8238e66a6df1-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "3898ae5a-02e5-4006-b712-8238e66a6df1" (UID: "3898ae5a-02e5-4006-b712-8238e66a6df1"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:14.070 controller-0 kubelet[105877]: info I0315 10:59:14.069125 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04febe4f-9802-4fea-aa77-43af9949d444-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "04febe4f-9802-4fea-aa77-43af9949d444" (UID: "04febe4f-9802-4fea-aa77-43af9949d444"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:14.079 controller-0 kubelet[105877]: info I0315 10:59:14.079172 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b0f3b79dc1d1ab2a10e0d7061fc0187c249b494c76b0d95684b5e307ab71957d
2021-03-15T10:59:14.083 controller-0 kubelet[105877]: info I0315 10:59:14.083041 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd2fcbbd-5e41-4bbb-a2ac-dd48e182f457-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "fd2fcbbd-5e41-4bbb-a2ac-dd48e182f457" (UID: "fd2fcbbd-5e41-4bbb-a2ac-dd48e182f457"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:14.111 controller-0 kubelet[105877]: info I0315 10:59:14.110954 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b0f3b79dc1d1ab2a10e0d7061fc0187c249b494c76b0d95684b5e307ab71957d
2021-03-15T10:59:14.111 controller-0 kubelet[105877]: info E0315 10:59:14.111268 105877 remote_runtime.go:295] ContainerStatus "b0f3b79dc1d1ab2a10e0d7061fc0187c249b494c76b0d95684b5e307ab71957d" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "b0f3b79dc1d1ab2a10e0d7061fc0187c249b494c76b0d95684b5e307ab71957d": does not exist
2021-03-15T10:59:14.133 controller-0 kubelet[105877]: info I0315 10:59:14.131245 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/fd2fcbbd-5e41-4bbb-a2ac-dd48e182f457-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:14.133 controller-0 kubelet[105877]: info I0315 10:59:14.131279 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/3898ae5a-02e5-4006-b712-8238e66a6df1-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:14.133 controller-0 kubelet[105877]: info I0315 10:59:14.131286 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/aa4be24a-b653-4c67-a634-c45873652571-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:14.133 controller-0 kubelet[105877]: info I0315 10:59:14.131292 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/04febe4f-9802-4fea-aa77-43af9949d444-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:14.186 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-m42sp" deleted
2021-03-15T10:59:14.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-m6sl7
2021-03-15T10:59:14.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%27, but no knowledge of it
2021-03-15T10:59:14.401 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-m6sl7" deleted
2021-03-15T10:59:14.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-np7bp
2021-03-15T10:59:14.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%16, but no knowledge of it
2021-03-15T10:59:14.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%46, but no knowledge of it
2021-03-15T10:59:14.646 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-np7bp" deleted
2021-03-15T10:59:14.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-nxblg
2021-03-15T10:59:14.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%30, but no knowledge of it
2021-03-15T10:59:14.833 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-nxblg" deleted
2021-03-15T10:59:14.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-pd8h4
2021-03-15T10:59:15.045 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-pd8h4" deleted
2021-03-15T10:59:15.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-pfzz5
2021-03-15T10:59:15.130 controller-0 kubelet[105877]: info I0315 10:59:15.125901 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: c1a3e64bb6bcfc7255288392d83576d8a1177cb3e600b11668215f3844cf8542
2021-03-15T10:59:15.161 controller-0 kubelet[105877]: info I0315 10:59:15.156180 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: c1a3e64bb6bcfc7255288392d83576d8a1177cb3e600b11668215f3844cf8542
2021-03-15T10:59:15.161 controller-0 kubelet[105877]: info E0315 10:59:15.158547 105877 remote_runtime.go:295] ContainerStatus "c1a3e64bb6bcfc7255288392d83576d8a1177cb3e600b11668215f3844cf8542" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "c1a3e64bb6bcfc7255288392d83576d8a1177cb3e600b11668215f3844cf8542": does not exist
2021-03-15T10:59:15.161 controller-0 kubelet[105877]: info I0315 10:59:15.158592 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a96345197a44b820cf735cbbf053d0c79782516d0c056db8765d032c71d21daf
2021-03-15T10:59:15.226 controller-0 kubelet[105877]: info I0315 10:59:15.223814 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 2cbe8f44078896383c54697a3e2874d722ce4536bb16ce1b9d2792c5c691d786
2021-03-15T10:59:15.248 controller-0 kubelet[105877]: info I0315 10:59:15.246593 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 89d56e25d148057b52e433b4d24165590de4b4d7dcb331a7be19414e87e93d89
2021-03-15T10:59:15.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%39, but no knowledge of it
2021-03-15T10:59:15.265 controller-0 kubelet[105877]: info I0315 10:59:15.263795 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 60c322eb3bfe67d24a2e465d4375f957bc64bdbac8660422215bbee355f2eab1
2021-03-15T10:59:15.305 controller-0 kubelet[105877]: info I0315 10:59:15.304982 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 60c322eb3bfe67d24a2e465d4375f957bc64bdbac8660422215bbee355f2eab1
2021-03-15T10:59:15.306 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-pfzz5" deleted
2021-03-15T10:59:15.308 controller-0 kubelet[105877]: info E0315 10:59:15.308640 105877 remote_runtime.go:295] ContainerStatus "60c322eb3bfe67d24a2e465d4375f957bc64bdbac8660422215bbee355f2eab1" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "60c322eb3bfe67d24a2e465d4375f957bc64bdbac8660422215bbee355f2eab1": does not exist
2021-03-15T10:59:15.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-r9jkm
2021-03-15T10:59:15.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%14, but no knowledge of it
2021-03-15T10:59:15.470 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-r9jkm" deleted
2021-03-15T10:59:15.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-s8jgr
2021-03-15T10:59:15.700 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-s8jgr" deleted
2021-03-15T10:59:15.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-tsswj
2021-03-15T10:59:15.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%38, but no knowledge of it
2021-03-15T10:59:15.904 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-tsswj" deleted
2021-03-15T10:59:15.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-tz4bw
2021-03-15T10:59:16.101 controller-0 kubelet[105877]: info I0315 10:59:16.099519 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/92febbca-a535-419d-914f-e57ccd412cee-default-token-fxs22") pod "92febbca-a535-419d-914f-e57ccd412cee" (UID: "92febbca-a535-419d-914f-e57ccd412cee")
2021-03-15T10:59:16.102 controller-0 kubelet[105877]: info I0315 10:59:16.101862 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/333220c4-992b-4bf7-ad55-17b03e493120-default-token-fxs22") pod "333220c4-992b-4bf7-ad55-17b03e493120" (UID: "333220c4-992b-4bf7-ad55-17b03e493120")
2021-03-15T10:59:16.102 controller-0 kubelet[105877]: info I0315 10:59:16.101909 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/b48f7f15-7bad-4f71-bfd2-6ac35df9a468-default-token-fxs22") pod "b48f7f15-7bad-4f71-bfd2-6ac35df9a468" (UID: "b48f7f15-7bad-4f71-bfd2-6ac35df9a468")
2021-03-15T10:59:16.102 controller-0 kubelet[105877]: info I0315 10:59:16.101924 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/57588715-06a6-4300-a1d6-1d577e2794e3-default-token-fxs22") pod "57588715-06a6-4300-a1d6-1d577e2794e3" (UID: "57588715-06a6-4300-a1d6-1d577e2794e3")
2021-03-15T10:59:16.136 controller-0 kubelet[105877]: info I0315 10:59:16.135607 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57588715-06a6-4300-a1d6-1d577e2794e3-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "57588715-06a6-4300-a1d6-1d577e2794e3" (UID: "57588715-06a6-4300-a1d6-1d577e2794e3"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:16.137 controller-0 kubelet[105877]: info I0315 10:59:16.135677 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/333220c4-992b-4bf7-ad55-17b03e493120-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "333220c4-992b-4bf7-ad55-17b03e493120" (UID: "333220c4-992b-4bf7-ad55-17b03e493120"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:16.137 controller-0 kubelet[105877]: info I0315 10:59:16.137062 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b48f7f15-7bad-4f71-bfd2-6ac35df9a468-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "b48f7f15-7bad-4f71-bfd2-6ac35df9a468" (UID: "b48f7f15-7bad-4f71-bfd2-6ac35df9a468"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:16.137 controller-0 kubelet[105877]: info I0315 10:59:16.137349 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92febbca-a535-419d-914f-e57ccd412cee-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "92febbca-a535-419d-914f-e57ccd412cee" (UID: "92febbca-a535-419d-914f-e57ccd412cee"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:16.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%41, but no knowledge of it
2021-03-15T10:59:16.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%43, but no knowledge of it
2021-03-15T10:59:16.158 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-tz4bw" deleted
2021-03-15T10:59:16.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-v4qlx
2021-03-15T10:59:16.202 controller-0 kubelet[105877]: info I0315 10:59:16.202348 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/333220c4-992b-4bf7-ad55-17b03e493120-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:16.202 controller-0 kubelet[105877]: info I0315 10:59:16.202368 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/b48f7f15-7bad-4f71-bfd2-6ac35df9a468-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:16.202 controller-0 kubelet[105877]: info I0315 10:59:16.202375 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/57588715-06a6-4300-a1d6-1d577e2794e3-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:16.202 controller-0 kubelet[105877]: info I0315 10:59:16.202380 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/92febbca-a535-419d-914f-e57ccd412cee-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:16.212 controller-0 kubelet[105877]: info I0315 10:59:16.211509 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 80495f704c78eb0366fa86d89cf3f5327c10b075ecde62ea9dffc3757130c8e3
2021-03-15T10:59:16.234 controller-0 kubelet[105877]: info I0315 10:59:16.233309 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: f6b9f331ca8a01bf8f9ebaace10d5916655f32688b71f686a701826576a0b764
2021-03-15T10:59:16.248 controller-0 kubelet[105877]: info I0315 10:59:16.246987 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: d488cdc11654efce2ad6d6a76f6f83411211b6d11f0644402ad4241e1cd5b88b
2021-03-15T10:59:16.272 controller-0 kubelet[105877]: info I0315 10:59:16.272332 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: d488cdc11654efce2ad6d6a76f6f83411211b6d11f0644402ad4241e1cd5b88b
2021-03-15T10:59:16.274 controller-0 kubelet[105877]: info E0315 10:59:16.273641 105877 remote_runtime.go:295] ContainerStatus "d488cdc11654efce2ad6d6a76f6f83411211b6d11f0644402ad4241e1cd5b88b" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "d488cdc11654efce2ad6d6a76f6f83411211b6d11f0644402ad4241e1cd5b88b": does not exist
2021-03-15T10:59:16.274 controller-0 kubelet[105877]: info I0315 10:59:16.273674 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: c2a8b881e12ee081b5b80ca711e09aa86189c43b73c2e9c54210452cd444ba2b
2021-03-15T10:59:16.306 controller-0 kubelet[105877]: info I0315 10:59:16.306574 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 5e9737a2b3846f5de93d639600d28256afb3baf1f6ea05cce7b954d2a5f182c7
2021-03-15T10:59:16.315 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-v4qlx" deleted
2021-03-15T10:59:16.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-w8p7n
2021-03-15T10:59:16.606 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-w8p7n" deleted
2021-03-15T10:59:16.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Recovering: default test-7484bfb64b-zk9kb
2021-03-15T10:59:16.851 controller-0 k8s-pod-recovery[105887]: info pod "test-7484bfb64b-zk9kb" deleted
2021-03-15T10:59:16.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%22, but no knowledge of it
2021-03-15T10:59:17.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%37, but no knowledge of it
2021-03-15T10:59:17.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%13, but no knowledge of it
2021-03-15T10:59:17.320 controller-0 kubelet[105877]: info I0315 10:59:17.312740 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: b751e150527dff1939c518a323df3f0a92d7a7bdba5cf235070f042f91fd96b7
2021-03-15T10:59:17.000 controller-0 k8s-pod-recovery(105887): info : Waiting on pod transitions to stabilize... 60 pods are not Running/Completed
2021-03-15T10:59:17.349 controller-0 kubelet[105877]: info I0315 10:59:17.348941 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 49fde9adf6cc11a9eac859c40865d630811ee171162a1092f695691d193da697
2021-03-15T10:59:17.375 controller-0 kubelet[105877]: info I0315 10:59:17.374611 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: d67a8a83cc6399aca160fa35b87a24be2e3aece2ebcc0238e92cd0999acb828f
2021-03-15T10:59:17.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%12, but no knowledge of it
2021-03-15T10:59:17.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%20, but no knowledge of it
2021-03-15T10:59:18.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%28, but no knowledge of it
2021-03-15T10:59:18.154 controller-0 kubelet[105877]: info I0315 10:59:18.152611 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/58ca8222-2663-47e3-a7a1-3016d3db3b42-default-token-fxs22") pod "58ca8222-2663-47e3-a7a1-3016d3db3b42" (UID: "58ca8222-2663-47e3-a7a1-3016d3db3b42")
2021-03-15T10:59:18.154 controller-0 kubelet[105877]: info I0315 10:59:18.152650 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/6c3213e3-073b-4a5b-805d-19d5e6e1355d-default-token-fxs22") pod "6c3213e3-073b-4a5b-805d-19d5e6e1355d" (UID: "6c3213e3-073b-4a5b-805d-19d5e6e1355d")
2021-03-15T10:59:18.154 controller-0 kubelet[105877]: info I0315 10:59:18.152667 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/5bd71b06-ad8b-401f-b898-45418d3a6e25-default-token-fxs22") pod "5bd71b06-ad8b-401f-b898-45418d3a6e25" (UID: "5bd71b06-ad8b-401f-b898-45418d3a6e25")
2021-03-15T10:59:18.154 controller-0 kubelet[105877]: info I0315 10:59:18.152683 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/c19b01d9-c6cd-495b-b5c2-4d9f6e1b913e-default-token-fxs22") pod "c19b01d9-c6cd-495b-b5c2-4d9f6e1b913e" (UID: "c19b01d9-c6cd-495b-b5c2-4d9f6e1b913e")
2021-03-15T10:59:18.154 controller-0 kubelet[105877]: info I0315 10:59:18.152697 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/d0f1e928-bd01-4992-b3be-23d4561d0495-default-token-fxs22") pod "d0f1e928-bd01-4992-b3be-23d4561d0495" (UID: "d0f1e928-bd01-4992-b3be-23d4561d0495")
2021-03-15T10:59:18.154 controller-0 kubelet[105877]: info I0315 10:59:18.152965 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/3862d8ce-f977-478a-bf21-0c1d5a71fb7e-default-token-fxs22") pod "3862d8ce-f977-478a-bf21-0c1d5a71fb7e" (UID: "3862d8ce-f977-478a-bf21-0c1d5a71fb7e")
2021-03-15T10:59:18.154 controller-0 kubelet[105877]: info I0315 10:59:18.152984 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/4c110c70-3629-4c2b-8829-847e0705b394-default-token-fxs22") pod "4c110c70-3629-4c2b-8829-847e0705b394" (UID: "4c110c70-3629-4c2b-8829-847e0705b394")
2021-03-15T10:59:18.154 controller-0 kubelet[105877]: info I0315 10:59:18.153000 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/d5dd148c-8470-4ae4-9785-f06eea1eba7e-default-token-fxs22") pod "d5dd148c-8470-4ae4-9785-f06eea1eba7e" (UID: "d5dd148c-8470-4ae4-9785-f06eea1eba7e")
2021-03-15T10:59:18.154 controller-0 kubelet[105877]: info I0315 10:59:18.153014 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/166dc949-b96b-4554-84fc-260f937e0474-default-token-fxs22") pod "166dc949-b96b-4554-84fc-260f937e0474" (UID: "166dc949-b96b-4554-84fc-260f937e0474")
2021-03-15T10:59:18.182 controller-0 kubelet[105877]: info I0315 10:59:18.181392 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5bd71b06-ad8b-401f-b898-45418d3a6e25-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "5bd71b06-ad8b-401f-b898-45418d3a6e25" (UID: "5bd71b06-ad8b-401f-b898-45418d3a6e25"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:18.182 controller-0 kubelet[105877]: info I0315 10:59:18.181457 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3213e3-073b-4a5b-805d-19d5e6e1355d-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "6c3213e3-073b-4a5b-805d-19d5e6e1355d" (UID: "6c3213e3-073b-4a5b-805d-19d5e6e1355d"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:18.182 controller-0 kubelet[105877]: info I0315 10:59:18.181533 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/58ca8222-2663-47e3-a7a1-3016d3db3b42-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "58ca8222-2663-47e3-a7a1-3016d3db3b42" (UID: "58ca8222-2663-47e3-a7a1-3016d3db3b42"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:18.182 controller-0 kubelet[105877]: info I0315 10:59:18.182813 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0f1e928-bd01-4992-b3be-23d4561d0495-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "d0f1e928-bd01-4992-b3be-23d4561d0495" (UID: "d0f1e928-bd01-4992-b3be-23d4561d0495"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:18.197 controller-0 kubelet[105877]: info I0315 10:59:18.197828 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c19b01d9-c6cd-495b-b5c2-4d9f6e1b913e-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "c19b01d9-c6cd-495b-b5c2-4d9f6e1b913e" (UID: "c19b01d9-c6cd-495b-b5c2-4d9f6e1b913e"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:18.198 controller-0 kubelet[105877]: info I0315 10:59:18.198292 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/166dc949-b96b-4554-84fc-260f937e0474-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "166dc949-b96b-4554-84fc-260f937e0474" (UID: "166dc949-b96b-4554-84fc-260f937e0474"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:18.201 controller-0 kubelet[105877]: info I0315 10:59:18.201600 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c110c70-3629-4c2b-8829-847e0705b394-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "4c110c70-3629-4c2b-8829-847e0705b394" (UID: "4c110c70-3629-4c2b-8829-847e0705b394"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:18.202 controller-0 kubelet[105877]: info I0315 10:59:18.201851 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5dd148c-8470-4ae4-9785-f06eea1eba7e-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "d5dd148c-8470-4ae4-9785-f06eea1eba7e" (UID: "d5dd148c-8470-4ae4-9785-f06eea1eba7e"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:18.202 controller-0 kubelet[105877]: info I0315 10:59:18.201915 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3862d8ce-f977-478a-bf21-0c1d5a71fb7e-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "3862d8ce-f977-478a-bf21-0c1d5a71fb7e" (UID: "3862d8ce-f977-478a-bf21-0c1d5a71fb7e"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:18.254 controller-0 kubelet[105877]: info I0315 10:59:18.253965 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/58ca8222-2663-47e3-a7a1-3016d3db3b42-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:18.254 controller-0 kubelet[105877]: info I0315 10:59:18.253991 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/6c3213e3-073b-4a5b-805d-19d5e6e1355d-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:18.254 controller-0 kubelet[105877]: info I0315 10:59:18.253998 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/5bd71b06-ad8b-401f-b898-45418d3a6e25-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:18.254 controller-0 kubelet[105877]: info I0315 10:59:18.254004 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/c19b01d9-c6cd-495b-b5c2-4d9f6e1b913e-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:18.254 controller-0 kubelet[105877]: info I0315 10:59:18.254010 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/d0f1e928-bd01-4992-b3be-23d4561d0495-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:18.254 controller-0 kubelet[105877]: info I0315 10:59:18.254015 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/3862d8ce-f977-478a-bf21-0c1d5a71fb7e-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:18.254 controller-0 kubelet[105877]: info I0315 10:59:18.254021 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/4c110c70-3629-4c2b-8829-847e0705b394-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:18.254 controller-0 kubelet[105877]: info I0315 10:59:18.254061 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/d5dd148c-8470-4ae4-9785-f06eea1eba7e-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:18.254 controller-0 kubelet[105877]: info I0315 10:59:18.254069 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/166dc949-b96b-4554-84fc-260f937e0474-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:18.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%40, but no knowledge of it
2021-03-15T10:59:18.336 controller-0 kubelet[105877]: info I0315 10:59:18.336044 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 1c0de0f260844c249c043b20c68376ea618f416c2d0f884d568c5815f4852c1a
2021-03-15T10:59:18.352 controller-0 kubelet[105877]: info I0315 10:59:18.349208 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 829212497adb81bb9192970e88af226cd5614b05ceac0eb312f5cc6a9c138a0b
2021-03-15T10:59:18.364 controller-0 kubelet[105877]: info I0315 10:59:18.364008 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 52c625bfd9dee1dcef30c5438002a1eb3f30fbc8894623dd999b5376ca7bb7f8
2021-03-15T10:59:18.412 controller-0 kubelet[105877]: info I0315 10:59:18.410679 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9c7a2ad28f568b3e269ec815f5c781574d143f750efb533df3626e6f60ee5885
2021-03-15T10:59:18.439 controller-0 kubelet[105877]: info I0315 10:59:18.439635 105877 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 9c7a2ad28f568b3e269ec815f5c781574d143f750efb533df3626e6f60ee5885
2021-03-15T10:59:18.443 controller-0 kubelet[105877]: info E0315 10:59:18.443318 105877 remote_runtime.go:295] ContainerStatus "9c7a2ad28f568b3e269ec815f5c781574d143f750efb533df3626e6f60ee5885" from runtime service failed: rpc error: code = Unknown desc = an error occurred when try to find container "9c7a2ad28f568b3e269ec815f5c781574d143f750efb533df3626e6f60ee5885": does not exist
2021-03-15T10:59:18.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%11, but no knowledge of it
2021-03-15T10:59:18.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%44, but no knowledge of it
2021-03-15T10:59:18.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%36, but no knowledge of it
2021-03-15T10:59:19.020 controller-0 kubelet[105877]: info I0315 10:59:19.020642 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:19.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%21, but no knowledge of it
2021-03-15T10:59:19.080 controller-0 kubelet[105877]: info I0315 10:59:19.078253 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/2a0947eb-6b8a-40f9-829f-102c745409f1-default-token-fxs22") pod "test-7484bfb64b-zr6vm" (UID: "2a0947eb-6b8a-40f9-829f-102c745409f1")
2021-03-15T10:59:19.000 controller-0 lldpd[1548]: warning removal request for address of fe80::ecee:eeff:feee:eeee%18, but no knowledge of it
2021-03-15T10:59:19.185 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/2a0947eb-6b8a-40f9-829f-102c745409f1/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:20.221 controller-0 kubelet[105877]: info I0315 10:59:20.219059 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/2ea68f46-5ae5-4ef4-b724-5b930a48b50c-default-token-fxs22") pod "2ea68f46-5ae5-4ef4-b724-5b930a48b50c" (UID: "2ea68f46-5ae5-4ef4-b724-5b930a48b50c")
2021-03-15T10:59:20.221 controller-0 kubelet[105877]: info I0315 10:59:20.219096 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/3d34b63d-f194-4cd5-92d3-5239582b24a1-default-token-fxs22") pod "3d34b63d-f194-4cd5-92d3-5239582b24a1" (UID: "3d34b63d-f194-4cd5-92d3-5239582b24a1")
2021-03-15T10:59:20.221 controller-0 kubelet[105877]: info I0315 10:59:20.219109 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/a65c23f2-4544-492a-a335-9ef07bcaf114-default-token-fxs22") pod "a65c23f2-4544-492a-a335-9ef07bcaf114" (UID: "a65c23f2-4544-492a-a335-9ef07bcaf114")
2021-03-15T10:59:20.221 controller-0 kubelet[105877]: info I0315 10:59:20.219123 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/8740e02b-692e-43b2-ae98-881f21d8381f-default-token-fxs22") pod "8740e02b-692e-43b2-ae98-881f21d8381f" (UID: "8740e02b-692e-43b2-ae98-881f21d8381f")
2021-03-15T10:59:20.221 controller-0 kubelet[105877]: info I0315 10:59:20.219144 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/a7b32821-826d-4ad2-8a48-be9c151404bb-default-token-fxs22") pod "a7b32821-826d-4ad2-8a48-be9c151404bb" (UID: "a7b32821-826d-4ad2-8a48-be9c151404bb")
2021-03-15T10:59:20.221 controller-0 kubelet[105877]: info I0315 10:59:20.219158 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/81c8f94c-238c-4de9-89c4-beebd1ca84f8-default-token-fxs22") pod "81c8f94c-238c-4de9-89c4-beebd1ca84f8" (UID: "81c8f94c-238c-4de9-89c4-beebd1ca84f8")
2021-03-15T10:59:20.221 controller-0 kubelet[105877]: info I0315 10:59:20.219171 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/db29690f-531a-421c-be01-cbd66a55302a-default-token-fxs22") pod "db29690f-531a-421c-be01-cbd66a55302a" (UID: "db29690f-531a-421c-be01-cbd66a55302a")
2021-03-15T10:59:20.221 controller-0 kubelet[105877]: info I0315 10:59:20.219186 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/6b4f670e-41e5-4c5a-98d6-0e8b14be79d5-default-token-fxs22") pod "6b4f670e-41e5-4c5a-98d6-0e8b14be79d5" (UID: "6b4f670e-41e5-4c5a-98d6-0e8b14be79d5")
2021-03-15T10:59:20.221 controller-0 kubelet[105877]: info I0315 10:59:20.219198 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/b6abc015-7398-4152-b33a-654c7f5b34b1-default-token-fxs22") pod "b6abc015-7398-4152-b33a-654c7f5b34b1" (UID: "b6abc015-7398-4152-b33a-654c7f5b34b1")
2021-03-15T10:59:20.221 controller-0 kubelet[105877]: info I0315 10:59:20.219215 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/15a9b75d-87ec-4f6d-8fcf-2f4ad4ff39ce-default-token-fxs22") pod "15a9b75d-87ec-4f6d-8fcf-2f4ad4ff39ce" (UID: "15a9b75d-87ec-4f6d-8fcf-2f4ad4ff39ce")
2021-03-15T10:59:20.236 controller-0 kubelet[105877]: info I0315 10:59:20.236490 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15a9b75d-87ec-4f6d-8fcf-2f4ad4ff39ce-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "15a9b75d-87ec-4f6d-8fcf-2f4ad4ff39ce" (UID: "15a9b75d-87ec-4f6d-8fcf-2f4ad4ff39ce"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:20.245 controller-0 kubelet[105877]: info I0315 10:59:20.245373 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7b32821-826d-4ad2-8a48-be9c151404bb-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "a7b32821-826d-4ad2-8a48-be9c151404bb" (UID: "a7b32821-826d-4ad2-8a48-be9c151404bb"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:20.254 controller-0 kubelet[105877]: info I0315 10:59:20.253043 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ea68f46-5ae5-4ef4-b724-5b930a48b50c-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "2ea68f46-5ae5-4ef4-b724-5b930a48b50c" (UID: "2ea68f46-5ae5-4ef4-b724-5b930a48b50c"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:20.275 controller-0 kubelet[105877]: info I0315 10:59:20.272829 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d34b63d-f194-4cd5-92d3-5239582b24a1-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "3d34b63d-f194-4cd5-92d3-5239582b24a1" (UID: "3d34b63d-f194-4cd5-92d3-5239582b24a1"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:20.277 controller-0 kubelet[105877]: info I0315 10:59:20.275915 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a65c23f2-4544-492a-a335-9ef07bcaf114-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "a65c23f2-4544-492a-a335-9ef07bcaf114" (UID: "a65c23f2-4544-492a-a335-9ef07bcaf114"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:20.291 controller-0 kubelet[105877]: info I0315 10:59:20.291518 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8740e02b-692e-43b2-ae98-881f21d8381f-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "8740e02b-692e-43b2-ae98-881f21d8381f" (UID: "8740e02b-692e-43b2-ae98-881f21d8381f"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:20.304 controller-0 kubelet[105877]: info I0315 10:59:20.302007 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b4f670e-41e5-4c5a-98d6-0e8b14be79d5-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "6b4f670e-41e5-4c5a-98d6-0e8b14be79d5" (UID: "6b4f670e-41e5-4c5a-98d6-0e8b14be79d5"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:20.304 controller-0 kubelet[105877]: info I0315 10:59:20.302655 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db29690f-531a-421c-be01-cbd66a55302a-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "db29690f-531a-421c-be01-cbd66a55302a" (UID: "db29690f-531a-421c-be01-cbd66a55302a"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:20.304 controller-0 kubelet[105877]: info I0315 10:59:20.302702 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81c8f94c-238c-4de9-89c4-beebd1ca84f8-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "81c8f94c-238c-4de9-89c4-beebd1ca84f8" (UID: "81c8f94c-238c-4de9-89c4-beebd1ca84f8"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:20.306 controller-0 kubelet[105877]: info I0315 10:59:20.305674 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6abc015-7398-4152-b33a-654c7f5b34b1-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "b6abc015-7398-4152-b33a-654c7f5b34b1" (UID: "b6abc015-7398-4152-b33a-654c7f5b34b1"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:20.321 controller-0 kubelet[105877]: info I0315 10:59:20.319602 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/b6abc015-7398-4152-b33a-654c7f5b34b1-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:20.321 controller-0 kubelet[105877]: info I0315 10:59:20.319629 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/15a9b75d-87ec-4f6d-8fcf-2f4ad4ff39ce-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:20.321 controller-0 kubelet[105877]: info I0315 10:59:20.319635 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/2ea68f46-5ae5-4ef4-b724-5b930a48b50c-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:20.321 controller-0 kubelet[105877]: info I0315 10:59:20.319640 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/3d34b63d-f194-4cd5-92d3-5239582b24a1-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:20.321 controller-0 kubelet[105877]: info I0315 10:59:20.319646 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/a65c23f2-4544-492a-a335-9ef07bcaf114-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:20.321 controller-0 kubelet[105877]: info I0315 10:59:20.319652 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/8740e02b-692e-43b2-ae98-881f21d8381f-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:20.321 controller-0 kubelet[105877]: info I0315 10:59:20.319657 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/a7b32821-826d-4ad2-8a48-be9c151404bb-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:20.321 controller-0 kubelet[105877]: info I0315 10:59:20.319662 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/81c8f94c-238c-4de9-89c4-beebd1ca84f8-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:20.321 controller-0 kubelet[105877]: info I0315 10:59:20.319669 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/db29690f-531a-421c-be01-cbd66a55302a-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:20.321 controller-0 kubelet[105877]: info I0315 10:59:20.319676 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/6b4f670e-41e5-4c5a-98d6-0e8b14be79d5-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Listen normally on 50 cali2a946449cba fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #47 cali46b03095acb, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #45 calic81f215f57f, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #44 calia1e9ef4a8f5, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #43 calibe18b8da647, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #42 *multiple*, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #39 calicbb9a4acb86, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #38 cali0a09b68f47c, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #37 cali53ebe99d223, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #36 cali1fbf706526b, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #35 cali8d9fa322cb2, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #34 cali95268ea00d7, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #33 calidc935fc553d, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #32 cali542020ba3a8, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #31 cali1d997cc966f, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #29 cali54c4e2dd08f, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #28 cali3e5b0c60e29, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #27 calif27c695d51c, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #26 cali4589981c65d, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=61 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #25 cali09a5641c863, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #24 cali5163e0839fc, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #23 calidae17860453, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #22 *multiple*, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #21 calic126abb9a30, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #20 cali224177977fe, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #19 calia446f3a1608, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #18 calib8337b53dd2, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #17 cali36febaf9e44, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #16 cali5cf40ae32ab, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #15 cali11646e65f05, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: info Deleting interface #14 calia5cec93509a, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=74 secs
2021-03-15T10:59:20.000 controller-0 ntpd[82287]: debug new interface(s) found: waking up resolver
2021-03-15T10:59:21.690 controller-0 kubelet[105877]: info I0315 10:59:21.688379 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:21.770 controller-0 kubelet[105877]: info I0315 10:59:21.769172 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/e37200b4-5c8a-4cc0-addb-b31bf16f7d47-default-token-fxs22") pod "test-7484bfb64b-drs8k" (UID: "e37200b4-5c8a-4cc0-addb-b31bf16f7d47")
2021-03-15T10:59:22.295 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/e37200b4-5c8a-4cc0-addb-b31bf16f7d47/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:22.403 controller-0 kubelet[105877]: info I0315 10:59:22.402942 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/9e84b7ab-5ee3-4cdc-8c60-9c4dcb900b6b-default-token-fxs22") pod "9e84b7ab-5ee3-4cdc-8c60-9c4dcb900b6b" (UID: "9e84b7ab-5ee3-4cdc-8c60-9c4dcb900b6b")
2021-03-15T10:59:22.403 controller-0 kubelet[105877]: info I0315 10:59:22.402992 105877 reconciler.go:196] operationExecutor.UnmountVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/b5faadb4-b4f2-4a95-9b8c-3b5ae4060e50-default-token-fxs22") pod "b5faadb4-b4f2-4a95-9b8c-3b5ae4060e50" (UID: "b5faadb4-b4f2-4a95-9b8c-3b5ae4060e50")
2021-03-15T10:59:22.415 controller-0 kubelet[105877]: info I0315 10:59:22.415138 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5faadb4-b4f2-4a95-9b8c-3b5ae4060e50-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "b5faadb4-b4f2-4a95-9b8c-3b5ae4060e50" (UID: "b5faadb4-b4f2-4a95-9b8c-3b5ae4060e50"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:22.428 controller-0 kubelet[105877]: info I0315 10:59:22.427925 105877 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e84b7ab-5ee3-4cdc-8c60-9c4dcb900b6b-default-token-fxs22" (OuterVolumeSpecName: "default-token-fxs22") pod "9e84b7ab-5ee3-4cdc-8c60-9c4dcb900b6b" (UID: "9e84b7ab-5ee3-4cdc-8c60-9c4dcb900b6b"). InnerVolumeSpecName "default-token-fxs22". PluginName "kubernetes.io/secret", VolumeGidValue ""
2021-03-15T10:59:22.504 controller-0 kubelet[105877]: info I0315 10:59:22.504368 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/9e84b7ab-5ee3-4cdc-8c60-9c4dcb900b6b-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:22.504 controller-0 kubelet[105877]: info I0315 10:59:22.504410 105877 reconciler.go:319] Volume detached for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/b5faadb4-b4f2-4a95-9b8c-3b5ae4060e50-default-token-fxs22") on node "controller-0" DevicePath ""
2021-03-15T10:59:22.859 controller-0 kubelet[105877]: info I0315 10:59:22.854741 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:23.018 controller-0 kubelet[105877]: info I0315 10:59:23.018030 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/59b3ed25-686c-4b97-9ccd-83986c7ba570-default-token-fxs22") pod "test-7484bfb64b-m2q8s" (UID: "59b3ed25-686c-4b97-9ccd-83986c7ba570")
2021-03-15T10:59:23.138 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/59b3ed25-686c-4b97-9ccd-83986c7ba570/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:23.985 controller-0 kubelet[105877]: info I0315 10:59:23.981979 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:23.992 controller-0 kubelet[105877]: info I0315 10:59:23.992401 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:24.134 controller-0 kubelet[105877]: info I0315 10:59:24.134549 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/1cfef526-da0b-4476-8cd3-c977b6c5bdff-default-token-fxs22") pod "test-7484bfb64b-nqmfg" (UID: "1cfef526-da0b-4476-8cd3-c977b6c5bdff")
2021-03-15T10:59:24.134 controller-0 kubelet[105877]: info I0315 10:59:24.134611 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/cef94dfe-e600-4e51-8033-c8303f86b5c2-default-token-fxs22") pod "test-7484bfb64b-xkvmx" (UID: "cef94dfe-e600-4e51-8033-c8303f86b5c2")
2021-03-15T10:59:24.253 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/1cfef526-da0b-4476-8cd3-c977b6c5bdff/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:24.255 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/cef94dfe-e600-4e51-8033-c8303f86b5c2/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:25.233 controller-0 kubelet[105877]: info I0315 10:59:25.233592 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:25.402 controller-0 kubelet[105877]: info I0315 10:59:25.393691 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/06b878a4-7ece-4b36-96a3-e932fe35090a-default-token-fxs22") pod "test-7484bfb64b-5gxtj" (UID: "06b878a4-7ece-4b36-96a3-e932fe35090a")
2021-03-15T10:59:25.497 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/06b878a4-7ece-4b36-96a3-e932fe35090a/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:26.263 controller-0 kubelet[105877]: info I0315 10:59:26.261257 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:26.265 controller-0 kubelet[105877]: info I0315 10:59:26.265436 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:26.422 controller-0 kubelet[105877]: info I0315 10:59:26.422014 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/ae0fe001-3da5-4b66-8149-dc5e65bdecc2-default-token-fxs22") pod "test-7484bfb64b-6hncl" (UID: "ae0fe001-3da5-4b66-8149-dc5e65bdecc2")
2021-03-15T10:59:26.422 controller-0 kubelet[105877]: info I0315 10:59:26.422058 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/65e19997-37fe-4538-89f6-3c530b1de9f0-default-token-fxs22") pod "test-7484bfb64b-zw5bm" (UID: "65e19997-37fe-4538-89f6-3c530b1de9f0")
2021-03-15T10:59:26.527 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/65e19997-37fe-4538-89f6-3c530b1de9f0/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:26.527 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/ae0fe001-3da5-4b66-8149-dc5e65bdecc2/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:27.459 controller-0 kubelet[105877]: info I0315 10:59:27.459424 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:27.540 controller-0 kubelet[105877]: info I0315 10:59:27.538728 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:27.648 controller-0 kubelet[105877]: info I0315 10:59:27.648090 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/b4ab4881-ade8-47b2-ad87-0be54c92e123-default-token-fxs22") pod "test-7484bfb64b-fh6fm" (UID: "b4ab4881-ade8-47b2-ad87-0be54c92e123")
2021-03-15T10:59:27.648 controller-0 kubelet[105877]: info I0315 10:59:27.648128 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/047f676e-f0b8-48e3-bbc3-b490792383bc-default-token-fxs22") pod "test-7484bfb64b-6p7kk" (UID: "047f676e-f0b8-48e3-bbc3-b490792383bc")
2021-03-15T10:59:27.754 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/047f676e-f0b8-48e3-bbc3-b490792383bc/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:27.760 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b4ab4881-ade8-47b2-ad87-0be54c92e123/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:28.391 controller-0 kubelet[105877]: info I0315 10:59:28.390574 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:28.472 controller-0 kubelet[105877]: info I0315 10:59:28.469549 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/d6156136-6347-40b4-834e-c6f0d97a917f-default-token-fxs22") pod "test-7484bfb64b-t5h49" (UID: "d6156136-6347-40b4-834e-c6f0d97a917f")
2021-03-15T10:59:28.490 controller-0 kubelet[105877]: info I0315 10:59:28.490116 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:28.573 controller-0 kubelet[105877]: info I0315 10:59:28.572978 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/d3075215-80e2-479f-a0a8-b6f1c0b9e483-default-token-fxs22") pod "test-7484bfb64b-j4qm9" (UID: "d3075215-80e2-479f-a0a8-b6f1c0b9e483")
2021-03-15T10:59:28.587 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/d6156136-6347-40b4-834e-c6f0d97a917f/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:28.680 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/d3075215-80e2-479f-a0a8-b6f1c0b9e483/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:30.171 controller-0 kubelet[105877]: info I0315 10:59:29.968334 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:30.176 controller-0 kubelet[105877]: info I0315 10:59:30.172187 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/df79c83b-3d14-42c8-82cb-bab18d3656de-default-token-fxs22") pod "test-7484bfb64b-zrtc8" (UID: "df79c83b-3d14-42c8-82cb-bab18d3656de")
2021-03-15T10:59:30.298 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/df79c83b-3d14-42c8-82cb-bab18d3656de/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:31.408 controller-0 kubelet[105877]: info I0315 10:59:31.408099 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:31.446 controller-0 kubelet[105877]: info I0315 10:59:31.440985 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/6e693e31-1d6d-409a-bbc1-6d078a508975-default-token-fxs22") pod "test-7484bfb64b-wb9xt" (UID: "6e693e31-1d6d-409a-bbc1-6d078a508975")
2021-03-15T10:59:31.464 controller-0 kubelet[105877]: info I0315 10:59:31.463969 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:31.470 controller-0 kubelet[105877]: info I0315 10:59:31.469875 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:31.478 controller-0 kubelet[105877]: info I0315 10:59:31.478053 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:31.555 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/6e693e31-1d6d-409a-bbc1-6d078a508975/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:31.642 controller-0 kubelet[105877]: info I0315 10:59:31.642583 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/583c8236-ec35-47f6-b394-f31ddcbdcb25-default-token-fxs22") pod "test-7484bfb64b-8ttnm" (UID: "583c8236-ec35-47f6-b394-f31ddcbdcb25")
2021-03-15T10:59:31.642 controller-0 kubelet[105877]: info I0315 10:59:31.642622 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/4be5d6ef-6ba9-411f-bdec-73c27103e3b1-default-token-fxs22") pod "test-7484bfb64b-7pkw2" (UID: "4be5d6ef-6ba9-411f-bdec-73c27103e3b1")
2021-03-15T10:59:31.642 controller-0 kubelet[105877]: info I0315 10:59:31.642641 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/3c202583-21db-4a3b-9dd8-b046b7c4c38c-default-token-fxs22") pod "test-7484bfb64b-ws7qf" (UID: "3c202583-21db-4a3b-9dd8-b046b7c4c38c")
2021-03-15T10:59:31.749 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/3c202583-21db-4a3b-9dd8-b046b7c4c38c/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:31.758 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/583c8236-ec35-47f6-b394-f31ddcbdcb25/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:31.870 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/4be5d6ef-6ba9-411f-bdec-73c27103e3b1/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:32.000 controller-0 k8s-pod-recovery(105887): info : Waiting on pod transitions to stabilize... 39 pods are not Running/Completed
2021-03-15T10:59:33.238 controller-0 kubelet[105877]: info I0315 10:59:33.225731 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:33.285 controller-0 kubelet[105877]: info I0315 10:59:33.282919 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:33.317 controller-0 kubelet[105877]: info I0315 10:59:33.317040 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:33.417 controller-0 kubelet[105877]: info I0315 10:59:33.417083 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/b0f0ba8e-7f56-4009-8dfe-146eb8493170-default-token-fxs22") pod "test-7484bfb64b-b7k45" (UID: "b0f0ba8e-7f56-4009-8dfe-146eb8493170")
2021-03-15T10:59:33.417 controller-0 kubelet[105877]: info I0315 10:59:33.417119 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/0196ffdf-3fc0-4eb0-97c1-d86c33cc1754-default-token-fxs22") pod "test-7484bfb64b-6f2v5" (UID: "0196ffdf-3fc0-4eb0-97c1-d86c33cc1754")
2021-03-15T10:59:33.417 controller-0 kubelet[105877]: info I0315 10:59:33.417165 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/0efafcb9-e1c0-48ae-974b-ef3c5a981743-default-token-fxs22") pod "test-7484bfb64b-pt4v5" (UID: "0efafcb9-e1c0-48ae-974b-ef3c5a981743")
2021-03-15T10:59:33.602 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b0f0ba8e-7f56-4009-8dfe-146eb8493170/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:33.603 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/0196ffdf-3fc0-4eb0-97c1-d86c33cc1754/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:33.604 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/0efafcb9-e1c0-48ae-974b-ef3c5a981743/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:33.985 controller-0 kubelet[105877]: info I0315 10:59:33.984793 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:33.989 controller-0 kubelet[105877]: info I0315 10:59:33.989555 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:34.003 controller-0 kubelet[105877]: info I0315 10:59:34.002224 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:34.130 controller-0 kubelet[105877]: info I0315 10:59:34.129549 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/d091ca5b-ab8d-4c1d-a158-fae8fc36e556-default-token-fxs22") pod "test-7484bfb64b-mfckm" (UID: "d091ca5b-ab8d-4c1d-a158-fae8fc36e556")
2021-03-15T10:59:34.130 controller-0 kubelet[105877]: info I0315 10:59:34.129587 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/5ef651d1-41f3-41a4-90ef-bbfd9345b4dc-default-token-fxs22") pod "test-7484bfb64b-h2n6p" (UID: "5ef651d1-41f3-41a4-90ef-bbfd9345b4dc")
2021-03-15T10:59:34.130 controller-0 kubelet[105877]: info I0315 10:59:34.129605 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/fa23bd51-2052-44a6-b0a2-e86977019263-default-token-fxs22") pod "test-7484bfb64b-6x862" (UID: "fa23bd51-2052-44a6-b0a2-e86977019263")
2021-03-15T10:59:34.236 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/d091ca5b-ab8d-4c1d-a158-fae8fc36e556/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:34.248 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/fa23bd51-2052-44a6-b0a2-e86977019263/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:34.248 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/5ef651d1-41f3-41a4-90ef-bbfd9345b4dc/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:34.506 controller-0 kubelet[105877]: info W0315 10:59:34.506818 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-127915.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-127915.scope: no such file or directory
2021-03-15T10:59:34.506 controller-0 kubelet[105877]: info W0315 10:59:34.506870 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-127915.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-127915.scope: no such file or directory
2021-03-15T10:59:34.506 controller-0 kubelet[105877]: info W0315 10:59:34.506891 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-127915.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-127915.scope: no such file or directory
2021-03-15T10:59:34.506 controller-0 kubelet[105877]: info W0315 10:59:34.506911 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-127915.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-127915.scope: no such file or directory
2021-03-15T10:59:34.506 controller-0 kubelet[105877]: info W0315 10:59:34.506923 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-127915.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-127915.scope: no such file or directory
2021-03-15T10:59:34.506 controller-0 kubelet[105877]: info W0315 10:59:34.506933 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-127911.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-127911.scope: no such file or directory
2021-03-15T10:59:34.506 controller-0 kubelet[105877]: info W0315 10:59:34.506943 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-127911.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-127911.scope: no such file or directory
2021-03-15T10:59:34.506 controller-0 kubelet[105877]: info W0315 10:59:34.506952 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-127911.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-127911.scope: no such file or directory
2021-03-15T10:59:34.506 controller-0 kubelet[105877]: info W0315 10:59:34.506963 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-127911.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-127911.scope: no such file or directory
2021-03-15T10:59:34.506 controller-0 kubelet[105877]: info W0315 10:59:34.506974 105877 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-127911.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-127911.scope: no such file or directory
2021-03-15T10:59:35.709 controller-0 kubelet[105877]: info I0315 10:59:35.708783 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:35.768 controller-0 kubelet[105877]: info I0315 10:59:35.768277 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/de21d253-3aa1-42b3-8b01-7175ba8f0e71-default-token-fxs22") pod "test-7484bfb64b-q2g28" (UID: "de21d253-3aa1-42b3-8b01-7175ba8f0e71")
2021-03-15T10:59:35.874 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/de21d253-3aa1-42b3-8b01-7175ba8f0e71/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:36.984 controller-0 kubelet[105877]: info I0315 10:59:36.983974 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:37.481 controller-0 kubelet[105877]: info I0315 10:59:37.480738 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/9765bba1-0578-4cee-b174-ac8542c6c1ce-default-token-fxs22") pod "test-7484bfb64b-twbw6" (UID: "9765bba1-0578-4cee-b174-ac8542c6c1ce")
2021-03-15T10:59:37.589 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/9765bba1-0578-4cee-b174-ac8542c6c1ce/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:41.207 controller-0 collectd[106099]: info platform memory usage: Usage: 38.7%; Reserved: 4600.0 MiB, Platform: 1779.6 MiB (Base: 1296.5, k8s-system: 483.1), k8s-addon: 0.0
2021-03-15T10:59:41.207 controller-0 collectd[106099]: info 4K memory usage: Anon: 7.1%, Anon: 1771.6 MiB, cgroup-rss: 1838.5 MiB, Avail: 23216.9 MiB, Total: 24988.5 MiB
2021-03-15T10:59:41.207 controller-0 collectd[106099]: info 4K numa memory usage: node0, Anon: 6.96%, Anon: 1771.6 MiB, Avail: 23680.8 MiB, Total: 25452.4 MiB
2021-03-15T10:59:41.786 controller-0 collectd[106099]: info platform cpu usage plugin Usage: 45.3% (avg per cpu); cpus: 2, Platform: 69.8% (Base: 54.0, k8s-system: 15.9), k8s-addon: 0.0
2021-03-15T10:59:41.802 controller-0 collectd[106099]: info alarm notifier reading: 45.34 % usage - Platform CPU
2021-03-15T10:59:43.999 controller-0 kubelet[105877]: info I0315 10:59:43.999735 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:44.262 controller-0 kubelet[105877]: info I0315 10:59:44.262056 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/bc726be8-a782-4a27-889b-086f769712ac-default-token-fxs22") pod "test-7484bfb64b-pz94p" (UID: "bc726be8-a782-4a27-889b-086f769712ac")
2021-03-15T10:59:44.375 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/bc726be8-a782-4a27-889b-086f769712ac/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 51 cali8d770b88f86 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 52 cali9d7018e736a fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 53 cali1d14cce4fa6 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 54 cali9357e81bf35 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 55 cali99a7a2e2be3 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 56 cali2e47fe4c6a9 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 57 califc4f29ad598 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 58 calia4c92f29791 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 59 cali36f8fabc6b1 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 60 caliea17fc3c259 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 61 cali1a5467da0ef fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 62 cali69fbb7d5e3d fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 63 cali9c0e6f2c9bb fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 64 cali4f4e47a4eb1 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 65 cali7fdbf483dea fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 66 cali6c97220f370 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 67 cali1b3c66ee5bc fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 68 calieda990d89f3 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 69 cali34464335f55 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 70 calie07ec0bab45 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 71 cali357e411b4d2 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 72 cali09b4e106cec fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 73 calif5924c1e133 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 74 calieba4a521ff7 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: info Listen normally on 75 cali6847d5a7bae fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:46.000 controller-0 ntpd[82287]: debug new interface(s) found: waking up resolver
2021-03-15T10:59:47.487 controller-0 kubelet[105877]: info I0315 10:59:47.487206 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:47.611 controller-0 kubelet[105877]: info I0315 10:59:47.608396 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/ef734c66-a126-47fd-8875-609442d90001-default-token-fxs22") pod "test-7484bfb64b-bqvn9" (UID: "ef734c66-a126-47fd-8875-609442d90001")
2021-03-15T10:59:47.725 controller-0 kubelet[105877]: info I0315 10:59:47.723833 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:47.733 controller-0 kubelet[105877]: info I0315 10:59:47.730399 105877 topology_manager.go:233] [topologymanager] Topology Admit Handler
2021-03-15T10:59:47.740 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/ef734c66-a126-47fd-8875-609442d90001/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:47.827 controller-0 kubelet[105877]: info I0315 10:59:47.827324 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/e9b3513e-daa1-4f70-95df-532e0feccb0e-default-token-fxs22") pod "test-7484bfb64b-8bs5b" (UID: "e9b3513e-daa1-4f70-95df-532e0feccb0e")
2021-03-15T10:59:47.000 controller-0 k8s-pod-recovery(105887): info : Waiting on pod transitions to stabilize... 19 pods are not Running/Completed
2021-03-15T10:59:47.928 controller-0 kubelet[105877]: info I0315 10:59:47.928231 105877 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-fxs22" (UniqueName: "kubernetes.io/secret/e9ad2fbb-ba21-4571-8c32-12126c584fe6-default-token-fxs22") pod "test-7484bfb64b-slvpg" (UID: "e9ad2fbb-ba21-4571-8c32-12126c584fe6")
2021-03-15T10:59:47.933 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/e9b3513e-daa1-4f70-95df-532e0feccb0e/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:48.032 controller-0 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/e9ad2fbb-ba21-4571-8c32-12126c584fe6/volumes/kubernetes.io~secret/default-token-fxs22.
2021-03-15T10:59:58.000 controller-0 ntpd[82287]: info Listen normally on 76 cali388e3fc5582 fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:58.000 controller-0 ntpd[82287]: info Listen normally on 77 cali4342e88781e fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:58.000 controller-0 ntpd[82287]: info Listen normally on 78 cali755080ed22c fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:58.000 controller-0 ntpd[82287]: info Listen normally on 79 calic8d468b312d fe80::ecee:eeff:feee:eeee UDP 123
2021-03-15T10:59:58.000 controller-0 ntpd[82287]: debug new interface(s) found: waking up resolver
2021-03-15T11:00:01.813 controller-0 systemd[1]: info Created slice User Slice of root.
2021-03-15T11:00:01.816 controller-0 systemd[1]: info Started Session 3 of user root.
2021-03-15T11:00:01.818 controller-0 systemd[1]: info Started Session 2 of user root.
2021-03-15T11:00:01.874 controller-0 systemd[1]: info Removed slice User Slice of root.
2021-03-15T11:00:03.000 controller-0 k8s-pod-recovery(105887): info : Waiting on pod transitions to stabilize... 0 pods are not Running/Completed
2021-03-15T11:00:11.599 controller-0 collectd[106099]: info platform memory usage: Usage: 37.9%; Reserved: 4600.0 MiB, Platform: 1744.0 MiB (Base: 1259.7, k8s-system: 484.3), k8s-addon: 0.0
2021-03-15T11:00:11.599 controller-0 collectd[106099]: info 4K memory usage: Anon: 6.9%, Anon: 1729.4 MiB, cgroup-rss: 1808.4 MiB, Avail: 23217.5 MiB, Total: 24946.9 MiB
2021-03-15T11:00:11.599 controller-0 collectd[106099]: info 4K numa memory usage: node0, Anon: 6.81%, Anon: 1729.4 MiB, Avail: 23670.9 MiB, Total: 25400.3 MiB
2021-03-15T11:00:11.811 controller-0 collectd[106099]: info platform cpu usage plugin Usage: 26.5% (avg per cpu); cpus: 2, Platform: 41.8% (Base: 32.4, k8s-system: 9.4), k8s-addon: 0.0
2021-03-15T11:00:11.811 controller-0 collectd[106099]: info alarm notifier reading: 26.55 % usage - Platform CPU
2021-03-15T11:00:18.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 0 seconds.
2021-03-15T11:00:33.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 15 seconds.
2021-03-15T11:00:40.779 controller-0 collectd[106099]: info remote logging server 100.118:host=controller-0 alarm clear
2021-03-15T11:00:40.779 controller-0 collectd[106099]: info remote logging server is disabled
2021-03-15T11:00:40.797 controller-0 collectd[106099]: info platform memory usage: Usage: 38.2%; Reserved: 4600.0 MiB, Platform: 1756.0 MiB (Base: 1271.1, k8s-system: 484.9), k8s-addon: 0.0
2021-03-15T11:00:40.797 controller-0 collectd[106099]: info 4K memory usage: Anon: 6.9%, Anon: 1724.1 MiB, cgroup-rss: 1820.5 MiB, Avail: 23221.7 MiB, Total: 24945.8 MiB
2021-03-15T11:00:40.797 controller-0 collectd[106099]: info 4K numa memory usage: node0, Anon: 6.79%, Anon: 1724.1 MiB, Avail: 23657.0 MiB, Total: 25381.2 MiB
2021-03-15T11:00:41.352 controller-0 collectd[106099]: info platform cpu usage plugin Usage: 15.9% (avg per cpu); cpus: 2, Platform: 27.1% (Base: 21.7, k8s-system: 5.3), k8s-addon: 0.0
2021-03-15T11:00:41.352 controller-0 collectd[106099]: info alarm notifier reading: 15.90 % usage - Platform CPU
2021-03-15T11:00:48.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 30 seconds.
2021-03-15T11:01:01.904 controller-0 systemd[1]: info Created slice User Slice of root.
2021-03-15T11:01:01.909 controller-0 systemd[1]: info Started Session 4 of user root.
2021-03-15T11:01:02.012 controller-0 systemd[1]: info Removed slice User Slice of root.
2021-03-15T11:01:04.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 45 seconds.
2021-03-15T11:01:10.771 controller-0 collectd[106099]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-0","degrade":"clear","resource":""}
2021-03-15T11:01:10.826 controller-0 collectd[106099]: info platform cpu usage plugin Usage: 16.0% (avg per cpu); cpus: 2, Platform: 25.6% (Base: 20.1, k8s-system: 5.5), k8s-addon: 0.0
2021-03-15T11:01:10.867 controller-0 collectd[106099]: info platform memory usage: Usage: 38.3%; Reserved: 4600.0 MiB, Platform: 1760.2 MiB (Base: 1275.1, k8s-system: 485.1), k8s-addon: 0.0
2021-03-15T11:01:10.867 controller-0 collectd[106099]: info 4K memory usage: Anon: 6.9%, Anon: 1723.5 MiB, cgroup-rss: 1824.8 MiB, Avail: 23222.7 MiB, Total: 24946.2 MiB
2021-03-15T11:01:10.867 controller-0 collectd[106099]: info 4K numa memory usage: node0, Anon: 6.79%, Anon: 1723.5 MiB, Avail: 23654.0 MiB, Total: 25377.5 MiB
2021-03-15T11:01:19.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 60 seconds.
2021-03-15T11:01:35.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 75 seconds.
2021-03-15T11:01:40.493 controller-0 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked.
2021-03-15T11:01:40.501 controller-0 systemd[1]: info Starting dnf makecache...
2021-03-15T11:01:40.684 controller-0 dnf[141329]: info Metadata cache refreshed recently.
2021-03-15T11:01:40.699 controller-0 systemd[1]: info Started dnf makecache.
2021-03-15T11:01:40.778 controller-0 collectd[106099]: info platform cpu usage plugin Usage: 15.0% (avg per cpu); cpus: 2, Platform: 25.9% (Base: 20.9, k8s-system: 5.0), k8s-addon: 0.0
2021-03-15T11:01:40.783 controller-0 collectd[106099]: info platform memory usage: Usage: 38.4%; Reserved: 4600.0 MiB, Platform: 1766.1 MiB (Base: 1280.8, k8s-system: 485.3), k8s-addon: 0.0
2021-03-15T11:01:40.784 controller-0 collectd[106099]: info 4K memory usage: Anon: 6.9%, Anon: 1726.5 MiB, cgroup-rss: 1830.7 MiB, Avail: 23217.8 MiB, Total: 24944.3 MiB
2021-03-15T11:01:40.784 controller-0 collectd[106099]: info 4K numa memory usage: node0, Anon: 6.80%, Anon: 1726.5 MiB, Avail: 23646.0 MiB, Total: 25372.4 MiB
2021-03-15T11:01:50.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-5gxtj
2021-03-15T11:01:50.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-5gxtj: recovered
2021-03-15T11:01:50.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-6f2v5
2021-03-15T11:01:50.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-6f2v5: recovered
2021-03-15T11:01:50.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-6hncl
2021-03-15T11:01:50.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-6hncl: recovered
2021-03-15T11:01:50.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-6p7kk
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-6p7kk: recovered
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-6x862
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-6x862: recovered
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-7pkw2
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-7pkw2: recovered
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-8bs5b
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-8bs5b: recovered
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-8ttnm
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-8ttnm: recovered
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-b7k45
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-b7k45: recovered
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-bqvn9
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-bqvn9: recovered
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-drs8k
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-drs8k: recovered
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-fh6fm
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-fh6fm: recovered
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-h2n6p
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-h2n6p: recovered
2021-03-15T11:01:51.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-j4qm9
2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-j4qm9: recovered
2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-m2q8s
2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-m2q8s: recovered
2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-mfckm
2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-mfckm: recovered
2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-nqmfg
2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-nqmfg: recovered
2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-pt4v5
2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-pt4v5: recovered
2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-pz94p
2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-pz94p: recovered
2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-q2g28
2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-q2g28: recovered
2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-slvpg
2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info
: default/test-7484bfb64b-slvpg: recovered 2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-t5h49 2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-t5h49: recovered 2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-twbw6 2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-twbw6: recovered 2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-wb9xt 2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-wb9xt: recovered 2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-ws7qf 2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-ws7qf: recovered 2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-xkvmx 2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-xkvmx: recovered 2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-zr6vm 2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-zr6vm: recovered 2021-03-15T11:01:52.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-zrpm9 2021-03-15T11:01:53.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-zrpm9: recovered 2021-03-15T11:01:53.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-zrtc8 2021-03-15T11:01:53.000 
controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-zrtc8: recovered 2021-03-15T11:01:53.000 controller-0 k8s-pod-recovery(105887): info : restart-on-reboot labeled pods: Verifying: default test-7484bfb64b-zw5bm 2021-03-15T11:01:53.000 controller-0 k8s-pod-recovery(105887): info : default/test-7484bfb64b-zw5bm: recovered 2021-03-15T11:01:53.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 0 seconds. 2021-03-15T11:02:08.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 15 seconds. 2021-03-15T11:02:10.802 controller-0 collectd[106099]: info platform cpu usage plugin Usage: 19.3% (avg per cpu); cpus: 2, Platform: 31.2% (Base: 25.8, k8s-system: 5.4), k8s-addon: 0.0 2021-03-15T11:02:10.807 controller-0 collectd[106099]: info platform memory usage: Usage: 38.5%; Reserved: 4600.0 MiB, Platform: 1771.6 MiB (Base: 1285.6, k8s-system: 486.0), k8s-addon: 0.0 2021-03-15T11:02:10.808 controller-0 collectd[106099]: info 4K memory usage: Anon: 6.9%, Anon: 1730.5 MiB, cgroup-rss: 1836.4 MiB, Avail: 23214.0 MiB, Total: 24944.5 MiB 2021-03-15T11:02:10.808 controller-0 collectd[106099]: info 4K numa memory usage: node0, Anon: 6.82%, Anon: 1730.5 MiB, Avail: 23641.0 MiB, Total: 25371.5 MiB 2021-03-15T11:02:24.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 30 seconds. 2021-03-15T11:02:39.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 45 seconds. 
2021-03-15T11:02:40.775 controller-0 collectd[106099]: info platform cpu usage plugin Usage: 15.0% (avg per cpu); cpus: 2, Platform: 25.0% (Base: 20.1, k8s-system: 4.9), k8s-addon: 0.0 2021-03-15T11:02:40.790 controller-0 collectd[106099]: info platform memory usage: Usage: 38.6%; Reserved: 4600.0 MiB, Platform: 1776.9 MiB (Base: 1289.5, k8s-system: 487.4), k8s-addon: 0.0 2021-03-15T11:02:40.790 controller-0 collectd[106099]: info 4K memory usage: Anon: 7.0%, Anon: 1733.9 MiB, cgroup-rss: 1841.8 MiB, Avail: 23209.7 MiB, Total: 24943.6 MiB 2021-03-15T11:02:40.790 controller-0 collectd[106099]: info 4K numa memory usage: node0, Anon: 6.83%, Anon: 1733.9 MiB, Avail: 23634.4 MiB, Total: 25368.3 MiB 2021-03-15T11:02:54.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 60 seconds. 2021-03-15T11:03:09.000 controller-0 k8s-pod-recovery(105887): info : Pods transitions are stable... for 75 seconds. 2021-03-15T11:03:10.777 controller-0 collectd[106099]: info platform cpu usage plugin Usage: 16.8% (avg per cpu); cpus: 2, Platform: 26.9% (Base: 21.4, k8s-system: 5.6), k8s-addon: 0.0 2021-03-15T11:03:10.784 controller-0 collectd[106099]: info platform memory usage: Usage: 38.7%; Reserved: 4600.0 MiB, Platform: 1781.3 MiB (Base: 1294.0, k8s-system: 487.3), k8s-addon: 0.0 2021-03-15T11:03:10.784 controller-0 collectd[106099]: info 4K memory usage: Anon: 7.0%, Anon: 1736.5 MiB, cgroup-rss: 1845.9 MiB, Avail: 23207.3 MiB, Total: 24943.8 MiB 2021-03-15T11:03:10.784 controller-0 collectd[106099]: info 4K numa memory usage: node0, Anon: 6.85%, Anon: 1736.5 MiB, Avail: 23630.5 MiB, Total: 25367.0 MiB 2021-03-15T11:03:10.000 controller-0 affine-tasks.sh(2360): info : Recovery wait, elapsed 515 seconds. 
Reason: nova-compute pod not running 2021-03-15T11:03:24.000 controller-0 k8s-pod-recovery(105887): info : Unknown pods: None present for namespace: openstack 2021-03-15T11:03:24.000 controller-0 k8s-pod-recovery(105887): info : Unknown pods: None present for namespace: monitor 2021-03-15T11:03:25.000 controller-0 k8s-pod-recovery(105887): info : NodeAffinity pods: None present. 2021-03-15T11:03:25.000 controller-0 k8s-pod-recovery(148434): info : Stopping. 2021-03-15T11:03:40.781 controller-0 collectd[106099]: info platform cpu usage plugin Usage: 17.2% (avg per cpu); cpus: 2, Platform: 22.9% (Base: 17.0, k8s-system: 5.8), k8s-addon: 0.0 2021-03-15T11:03:40.783 controller-0 collectd[106099]: info platform memory usage: Usage: 38.6%; Reserved: 4600.0 MiB, Platform: 1775.4 MiB (Base: 1287.8, k8s-system: 487.7), k8s-addon: 0.0 2021-03-15T11:03:40.783 controller-0 collectd[106099]: info 4K memory usage: Anon: 6.9%, Anon: 1730.5 MiB, cgroup-rss: 1840.5 MiB, Avail: 23215.5 MiB, Total: 24946.0 MiB 2021-03-15T11:03:40.783 controller-0 collectd[106099]: info 4K numa memory usage: node0, Anon: 6.82%, Anon: 1730.5 MiB, Avail: 23638.4 MiB, Total: 25368.9 MiB