2019-11-04T18:42:15.260 controller-1 systemd[1]: info Reloading System Logger Daemon.
2019-11-04T18:42:15.262 controller-1 systemd[1]: info Reloaded System Logger Daemon.
2019-11-04T18:42:23.644 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 11.8% (avg per cpu); cpus: 36, Platform: 10.8% (Base: 9.7, k8s-system: 1.0), k8s-addon: 0.9
2019-11-04T18:42:23.650 controller-1 collectd[12249]: info platform memory usage: Usage: 7.1%; Reserved: 125930.7 MiB, Platform: 8896.0 MiB (Base: 7849.0, k8s-system: 1046.9), k8s-addon: 7457.1
2019-11-04T18:42:23.650 controller-1 collectd[12249]: info 4K memory usage: Anon: 13.1%, Anon: 16434.3 MiB, cgroup-rss: 16357.0 MiB, Avail: 109496.4 MiB, Total: 125930.7 MiB
2019-11-04T18:42:23.650 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 15.31%, Anon: 9704.5 MiB, Avail: 53690.8 MiB, Total: 63395.2 MiB
2019-11-04T18:42:23.650 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 10.62%, Anon: 6729.8 MiB, Avail: 56656.3 MiB, Total: 63386.1 MiB
2019-11-04T18:42:33.644 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 11.4% (avg per cpu); cpus: 36, Platform: 9.5% (Base: 8.6, k8s-system: 0.9), k8s-addon: 1.7
2019-11-04T18:42:33.650 controller-1 collectd[12249]: info platform memory usage: Usage: 7.1%; Reserved: 125929.9 MiB, Platform: 8887.1 MiB (Base: 7840.1, k8s-system: 1047.0), k8s-addon: 7456.8
2019-11-04T18:42:33.650 controller-1 collectd[12249]: info 4K memory usage: Anon: 13.0%, Anon: 16420.5 MiB, cgroup-rss: 16347.2 MiB, Avail: 109509.4 MiB, Total: 125929.9 MiB
2019-11-04T18:42:33.650 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 15.20%, Anon: 9635.9 MiB, Avail: 53761.8 MiB, Total: 63397.7 MiB
2019-11-04T18:42:33.650 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 10.70%, Anon: 6784.6 MiB, Avail: 56599.1 MiB, Total: 63383.6 MiB
2019-11-04T18:42:43.644 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 9.1% (avg per cpu); cpus: 36, Platform: 8.3% (Base: 7.3, k8s-system: 1.0), k8s-addon: 0.8
2019-11-04T18:42:43.650 controller-1 collectd[12249]: info platform memory usage: Usage: 7.1%; Reserved: 125928.9 MiB, Platform: 8984.2 MiB (Base: 7937.2, k8s-system: 1047.0), k8s-addon: 7457.3
2019-11-04T18:42:43.650 controller-1 collectd[12249]: info 4K memory usage: Anon: 13.1%, Anon: 16518.8 MiB, cgroup-rss: 16445.7 MiB, Avail: 109410.0 MiB, Total: 125928.9 MiB
2019-11-04T18:42:43.650 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 15.31%, Anon: 9704.0 MiB, Avail: 53691.8 MiB, Total: 63395.8 MiB
2019-11-04T18:42:43.650 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 10.75%, Anon: 6814.8 MiB, Avail: 56568.9 MiB, Total: 63383.7 MiB
2019-11-04T18:42:53.644 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 11.0% (avg per cpu); cpus: 36, Platform: 9.6% (Base: 8.5, k8s-system: 1.0), k8s-addon: 1.3
2019-11-04T18:42:53.650 controller-1 collectd[12249]: info platform memory usage: Usage: 7.1%; Reserved: 125922.8 MiB, Platform: 8899.3 MiB (Base: 7852.2, k8s-system: 1047.1), k8s-addon: 7458.4
2019-11-04T18:42:53.650 controller-1 collectd[12249]: info 4K memory usage: Anon: 13.1%, Anon: 16434.0 MiB, cgroup-rss: 16361.6 MiB, Avail: 109488.8 MiB, Total: 125922.8 MiB
2019-11-04T18:42:53.650 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 15.21%, Anon: 9642.1 MiB, Avail: 53751.9 MiB, Total: 63394.0 MiB
2019-11-04T18:42:53.650 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 10.72%, Anon: 6791.9 MiB, Avail: 56586.9 MiB, Total: 63378.8 MiB
2019-11-04T18:42:56.000 controller-1 rpc.mountd[12389]: notice authenticated mount request from fd00:204::3:700 for /etc/platform (/etc/platform)
2019-11-04T18:42:56.000 controller-1 rpc.mountd[12389]: notice authenticated unmount request from fd00:204::3:951 for /etc/platform (/etc/platform)
2019-11-04T18:43:03.639 controller-1 collectd[12249]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-1","degrade":"clear","resource":""}
2019-11-04T18:43:03.644 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 10.5% (avg per cpu); cpus: 36, Platform: 9.2% (Base: 8.1, k8s-system: 1.0), k8s-addon: 1.2
2019-11-04T18:43:03.650 controller-1 collectd[12249]: info platform memory usage: Usage: 7.1%; Reserved: 125917.4 MiB, Platform: 9003.1 MiB (Base: 7956.0, k8s-system: 1047.1), k8s-addon: 7459.0
2019-11-04T18:43:03.650 controller-1 collectd[12249]: info 4K memory usage: Anon: 13.1%, Anon: 16537.5 MiB, cgroup-rss: 16466.0 MiB, Avail: 109379.9 MiB, Total: 125917.4 MiB
2019-11-04T18:43:03.650 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 15.39%, Anon: 9757.4 MiB, Avail: 53631.8 MiB, Total: 63389.2 MiB
2019-11-04T18:43:03.650 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 10.70%, Anon: 6780.1 MiB, Avail: 56598.2 MiB, Total: 63378.2 MiB
2019-11-04T18:43:13.644 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 11.1% (avg per cpu); cpus: 36, Platform: 9.8% (Base: 9.0, k8s-system: 0.8), k8s-addon: 1.2
2019-11-04T18:43:13.650 controller-1 collectd[12249]: info platform memory usage: Usage: 7.1%; Reserved: 125918.4 MiB, Platform: 8942.1 MiB (Base: 7895.0, k8s-system: 1047.1), k8s-addon: 7458.7
2019-11-04T18:43:13.650 controller-1 collectd[12249]: info 4K memory usage: Anon: 13.1%, Anon: 16477.1 MiB, cgroup-rss: 16403.9 MiB, Avail: 109441.3 MiB, Total: 125918.4 MiB
2019-11-04T18:43:13.650 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 15.26%, Anon: 9672.1 MiB, Avail: 53718.0 MiB, Total: 63390.1 MiB
2019-11-04T18:43:13.650 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 10.74%, Anon: 6805.6 MiB, Avail: 56574.4 MiB, Total: 63380.0 MiB
2019-11-04T18:43:23.644 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 10.8% (avg per cpu); cpus: 36, Platform: 9.8% (Base: 8.6, k8s-system: 1.2), k8s-addon: 0.8
2019-11-04T18:43:23.650 controller-1 collectd[12249]: info platform memory usage: Usage: 7.2%; Reserved: 125917.4 MiB, Platform: 9042.5 MiB (Base: 7994.3, k8s-system: 1048.2), k8s-addon: 7459.2
2019-11-04T18:43:23.650 controller-1 collectd[12249]: info 4K memory usage: Anon: 13.2%, Anon: 16579.8 MiB, cgroup-rss: 16505.8 MiB, Avail: 109337.6 MiB, Total: 125917.4 MiB
2019-11-04T18:43:23.650 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 15.37%, Anon: 9745.9 MiB, Avail: 53643.7 MiB, Total: 63389.6 MiB
2019-11-04T18:43:23.650 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 10.78%, Anon: 6833.9 MiB, Avail: 56544.0 MiB, Total: 63377.9 MiB
2019-11-04T18:43:33.643 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 10.3% (avg per cpu); cpus: 36, Platform: 9.0% (Base: 7.6, k8s-system: 1.4), k8s-addon: 1.2
2019-11-04T18:43:33.650 controller-1 collectd[12249]: info platform memory usage: Usage: 7.0%; Reserved: 125920.5 MiB, Platform: 8854.2 MiB (Base: 7805.6, k8s-system: 1048.6), k8s-addon: 7459.1
2019-11-04T18:43:33.650 controller-1 collectd[12249]: info 4K memory usage: Anon: 13.0%, Anon: 16396.7 MiB, cgroup-rss: 16317.5 MiB, Avail: 109523.8 MiB, Total: 125920.5 MiB
2019-11-04T18:43:33.650 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 15.26%, Anon: 9676.7 MiB, Avail: 53715.0 MiB, Total: 63391.6 MiB
2019-11-04T18:43:33.650 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 10.60%, Anon: 6720.0 MiB, Avail: 56660.0 MiB, Total: 63380.0 MiB
2019-11-04T18:43:43.643 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 7.4% (avg per cpu); cpus: 36, Platform: 6.3% (Base: 4.9, k8s-system: 1.4), k8s-addon: 0.9
2019-11-04T18:43:43.650 controller-1 collectd[12249]: info platform memory usage: Usage: 7.1%; Reserved: 125921.9 MiB, Platform: 8945.5 MiB (Base: 7896.9, k8s-system: 1048.7), k8s-addon: 7459.7
2019-11-04T18:43:43.650 controller-1 collectd[12249]: info 4K memory usage: Anon: 13.1%, Anon: 16481.3 MiB, cgroup-rss: 16409.4 MiB, Avail: 109440.6 MiB, Total: 125921.9 MiB
2019-11-04T18:43:43.650 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 15.23%, Anon: 9655.5 MiB, Avail: 53738.5 MiB, Total: 63394.0 MiB
2019-11-04T18:43:43.650 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 10.77%, Anon: 6825.8 MiB, Avail: 56553.3 MiB, Total: 63379.2 MiB
2019-11-04T18:43:53.644 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 7.0% (avg per cpu); cpus: 36, Platform: 5.6% (Base: 4.4, k8s-system: 1.1), k8s-addon: 1.3
2019-11-04T18:43:53.651 controller-1 collectd[12249]: info platform memory usage: Usage: 7.0%; Reserved: 125913.2 MiB, Platform: 8856.4 MiB (Base: 7807.4, k8s-system: 1049.1), k8s-addon: 7465.3
2019-11-04T18:43:53.651 controller-1 collectd[12249]: info 4K memory usage: Anon: 13.0%, Anon: 16397.5 MiB, cgroup-rss: 16325.9 MiB, Avail: 109515.6 MiB, Total: 125913.2 MiB
2019-11-04T18:43:53.651 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 15.22%, Anon: 9641.3 MiB, Avail: 53716.8 MiB, Total: 63358.1 MiB
2019-11-04T18:43:53.651 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 10.66%, Anon: 6756.2 MiB, Avail: 56650.6 MiB, Total: 63406.8 MiB
2019-11-04T18:44:03.643 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 9.9% (avg per cpu); cpus: 36, Platform: 8.4% (Base: 7.2, k8s-system: 1.2), k8s-addon: 1.4
2019-11-04T18:44:03.649 controller-1 collectd[12249]: info platform memory usage: Usage: 7.1%; Reserved: 125919.1 MiB, Platform: 8991.2 MiB (Base: 7940.7, k8s-system: 1050.5), k8s-addon: 7465.9
2019-11-04T18:44:03.649 controller-1 collectd[12249]: info 4K memory usage: Anon: 13.1%, Anon: 16532.7 MiB, cgroup-rss: 16461.2 MiB, Avail: 109386.5 MiB, Total: 125919.1 MiB
2019-11-04T18:44:03.650 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 15.34%, Anon: 9722.6 MiB, Avail: 53638.9 MiB, Total: 63361.4 MiB
2019-11-04T18:44:03.650 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 10.74%, Anon: 6810.1 MiB, Avail: 56599.0 MiB, Total: 63409.1 MiB
2019-11-04T18:44:08.548 controller-1 registry[188517]: info time="2019-11-04T18:44:08Z" level=warning msg="error authorizing context: authorization token required" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=638a626c-819f-4c99-9a82-41297bd7ad8d http.request.method=GET http.request.remoteaddr="[fd00:204::2]:52607" http.request.uri="/v2/" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=31d0b616-7bd4-4581-b105-da8fef43d2b9 version="v2.6.2+unknown"
2019-11-04T18:44:08.548 controller-1 registry[188517]: info fd00:204::2 - - [04/Nov/2019:18:44:08 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))"
2019-11-04T18:44:08.553 controller-1 registry-token-server[187729]: info time="2019-11-04T18:44:08Z" level=info msg=getToken go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=a6ccc56d-a687-4b53-a57a-61cc27495538 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:51025" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fexternal_storage%2Frbd-provisioner%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=873d418f-57d8-449a-b4d7-32feec4d2853
2019-11-04T18:44:08.567 controller-1 registry[188517]: info time="2019-11-04T18:44:08Z" level=warning msg="error authorizing context: authorization token required" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=b95240a0-06c7-4753-810f-465522bb4093 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:52613" http.request.uri="/v2/" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=31d0b616-7bd4-4581-b105-da8fef43d2b9 version="v2.6.2+unknown"
2019-11-04T18:44:08.567 controller-1 registry[188517]: info fd00:204::2 - - [04/Nov/2019:18:44:08 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))"
2019-11-04T18:44:08.572 controller-1 registry-token-server[187729]: info time="2019-11-04T18:44:08Z" level=info msg=getToken go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=da055deb-2ef7-4b01-be76-994037dd8fd4 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:51031" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fceph-config-helper%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=873d418f-57d8-449a-b4d7-32feec4d2853
2019-11-04T18:44:08.912 controller-1 registry-token-server[187729]: info time="2019-11-04T18:44:08Z" level=info msg="authenticated client" acctSubject=admin go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=a6ccc56d-a687-4b53-a57a-61cc27495538 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:51025" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fexternal_storage%2Frbd-provisioner%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=873d418f-57d8-449a-b4d7-32feec4d2853
2019-11-04T18:44:08.915 controller-1 registry-token-server[187729]: info time="2019-11-04T18:44:08Z" level=info msg="authorized client" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository quay.io/external_storage/rbd-provisioner} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=a6ccc56d-a687-4b53-a57a-61cc27495538 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:51025" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fexternal_storage%2Frbd-provisioner%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=873d418f-57d8-449a-b4d7-32feec4d2853 requestedAccess=[{{repository quay.io/external_storage/rbd-provisioner} pull}]
2019-11-04T18:44:08.915 controller-1 registry-token-server[187729]: info time="2019-11-04T18:44:08Z" level=info msg="get token complete" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository quay.io/external_storage/rbd-provisioner} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=a6ccc56d-a687-4b53-a57a-61cc27495538 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:51025" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fexternal_storage%2Frbd-provisioner%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/json" http.response.duration=361.532003ms http.response.status=200 http.response.written=1349 instance.id=873d418f-57d8-449a-b4d7-32feec4d2853 requestedAccess=[{{repository quay.io/external_storage/rbd-provisioner} pull}]
2019-11-04T18:44:08.923 controller-1 registry[188517]: info time="2019-11-04T18:44:08Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=7bc7a4ff-4fe1-4000-be2f-0d4ec1dba60b http.request.method=GET http.request.remoteaddr="[fd00:204::2]:52629" http.request.uri="/v2/quay.io/external_storage/rbd-provisioner/manifests/v2.1.1-k8s1.11" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.response.duration=3.444468ms http.response.status=200 http.response.written=953 instance.id=31d0b616-7bd4-4581-b105-da8fef43d2b9 version="v2.6.2+unknown"
2019-11-04T18:44:08.923 controller-1 registry[188517]: info fd00:204::2 - - [04/Nov/2019:18:44:08 +0000] "GET /v2/quay.io/external_storage/rbd-provisioner/manifests/v2.1.1-k8s1.11 HTTP/1.1" 200 953 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))"
2019-11-04T18:44:08.975 controller-1 registry-token-server[187729]: info time="2019-11-04T18:44:08Z" level=info msg="authenticated client" acctSubject=admin go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=da055deb-2ef7-4b01-be76-994037dd8fd4 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:51031" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fceph-config-helper%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=873d418f-57d8-449a-b4d7-32feec4d2853
2019-11-04T18:44:08.978 controller-1 registry-token-server[187729]: info time="2019-11-04T18:44:08Z" level=info msg="authorized client" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository docker.io/starlingx/ceph-config-helper} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=da055deb-2ef7-4b01-be76-994037dd8fd4 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:51031" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fceph-config-helper%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=873d418f-57d8-449a-b4d7-32feec4d2853 requestedAccess=[{{repository docker.io/starlingx/ceph-config-helper} pull}]
2019-11-04T18:44:08.978 controller-1 registry-token-server[187729]: info time="2019-11-04T18:44:08Z" level=info msg="get token complete" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository docker.io/starlingx/ceph-config-helper} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=da055deb-2ef7-4b01-be76-994037dd8fd4 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:51031" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fceph-config-helper%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/json" http.response.duration=406.477604ms http.response.status=200 http.response.written=1346 instance.id=873d418f-57d8-449a-b4d7-32feec4d2853 requestedAccess=[{{repository docker.io/starlingx/ceph-config-helper} pull}]
2019-11-04T18:44:08.986 controller-1 registry[188517]: info time="2019-11-04T18:44:08Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=1adf729d-b0da-4d35-8356-dc00b6b83a2b http.request.method=GET http.request.remoteaddr="[fd00:204::2]:52631" http.request.uri="/v2/docker.io/starlingx/ceph-config-helper/manifests/v1.15.0" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.response.duration=3.335298ms http.response.status=200 http.response.written=1571 instance.id=31d0b616-7bd4-4581-b105-da8fef43d2b9 version="v2.6.2+unknown"
2019-11-04T18:44:08.986 controller-1 registry[188517]: info fd00:204::2 - - [04/Nov/2019:18:44:08 +0000] "GET /v2/docker.io/starlingx/ceph-config-helper/manifests/v1.15.0 HTTP/1.1" 200 1571 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))"
2019-11-04T18:44:09.053 controller-1 containerd[12218]: info time="2019-11-04T18:44:09.053221031Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/11c82a13c83bb4b83e17cba6abef1fce2713453f2c77275d3db4bb3eacc61235/shim.sock" debug=false pid=269488
2019-11-04T18:44:10.000 controller-1 dnsmasq[196403]: warning nameserver fd00:207::a refused to do a recursive query
2019-11-04T18:44:13.644 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 12.4% (avg per cpu); cpus: 36, Platform: 10.8% (Base: 9.3, k8s-system: 1.5), k8s-addon: 1.5
2019-11-04T18:44:13.650 controller-1 collectd[12249]: info platform memory usage: Usage: 7.3%; Reserved: 125908.2 MiB, Platform: 9184.4 MiB (Base: 8118.8, k8s-system: 1065.6), k8s-addon: 7461.7
2019-11-04T18:44:13.650 controller-1 collectd[12249]: info 4K memory usage: Anon: 13.3%, Anon: 16722.5 MiB, cgroup-rss: 16649.9 MiB, Avail: 109185.7 MiB, Total: 125908.2 MiB
2019-11-04T18:44:13.650 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 15.42%, Anon: 9770.4 MiB, Avail: 53585.7 MiB, Total: 63356.1 MiB
2019-11-04T18:44:13.650 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 10.96%, Anon: 6952.1 MiB, Avail: 56451.7 MiB, Total: 63403.8 MiB
2019-11-04T18:44:23.644 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 9.9% (avg per cpu); cpus: 36, Platform: 8.6% (Base: 7.3, k8s-system: 1.3), k8s-addon: 1.2
2019-11-04T18:44:23.651 controller-1 collectd[12249]: info platform memory usage: Usage: 7.2%; Reserved: 125904.0 MiB, Platform: 9125.9 MiB (Base: 8255.9, k8s-system: 870.1), k8s-addon: 7466.8
2019-11-04T18:44:23.651 controller-1 collectd[12249]: info 4K memory usage: Anon: 13.3%, Anon: 16802.9 MiB, cgroup-rss: 16712.7 MiB, Avail: 109101.1 MiB, Total: 125904.0 MiB
2019-11-04T18:44:23.651 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 15.49%, Anon: 9815.0 MiB, Avail: 53537.6 MiB, Total: 63352.7 MiB
2019-11-04T18:44:23.651 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 11.01%, Anon: 6982.2 MiB, Avail: 56419.8 MiB, Total: 63402.0 MiB
2019-11-04T18:44:33.000 controller-1 dnsmasq-dhcp[196403]: info DHCPRENEW(vlan108) 00:03:00:01:3c:fd:fe:9f:71:08
2019-11-04T18:44:33.000 controller-1 dnsmasq-dhcp[196403]: info DHCPREPLY(vlan108) fd00:204::cfbc:a4c1:8864:e140 00:03:00:01:3c:fd:fe:9f:71:08 compute-10
2019-11-04T18:44:33.644 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 11.9% (avg per cpu); cpus: 36, Platform: 9.4% (Base: 8.4, k8s-system: 1.1), k8s-addon: 2.2
2019-11-04T18:44:33.650 controller-1 collectd[12249]: info platform memory usage: Usage: 7.2%; Reserved: 125899.1 MiB, Platform: 9037.2 MiB (Base: 8220.6, k8s-system: 816.6), k8s-addon: 7461.0
2019-11-04T18:44:33.651 controller-1 collectd[12249]: info 4K memory usage: Anon: 13.2%, Anon: 16580.4 MiB, cgroup-rss: 16502.3 MiB, Avail: 109318.6 MiB, Total: 125899.1 MiB
2019-11-04T18:44:33.651 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 15.32%, Anon: 9705.8 MiB, Avail: 53645.7 MiB, Total: 63351.6 MiB
2019-11-04T18:44:33.651 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 10.84%, Anon: 6874.6 MiB, Avail: 56528.2 MiB, Total: 63402.8 MiB
2019-11-04T18:44:34.000 controller-1 dnsmasq-script[196403]: debug sysinv 2019-11-04 18:44:34.059 275359 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9f:71:08' with ip 'fd00:204::cfbc:a4c1:8864:e140'
2019-11-04T18:44:42.000 controller-1 dnsmasq-dhcp[196403]: info DHCPRENEW(vlan108) 00:03:00:01:3c:fd:fe:9e:66:b8
2019-11-04T18:44:42.000 controller-1 dnsmasq-dhcp[196403]: info DHCPREPLY(vlan108) fd00:204::78fe:8421:1cd1:51db 00:03:00:01:3c:fd:fe:9e:66:b8 compute-0
2019-11-04T18:44:42.000 controller-1 dnsmasq-script[196403]: debug sysinv 2019-11-04 18:44:42.616 276469 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:66:b8' with ip 'fd00:204::78fe:8421:1cd1:51db'
2019-11-04T18:44:43.643 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 8.1% (avg per cpu); cpus: 36, Platform: 6.1% (Base: 5.1, k8s-system: 1.0), k8s-addon: 1.8
2019-11-04T18:44:43.649 controller-1 collectd[12249]: info platform memory usage: Usage: 7.2%; Reserved: 125904.6 MiB, Platform: 9119.5 MiB (Base: 8302.7, k8s-system: 816.8), k8s-addon: 7462.3
2019-11-04T18:44:43.649 controller-1 collectd[12249]: info 4K memory usage: Anon: 13.2%, Anon: 16657.3 MiB, cgroup-rss: 16585.9 MiB, Avail: 109247.4 MiB, Total: 125904.6 MiB
2019-11-04T18:44:43.650 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 15.40%, Anon: 9757.1 MiB, Avail: 53600.2 MiB, Total: 63357.2 MiB
2019-11-04T18:44:43.650 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 10.88%, Anon: 6900.2 MiB, Avail: 56501.4 MiB, Total: 63401.7 MiB
2019-11-04T18:44:50.194 controller-1 systemd[1]: info Stopping v2 Registry server for Docker...
2019-11-04T18:44:50.000 controller-1 dnsmasq[196403]: info exiting on receipt of SIGTERM
2019-11-04T18:44:50.000 controller-1 snmpd[191423]: info Received TERM or STOP signal... shutting down...
2019-11-04T18:44:50.000 controller-1 snmpd[191423]: info deinit_cgtsAgentPlugin start
2019-11-04T18:44:50.000 controller-1 snmpd[191423]: info deinit_snmpAuditPlugin
2019-11-04T18:44:50.233 controller-1 systemd[1]: info Stopped v2 Registry server for Docker.
2019-11-04T18:44:50.000 controller-1 lldpd[12254]: warning removal request for address of fd00:205::2%13, but no knowledge of it
2019-11-04T18:44:50.236 controller-1 systemd[1]: info Stopping Etcd Server...
2019-11-04T18:44:50.238 controller-1 systemd[278213]: err Failed at step EXEC spawning /bin/bash/rm: Not a directory
2019-11-04T18:44:50.238 controller-1 systemd[1]: notice etcd.service: control process exited, code=exited status=203
2019-11-04T18:44:50.239 controller-1 systemd[1]: info Stopped Etcd Server.
2019-11-04T18:44:50.239 controller-1 systemd[1]: notice Unit etcd.service entered failed state.
2019-11-04T18:44:50.239 controller-1 systemd[1]: warning etcd.service failed.
2019-11-04T18:44:50.000 controller-1 lldpd[12254]: warning removal request for address of 192.168.202.2%11, but no knowledge of it
2019-11-04T18:44:50.697 controller-1 systemd[1]: info Stopping v2 Registry token server for Docker...
2019-11-04T18:44:50.712 controller-1 systemd[1]: info Stopped v2 Registry token server for Docker.
2019-11-04T18:44:50.000 controller-1 lldpd[12254]: warning removal request for address of 2620:10a:a001:a103::234%3, but no knowledge of it
2019-11-04T18:44:52.000 controller-1 ntpd[87544]: info Deleting interface #26 eno1, 2620:10a:a001:a103::234#123, interface stats: received=18, sent=18, dropped=0, active_time=389 secs
2019-11-04T18:44:52.000 controller-1 ntpd[87544]: info 64:ff9b::6c3d:3823 interface 2620:10a:a001:a103::234 -> (none)
2019-11-04T18:44:52.000 controller-1 ntpd[87544]: info 64:ff9b::d073:7e46 interface 2620:10a:a001:a103::234 -> (none)
2019-11-04T18:44:52.000 controller-1 ntpd[87544]: info 64:ff9b::607e:7a27 interface 2620:10a:a001:a103::234 -> (none)
2019-11-04T18:44:52.000 controller-1 ntpd[87544]: info Deleting interface #25 vlan109, fd00:205::2#123, interface stats: received=0, sent=0, dropped=0, active_time=394 secs
2019-11-04T18:44:52.000 controller-1 ntpd[87544]: info Deleting interface #23 pxeboot0, 192.168.202.2#123, interface stats: received=0, sent=0, dropped=0, active_time=404 secs
2019-11-04T18:44:53.643 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 9.3% (avg per cpu); cpus: 36, Platform: 6.8% (Base: 6.0, k8s-system: 0.8), k8s-addon: 2.3
2019-11-04T18:44:53.650 controller-1 collectd[12249]: info platform memory usage: Usage: 4.6%; Reserved: 125945.3 MiB, Platform: 5816.8 MiB (Base: 4997.9, k8s-system: 818.9), k8s-addon: 7463.1
2019-11-04T18:44:53.650 controller-1 collectd[12249]: info 4K memory usage: Anon: 10.6%, Anon: 13363.0 MiB, cgroup-rss: 13284.1 MiB, Avail: 112582.2 MiB, Total: 125945.3 MiB
2019-11-04T18:44:53.650 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 13.02%, Anon: 8253.4 MiB, Avail: 55124.8 MiB, Total: 63378.2 MiB
2019-11-04T18:44:53.650 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 8.06%, Anon: 5109.6 MiB, Avail: 58309.2 MiB, Total: 63418.8 MiB
2019-11-04T18:44:54.000 controller-1 ntpd[87544]: info 0.0.0.0 0628 08 no_sys_peer
2019-11-04T18:44:54.485 controller-1 kubelet[88521]: info E1104 18:44:54.485888 88521 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "controller-1": Get https://[fd00:205::2]:6443/api/v1/nodes/controller-1?resourceVersion=0&timeout=4s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2019-11-04T18:44:58.486 controller-1 kubelet[88521]: info E1104 18:44:58.486710 88521 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "controller-1": Get https://[fd00:205::2]:6443/api/v1/nodes/controller-1?timeout=4s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2019-11-04T18:44:58.486 controller-1 kubelet[88521]: info E1104 18:44:58.486760 88521 controller.go:170] failed to update node lease, error: Put https://[fd00:205::2]:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/controller-1?timeout=4s: read tcp [fd00:205::4]:50412->[fd00:205::2]:6443: use of closed network connection
2019-11-04T18:44:58.511 controller-1 containerd[12218]: info time="2019-11-04T18:44:58.511639146Z" level=info msg="shim reaped" id=11c82a13c83bb4b83e17cba6abef1fce2713453f2c77275d3db4bb3eacc61235
2019-11-04T18:44:58.521 controller-1 dockerd[12258]: info time="2019-11-04T18:44:58.521496557Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2019-11-04T18:44:58.000 controller-1 lldpd[12254]: warning removal request for address of fd00:204::5%12, but no knowledge of it
2019-11-04T18:45:00.000 controller-1 ntpd[87544]: info Deleting interface #27 vlan108, fd00:204::5#123, interface stats: received=3, sent=6, dropped=0, active_time=397 secs
2019-11-04T18:45:00.000 controller-1 ntpd[87544]: info fd00:204::3 interface fd00:204::5 -> (none)
2019-11-04T18:45:02.202 controller-1 containerd[12218]: info time="2019-11-04T18:45:02.202761982Z" level=info msg="shim reaped" id=fdc86871d7895c0a3d407c48d84b73b50f2a31e201e4b658ac0b0b8e489144df
2019-11-04T18:45:02.212 controller-1 dockerd[12258]: info time="2019-11-04T18:45:02.212721727Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2019-11-04T18:45:02.247 controller-1 containerd[12218]: info time="2019-11-04T18:45:02.247689590Z" level=info msg="shim reaped" id=b13fda1349e2bc4b968e0c582198b617dc547cf327bd1412de4729a78b30a330
2019-11-04T18:45:02.257 controller-1 dockerd[12258]: info time="2019-11-04T18:45:02.257616068Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2019-11-04T18:45:02.487 controller-1 kubelet[88521]: info E1104 18:45:02.487033 88521 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "controller-1": Get https://[fd00:205::2]:6443/api/v1/nodes/controller-1?timeout=4s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
2019-11-04T18:45:02.487 controller-1 kubelet[88521]: info E1104 18:45:02.487032 88521 controller.go:170] failed to update node lease, error: Put https://[fd00:205::2]:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/controller-1?timeout=4s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2019-11-04T18:45:02.892 controller-1 containerd[12218]: info time="2019-11-04T18:45:02.892622955Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/62f1a7b299abd95d2dbc00e3aea65f2f932bac44bbc2e7bc00117b9685c32808/shim.sock" debug=false pid=284219
2019-11-04T18:45:02.898 controller-1 containerd[12218]: info time="2019-11-04T18:45:02.898754115Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ee44ef250a59a517a55a204d81cfbf50b6629e22a1046e3fdb75abb0d3492910/shim.sock" debug=false pid=284238
2019-11-04T18:45:03.000 controller-1 lldpd[12254]: warning removal request for address of fd00:204::2%12, but no knowledge of it
2019-11-04T18:45:03.643 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 6.0% (avg per cpu); cpus: 36, Platform: 4.9% (Base: 4.3, k8s-system: 0.6), k8s-addon: 1.1
2019-11-04T18:45:03.649 controller-1 collectd[12249]: info platform memory usage: Usage: 2.8%; Reserved: 126012.4 MiB, Platform: 3524.5 MiB (Base: 2794.2, k8s-system: 730.2), k8s-addon: 7463.9
2019-11-04T18:45:03.649 controller-1 collectd[12249]: info 4K memory usage: Anon: 8.7%, Anon: 11009.7 MiB, cgroup-rss: 10992.3 MiB, Avail: 115002.7 MiB, Total: 126012.4 MiB
2019-11-04T18:45:03.649 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 11.43%, Anon: 7248.0 MiB, Avail: 56169.8 MiB, Total: 63417.8 MiB
2019-11-04T18:45:03.649 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 5.93%, Anon: 3761.8 MiB, Avail: 59621.6 MiB, Total: 63383.3 MiB
2019-11-04T18:45:03.965 controller-1 collectd[12249]: info alarm notifier host=controller-1.filesystem=/opt/etcd controller-1/df-opt-etcd/percent_bytes-used has not been updated for 20.329 seconds. (1)
2019-11-04T18:45:03.965 controller-1 collectd[12249]: info alarm notifier host=controller-1.filesystem=/opt/extension controller-1/df-opt-extension/percent_bytes-used has not been updated for 20.329 seconds. (1)
2019-11-04T18:45:03.966 controller-1 collectd[12249]: info alarm notifier host=controller-1.filesystem=/var-lib-docker-distribution controller-1/df-var-lib-docker-distribution/percent_bytes-used has not been updated for 20.329 seconds. (1)
2019-11-04T18:45:05.000 controller-1 ntpd[87544]: info Deleting interface #24 vlan108, fd00:204::2#123, interface stats: received=1, sent=2, dropped=0, active_time=407 secs
2019-11-04T18:45:05.000 controller-1 ntpd[87544]: info fd00:204::3 interface fd00:204::2 -> (none)
2019-11-04T18:45:06.487 controller-1 kubelet[88521]: info E1104 18:45:06.487356 88521 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "controller-1": Get https://[fd00:205::2]:6443/api/v1/nodes/controller-1?timeout=4s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
2019-11-04T18:45:06.487 controller-1 kubelet[88521]: info E1104 18:45:06.487446 88521 controller.go:170] failed to update node lease, error: Put https://[fd00:205::2]:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/controller-1?timeout=4s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2019-11-04T18:45:09.243 controller-1 kubelet[88521]: info E1104 18:45:09.243473 88521 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/b307158c385f4289ceb0534c9c3f4c6662ce5825b1dc11d1a13cf22fbdba9da1/diff" to get inode usage: stat /var/lib/docker/overlay2/b307158c385f4289ceb0534c9c3f4c6662ce5825b1dc11d1a13cf22fbdba9da1/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/11c82a13c83bb4b83e17cba6abef1fce2713453f2c77275d3db4bb3eacc61235" to get inode usage: stat /var/lib/docker/containers/11c82a13c83bb4b83e17cba6abef1fce2713453f2c77275d3db4bb3eacc61235: no such file or directory
2019-11-04T18:45:10.487 controller-1 kubelet[88521]: info W1104 18:45:10.487851 88521 status_manager.go:529] Failed to get status for pod "kube-controller-manager-controller-1_kube-system(fa004662b97422f9bda923908ff7217d)": Get https://[fd00:205::2]:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-controller-1: read tcp [fd00:205::4]:50797->[fd00:205::2]:6443: use of closed network connection
2019-11-04T18:45:10.488 controller-1 kubelet[88521]: info E1104 18:45:10.487908 88521 event.go:246] Unable to write event: 'Patch https://[fd00:205::2]:6443/api/v1/namespaces/kube-system/events/kube-controller-manager-controller-1.15d408d0eafb8fa5: read tcp [fd00:205::4]:50797->[fd00:205::2]:6443: use of closed network connection' (may retry after sleeping)
2019-11-04T18:45:10.488 controller-1 kubelet[88521]: info E1104 18:45:10.487941 88521 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "controller-1": Get https://[fd00:205::2]:6443/api/v1/nodes/controller-1?timeout=4s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2019-11-04T18:45:10.488 controller-1 kubelet[88521]: info E1104 18:45:10.487962 88521 kubelet_node_status.go:375] Unable to update node status: update node status exceeds retry count
2019-11-04T18:45:10.488 controller-1 kubelet[88521]: info E1104 18:45:10.487956 88521 controller.go:170] failed to update node lease, error: Put https://[fd00:205::2]:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/controller-1?timeout=4s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
2019-11-04T18:45:13.522 controller-1 kubelet[88521]: info W1104 18:45:13.522370 88521 reflector.go:299] object-"monitor"/"mon-filebeat-token-z6rf8": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.522 controller-1 kubelet[88521]: info W1104 18:45:13.522825 88521 reflector.go:299] object-"monitor"/"mon-kube-state-metrics-token-qj6tw": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.523 controller-1 kubelet[88521]: info W1104 18:45:13.523655 88521 reflector.go:299] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: too old resource version: 8136605 (8143357)
2019-11-04T18:45:13.523 controller-1 kubelet[88521]: info W1104 18:45:13.523701 88521 reflector.go:299] object-"kube-system"/"registry-local-secret": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.523 controller-1 kubelet[88521]: info W1104 18:45:13.523765 88521 reflector.go:299] object-"kube-system"/"default-token-jxtxx": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.523 controller-1 kubelet[88521]: info W1104 18:45:13.523804 88521 reflector.go:299] object-"monitor"/"mon-metricbeat-daemonset-modules": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.524 controller-1 kubelet[88521]: info W1104 18:45:13.524041 88521 reflector.go:299] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: watch of *v1.Service ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.524 controller-1 kubelet[88521]: info W1104 18:45:13.524066 88521 reflector.go:299] object-"monitor"/"mon-logstash-patterns": watch of *v1.ConfigMap ended with: too old resource version: 8139130 (8143357)
2019-11-04T18:45:13.524 controller-1 kubelet[88521]: info W1104 18:45:13.524192 88521 reflector.go:299] object-"monitor"/"mon-metricbeat-deployment-config": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.524 controller-1 kubelet[88521]: info W1104 18:45:13.524264 88521 reflector.go:299] object-"monitor"/"mon-filebeat": watch of *v1.ConfigMap ended with: too old resource version: 8139836 (8143357)
2019-11-04T18:45:13.524 controller-1 kubelet[88521]: info W1104 18:45:13.524284 88521 reflector.go:299] object-"monitor"/"default-token-88gsr": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.524 controller-1 kubelet[88521]: info W1104 18:45:13.524785 88521 reflector.go:299] object-"kube-system"/"multus-token-dtj6m": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.524 controller-1 kubelet[88521]: info W1104 18:45:13.524907 88521 reflector.go:299] object-"kube-system"/"calico-config": watch of *v1.ConfigMap ended with: too old resource version: 8138680 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525070 88521 reflector.go:299] object-"kube-system"/"calico-node-token-46p7c": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525071 88521 reflector.go:299] object-"monitor"/"mon-kibana": watch of *v1.ConfigMap ended with: too old resource version: 8140196 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525090 88521 reflector.go:299] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: watch of *v1.Pod ended with: too old resource version: 8141904 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525094 88521 reflector.go:299] object-"kube-system"/"kube-proxy-token-9m2nq": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525112 88521 reflector.go:299] object-"kube-system"/"default-registry-key": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525110 88521 reflector.go:299] object-"kube-system"/"rbd-provisioner-token-587hn": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525121 88521 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.RuntimeClass ended with: too old resource version: 8140260 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525116 88521 reflector.go:299] object-"monitor"/"mon-metricbeat-daemonset-config": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525123 88521 reflector.go:299] object-"monitor"/"mon-metricbeat-token-5vdfc": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525127 88521 reflector.go:299] object-"kube-system"/"coredns-token-x97rb": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525263 88521 reflector.go:299] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: too old resource version: 8137805 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525331 88521 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.CSIDriver ended with: too old resource version: 8140260 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525375 88521 reflector.go:299] object-"monitor"/"mon-filebeat": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525407 88521 reflector.go:299] object-"monitor"/"mon-metricbeat-deployment-modules": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525423 88521 reflector.go:299] object-"kube-system"/"tiller-token-c6p8n": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525462 88521 reflector.go:299] object-"kube-system"/"multus-cni-config": watch of *v1.ConfigMap ended with: too old resource version: 8137883 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525590 88521 reflector.go:299] object-"monitor"/"mon-nginx-ingress-token-dgbmq": watch of *v1.Secret ended with: too old resource version: 8140238 (8143357)
2019-11-04T18:45:13.525 controller-1 kubelet[88521]: info W1104 18:45:13.525663 88521 reflector.go:299] object-"monitor"/"mon-metricbeat": watch of *v1.ConfigMap ended with: too old resource version: 8136312 (8143357)
2019-11-04T18:45:13.643 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.6% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.6
2019-11-04T18:45:13.649 controller-1 collectd[12249]: info platform memory usage: Usage: 2.8%; Reserved: 126013.1 MiB, Platform: 3527.2 MiB (Base: 2782.3, k8s-system: 744.9), k8s-addon: 7458.6
2019-11-04T18:45:13.649 controller-1 collectd[12249]: info 4K memory usage: Anon: 8.7%, Anon: 11004.0 MiB, cgroup-rss: 10989.9 MiB, Avail: 115009.1 MiB, Total: 126013.1 MiB
2019-11-04T18:45:13.649 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 11.42%, Anon: 7240.8 MiB, Avail: 56182.7 MiB, Total: 63423.6 MiB
2019-11-04T18:45:13.649 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 5.94%, Anon: 3763.2 MiB, Avail: 59615.4 MiB, Total: 63378.5 MiB
2019-11-04T18:45:13.965 controller-1 collectd[12249]: info alarm notifier host=controller-1.filesystem=/opt/platform controller-1/df-opt-platform/percent_bytes-used has not been updated for 20.329 seconds. (1)
2019-11-04T18:45:13.965 controller-1 collectd[12249]: info alarm notifier host=controller-1.filesystem=/var/lib/postgresql controller-1/df-var-lib-postgresql/percent_bytes-used has not been updated for 20.329 seconds. (1)
2019-11-04T18:45:13.966 controller-1 collectd[12249]: info alarm notifier host=controller-1.filesystem=/var/lib/rabbitmq controller-1/df-var-lib-rabbitmq/percent_bytes-used has not been updated for 20.329 seconds. (1)
2019-11-04T18:45:23.643 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.5% (Base: 0.9, k8s-system: 0.6), k8s-addon: 0.5
2019-11-04T18:45:23.650 controller-1 collectd[12249]: info platform memory usage: Usage: 2.8%; Reserved: 126010.8 MiB, Platform: 3531.4 MiB (Base: 2782.3, k8s-system: 749.1), k8s-addon: 7459.1
2019-11-04T18:45:23.650 controller-1 collectd[12249]: info 4K memory usage: Anon: 8.7%, Anon: 11009.2 MiB, cgroup-rss: 10994.7 MiB, Avail: 115001.5 MiB, Total: 126010.8 MiB
2019-11-04T18:45:23.650 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 11.42%, Anon: 7241.5 MiB, Avail: 56183.3 MiB, Total: 63424.8 MiB
2019-11-04T18:45:23.650 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 5.95%, Anon: 3767.7 MiB, Avail: 59607.2 MiB, Total: 63375.0 MiB
2019-11-04T18:45:33.643 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.1% (avg per cpu); cpus: 36, Platform: 1.4% (Base: 1.0, k8s-system: 0.4), k8s-addon: 0.6
2019-11-04T18:45:33.649 controller-1 collectd[12249]: info platform memory usage: Usage: 2.5%; Reserved: 126016.1 MiB, Platform: 3209.2 MiB (Base: 2457.9, k8s-system: 751.2), k8s-addon: 7462.9
2019-11-04T18:45:33.649 controller-1 collectd[12249]: info 4K memory usage: Anon: 8.5%, Anon: 10690.7 MiB, cgroup-rss: 10676.2 MiB, Avail: 115325.4 MiB, Total: 126016.1 MiB
2019-11-04T18:45:33.650 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 11.14%, Anon: 7063.7 MiB, Avail: 56365.0 MiB, Total: 63428.7 MiB
2019-11-04T18:45:33.650 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 5.72%, Anon: 3626.9 MiB, Avail: 59749.4 MiB, Total: 63376.4 MiB
2019-11-04T18:45:34.000 controller-1 nslcd[84484]: warning [03e0c6] ldap_search_ext() failed: Can't contact LDAP server: Connection timed out
2019-11-04T18:45:34.000 controller-1 nslcd[84484]: warning [03e0c6] no available LDAP server found, sleeping 1 seconds
2019-11-04T18:45:35.000 controller-1 nslcd[84484]: info [03e0c6] connected to LDAP server ldap://controller
2019-11-04T18:45:40.052 controller-1 systemd[1]: info Reloading.
2019-11-04T18:45:43.386 controller-1 kubelet[88521]: info I1104 18:45:43.386083 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ceph-etc" (UniqueName: "kubernetes.io/configmap/5f76f380-b632-4adb-ae14-2f39938a10bb-ceph-etc") pod "ceph-pools-audit-1572893100-sg9bf" (UID: "5f76f380-b632-4adb-ae14-2f39938a10bb")
2019-11-04T18:45:43.386 controller-1 kubelet[88521]: info I1104 18:45:43.386117 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ceph-pools-bin" (UniqueName: "kubernetes.io/configmap/5f76f380-b632-4adb-ae14-2f39938a10bb-ceph-pools-bin") pod "ceph-pools-audit-1572893100-sg9bf" (UID: "5f76f380-b632-4adb-ae14-2f39938a10bb")
2019-11-04T18:45:43.386 controller-1 kubelet[88521]: info I1104 18:45:43.386161 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ceph-pools-audit-token-bsfbw" (UniqueName: "kubernetes.io/secret/5f76f380-b632-4adb-ae14-2f39938a10bb-ceph-pools-audit-token-bsfbw") pod "ceph-pools-audit-1572893100-sg9bf" (UID: "5f76f380-b632-4adb-ae14-2f39938a10bb")
2019-11-04T18:45:43.386 controller-1 kubelet[88521]: info I1104 18:45:43.386254 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcceph" (UniqueName: "kubernetes.io/empty-dir/5f76f380-b632-4adb-ae14-2f39938a10bb-etcceph") pod "ceph-pools-audit-1572893100-sg9bf" (UID: "5f76f380-b632-4adb-ae14-2f39938a10bb")
2019-11-04T18:45:43.532 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/5f76f380-b632-4adb-ae14-2f39938a10bb/volumes/kubernetes.io~secret/ceph-pools-audit-token-bsfbw.
2019-11-04T18:45:43.601 controller-1 dockerd[12258]: info time="2019-11-04T18:45:43.601151374Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]"
2019-11-04T18:45:43.606 controller-1 containerd[12218]: info time="2019-11-04T18:45:43.606656608Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e/shim.sock" debug=false pid=289891
2019-11-04T18:45:49.561 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.560 [INFO][290394] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"kube-system", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"ceph-pools-audit-1572893100-sg9bf", ContainerID:"15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e"}}
2019-11-04T18:45:49.578 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.578 [INFO][290394] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0 ceph-pools-audit-1572893100- kube-system 5f76f380-b632-4adb-ae14-2f39938a10bb 8144998 0 2019-11-04 18:45:43 +0000 UTC map[job-name:ceph-pools-audit-1572893100 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:ceph-pools-audit app:ceph-pools-audit controller-uid:ddb433ba-edd8-40cb-8946-aac4d44d2c60] map[] [] nil [] } {k8s controller-1 ceph-pools-audit-1572893100-sg9bf eth0 [] [] [kns.kube-system ksa.kube-system.ceph-pools-audit] cali6000aca4406 []}} ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Namespace="kube-system" Pod="ceph-pools-audit-1572893100-sg9bf" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-"
2019-11-04T18:45:49.578 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.578 [INFO][290394] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Namespace="kube-system" Pod="ceph-pools-audit-1572893100-sg9bf" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0"
2019-11-04T18:45:49.581 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.581 [INFO][290394] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:kube-system,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/kube-system,UID:5d016a6c-19e8-4b97-88a9-b6113a3cb736,ResourceVersion:5,Generation:0,CreationTimestamp:2019-10-25 15:09:05 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},}
2019-11-04T18:45:49.582 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.582 [INFO][290394] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ceph-pools-audit-1572893100-sg9bf,GenerateName:ceph-pools-audit-1572893100-,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/ceph-pools-audit-1572893100-sg9bf,UID:5f76f380-b632-4adb-ae14-2f39938a10bb,ResourceVersion:8144998,Generation:0,CreationTimestamp:2019-11-04 18:45:43 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: ceph-pools-audit,controller-uid: ddb433ba-edd8-40cb-8946-aac4d44d2c60,job-name: ceph-pools-audit-1572893100,},Annotations:map[string]string{},OwnerReferences:[{batch/v1 Job ceph-pools-audit-1572893100 ddb433ba-edd8-40cb-8946-aac4d44d2c60 0xc0007f5b6b 0xc0007f5b6c}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{ceph-pools-bin {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:ceph-pools-bin,},Items:[],DefaultMode:*365,Optional:nil,} nil nil nil nil nil nil nil nil}} {etcceph {nil &EmptyDirVolumeSource{Medium:,SizeLimit:<nil>,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {ceph-etc {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:ceph-etc,},Items:[],DefaultMode:*292,Optional:nil,} nil nil nil nil nil nil nil nil}} {ceph-pools-audit-token-bsfbw {nil nil nil nil nil &SecretVolumeSource{SecretName:ceph-pools-audit-token-bsfbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{ceph-pools-audit-ceph-store registry.local:9001/docker.io/starlingx/ceph-config-helper:v1.15.0 [/tmp/ceph-pools-audit.sh] [] [] [] [{RBD_POOL_REPLICATION 2 nil} {RBD_POOL_MIN_REPLICATION 1 nil} {RBD_POOL_CRUSH_RULE_NAME storage_tier_ruleset nil}] {map[] map[]} [{ceph-pools-bin true /tmp/ceph-pools-audit.sh ceph-pools-audit.sh } {etcceph false /etc/ceph } {ceph-etc true /etc/ceph/ceph.conf ceph.conf } {ceph-pools-audit-token-bsfbw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:OnFailure,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: ,},ServiceAccountName:ceph-pools-audit,DeprecatedServiceAccount:ceph-pools-audit,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[{default-registry-key}],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0007f5d90} {node.kubernetes.io/unreachable Exists NoExecute 0xc0007f5db0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:45:43 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:45:43 +0000 UTC ContainersNotReady containers with unready status: [ceph-pools-audit-ceph-store]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:45:43 +0000 UTC ContainersNotReady containers with unready status: [ceph-pools-audit-ceph-store]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:45:43 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 18:45:43 +0000 UTC,ContainerStatuses:[{ceph-pools-audit-ceph-store {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 registry.local:9001/docker.io/starlingx/ceph-config-helper:v1.15.0 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
2019-11-04T18:45:49.602 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.601 [INFO][290422] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" HandleID="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Workload="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0"
2019-11-04T18:45:49.610 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.610 [INFO][290422] ipam_plugin.go 220: Calico CNI IPAM handle=chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" HandleID="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Workload="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0"
2019-11-04T18:45:49.610 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.610 [INFO][290422] ipam_plugin.go 230: Auto assigning IP ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" HandleID="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Workload="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc00000fb80), Attrs:map[string]string{"node":"controller-1", "pod":"ceph-pools-audit-1572893100-sg9bf", "namespace":"kube-system"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0}
2019-11-04T18:45:49.610 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.610 [INFO][290422] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1'
2019-11-04T18:45:49.614 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.614 [INFO][290422] ipam.go 309: Looking up existing affinities for host handle="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" host="controller-1"
2019-11-04T18:45:49.618 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.618 [INFO][290422] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" host="controller-1"
2019-11-04T18:45:49.620 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.620 [INFO][290422] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1"
2019-11-04T18:45:49.622 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.622 [INFO][290422] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1"
2019-11-04T18:45:49.622 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.622 [INFO][290422] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" host="controller-1"
2019-11-04T18:45:49.623 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.623 [INFO][290422] ipam.go 1244: Creating new handle: chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e
2019-11-04T18:45:49.626 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.626 [INFO][290422] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" host="controller-1"
2019-11-04T18:45:49.628 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.628 [INFO][290422] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e337/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" host="controller-1"
2019-11-04T18:45:49.628 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.628 [INFO][290422] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e337/122] handle="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" host="controller-1"
2019-11-04T18:45:49.629 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.629 [INFO][290422] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e337/122] handle="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" host="controller-1"
2019-11-04T18:45:49.629 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.629 [INFO][290422] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e337/122] ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" HandleID="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Workload="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0"
2019-11-04T18:45:49.629 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.629 [INFO][290422] ipam_plugin.go 258: IPAM Result ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" HandleID="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Workload="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc000144180)}
2019-11-04T18:45:49.631 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.631 [INFO][290394] k8s.go 361: Populated endpoint ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Namespace="kube-system" Pod="ceph-pools-audit-1572893100-sg9bf" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0", GenerateName:"ceph-pools-audit-1572893100-", Namespace:"kube-system", SelfLink:"", UID:"5f76f380-b632-4adb-ae14-2f39938a10bb", ResourceVersion:"8144998", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489943, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ceph-pools-audit", "controller-uid":"ddb433ba-edd8-40cb-8946-aac4d44d2c60", "job-name":"ceph-pools-audit-1572893100", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ceph-pools-audit"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"ceph-pools-audit-1572893100-sg9bf", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e337/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"cali6000aca4406", MAC:"", Ports:[]v3.EndpointPort(nil)}}
2019-11-04T18:45:49.631 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.631 [INFO][290394] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e337/128] ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Namespace="kube-system" Pod="ceph-pools-audit-1572893100-sg9bf"
WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" 2019-11-04T18:45:49.631 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.631 [INFO][290394] network_linux.go 76: Setting the host side veth name to cali6000aca4406 ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Namespace="kube-system" Pod="ceph-pools-audit-1572893100-sg9bf" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" 2019-11-04T18:45:49.634 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.634 [INFO][290394] network_linux.go 411: Disabling IPv6 forwarding ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Namespace="kube-system" Pod="ceph-pools-audit-1572893100-sg9bf" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" 2019-11-04T18:45:49.670 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.670 [INFO][290394] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Namespace="kube-system" Pod="ceph-pools-audit-1572893100-sg9bf" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0", GenerateName:"ceph-pools-audit-1572893100-", Namespace:"kube-system", SelfLink:"", UID:"5f76f380-b632-4adb-ae14-2f39938a10bb", ResourceVersion:"8144998", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489943, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"job-name":"ceph-pools-audit-1572893100", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ceph-pools-audit", "app":"ceph-pools-audit", "controller-uid":"ddb433ba-edd8-40cb-8946-aac4d44d2c60"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e", Pod:"ceph-pools-audit-1572893100-sg9bf", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e337/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"cali6000aca4406", MAC:"6e:53:1c:76:c1:94", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T18:45:49.673 controller-1 kubelet[88521]: info 2019-11-04 18:45:49.673 [INFO][290394] k8s.go 420: Wrote updated endpoint to datastore ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Namespace="kube-system" Pod="ceph-pools-audit-1572893100-sg9bf" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" 2019-11-04T18:45:49.731 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/5f76f380-b632-4adb-ae14-2f39938a10bb/volume-subpaths/ceph-pools-bin/ceph-pools-audit-ceph-store/0. 2019-11-04T18:45:49.802 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/5f76f380-b632-4adb-ae14-2f39938a10bb/volume-subpaths/ceph-pools-bin/ceph-pools-audit-ceph-store/0. 
2019-11-04T18:45:49.843 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/5f76f380-b632-4adb-ae14-2f39938a10bb/volume-subpaths/ceph-etc/ceph-pools-audit-ceph-store/2. 2019-11-04T18:45:49.866 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/5f76f380-b632-4adb-ae14-2f39938a10bb/volume-subpaths/ceph-etc/ceph-pools-audit-ceph-store/2. 2019-11-04T18:45:49.909 controller-1 containerd[12218]: info time="2019-11-04T18:45:49.909753350Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d85c7ff18681b09059471dc2826e5843c506319a0c40c3af2a806ea71123af37/shim.sock" debug=false pid=290487 2019-11-04T18:45:52.000 controller-1 ntpd[87544]: info Listen normally on 33 cali6000aca4406 fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T18:45:52.000 controller-1 ntpd[87544]: debug new interface(s) found: waking up resolver 2019-11-04T18:45:53.965 controller-1 collectd[12249]: info alarm notifier host=controller-1 controller-1/cpu/percent-used has not been updated for 20.322 seconds. (1) 2019-11-04T18:45:53.965 controller-1 collectd[12249]: info alarm notifier host=controller-1.numa=node0 controller-1/memory-node0/percent-used has not been updated for 20.316 seconds. (1) 2019-11-04T18:45:53.966 controller-1 collectd[12249]: info alarm notifier host=controller-1.numa=node1 controller-1/memory-node1/percent-used has not been updated for 20.316 seconds. (1) 2019-11-04T18:45:53.966 controller-1 collectd[12249]: info alarm notifier host=controller-1 controller-1/memory-platform/percent-used has not been updated for 20.317 seconds. (1) 2019-11-04T18:45:53.966 controller-1 collectd[12249]: info alarm notifier host=controller-1.numa=total controller-1/memory-total/percent-used has not been updated for 20.317 seconds. 
(1) 2019-11-04T18:46:01.189 controller-1 containerd[12218]: info time="2019-11-04T18:46:01.189618723Z" level=info msg="shim reaped" id=d85c7ff18681b09059471dc2826e5843c506319a0c40c3af2a806ea71123af37 2019-11-04T18:46:01.199 controller-1 dockerd[12258]: info time="2019-11-04T18:46:01.199674421Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:01.834 controller-1 kubelet[88521]: info I1104 18:46:01.834113 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "etcceph" (UniqueName: "kubernetes.io/empty-dir/5f76f380-b632-4adb-ae14-2f39938a10bb-etcceph") pod "5f76f380-b632-4adb-ae14-2f39938a10bb" (UID: "5f76f380-b632-4adb-ae14-2f39938a10bb") 2019-11-04T18:46:01.834 controller-1 kubelet[88521]: info I1104 18:46:01.834172 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ceph-pools-bin" (UniqueName: "kubernetes.io/configmap/5f76f380-b632-4adb-ae14-2f39938a10bb-ceph-pools-bin") pod "5f76f380-b632-4adb-ae14-2f39938a10bb" (UID: "5f76f380-b632-4adb-ae14-2f39938a10bb") 2019-11-04T18:46:01.834 controller-1 kubelet[88521]: info I1104 18:46:01.834206 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ceph-etc" (UniqueName: "kubernetes.io/configmap/5f76f380-b632-4adb-ae14-2f39938a10bb-ceph-etc") pod "5f76f380-b632-4adb-ae14-2f39938a10bb" (UID: "5f76f380-b632-4adb-ae14-2f39938a10bb") 2019-11-04T18:46:01.834 controller-1 kubelet[88521]: info I1104 18:46:01.834250 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ceph-pools-audit-token-bsfbw" (UniqueName: "kubernetes.io/secret/5f76f380-b632-4adb-ae14-2f39938a10bb-ceph-pools-audit-token-bsfbw") pod "5f76f380-b632-4adb-ae14-2f39938a10bb" (UID: "5f76f380-b632-4adb-ae14-2f39938a10bb") 2019-11-04T18:46:01.834 controller-1 kubelet[88521]: info W1104 18:46:01.834251 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/5f76f380-b632-4adb-ae14-2f39938a10bb/volumes/kubernetes.io~empty-dir/etcceph: ClearQuota called, but quotas disabled 2019-11-04T18:46:01.834 controller-1 kubelet[88521]: info I1104 18:46:01.834383 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f76f380-b632-4adb-ae14-2f39938a10bb-etcceph" (OuterVolumeSpecName: "etcceph") pod "5f76f380-b632-4adb-ae14-2f39938a10bb" (UID: "5f76f380-b632-4adb-ae14-2f39938a10bb"). InnerVolumeSpecName "etcceph". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" 2019-11-04T18:46:01.841 controller-1 kubelet[88521]: info W1104 18:46:01.841878 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/5f76f380-b632-4adb-ae14-2f39938a10bb/volumes/kubernetes.io~configmap/ceph-etc: ClearQuota called, but quotas disabled 2019-11-04T18:46:01.842 controller-1 kubelet[88521]: info I1104 18:46:01.842040 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f76f380-b632-4adb-ae14-2f39938a10bb-ceph-etc" (OuterVolumeSpecName: "ceph-etc") pod "5f76f380-b632-4adb-ae14-2f39938a10bb" (UID: "5f76f380-b632-4adb-ae14-2f39938a10bb"). InnerVolumeSpecName "ceph-etc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T18:46:01.846 controller-1 kubelet[88521]: info I1104 18:46:01.845948 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f76f380-b632-4adb-ae14-2f39938a10bb-ceph-pools-audit-token-bsfbw" (OuterVolumeSpecName: "ceph-pools-audit-token-bsfbw") pod "5f76f380-b632-4adb-ae14-2f39938a10bb" (UID: "5f76f380-b632-4adb-ae14-2f39938a10bb"). InnerVolumeSpecName "ceph-pools-audit-token-bsfbw". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:46:01.846 controller-1 kubelet[88521]: info W1104 18:46:01.846062 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/5f76f380-b632-4adb-ae14-2f39938a10bb/volumes/kubernetes.io~configmap/ceph-pools-bin: ClearQuota called, but quotas disabled 2019-11-04T18:46:01.846 controller-1 kubelet[88521]: info I1104 18:46:01.846213 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f76f380-b632-4adb-ae14-2f39938a10bb-ceph-pools-bin" (OuterVolumeSpecName: "ceph-pools-bin") pod "5f76f380-b632-4adb-ae14-2f39938a10bb" (UID: "5f76f380-b632-4adb-ae14-2f39938a10bb"). InnerVolumeSpecName "ceph-pools-bin". PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T18:46:01.858 controller-1 kubelet[88521]: info 2019-11-04 18:46:01.858 [INFO][292411] plugin.go 442: Extracted identifiers ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" 2019-11-04T18:46:01.866 controller-1 kubelet[88521]: info 2019-11-04 18:46:01.866 [WARNING][292411] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:46:01.866 controller-1 kubelet[88521]: info 2019-11-04 18:46:01.866 [INFO][292411] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0", GenerateName:"ceph-pools-audit-1572893100-", Namespace:"kube-system", SelfLink:"", UID:"5f76f380-b632-4adb-ae14-2f39938a10bb", ResourceVersion:"8145224", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489943, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ceph-pools-audit", "app":"ceph-pools-audit", "controller-uid":"ddb433ba-edd8-40cb-8946-aac4d44d2c60", "job-name":"ceph-pools-audit-1572893100", "projectcalico.org/namespace":"kube-system"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"ceph-pools-audit-1572893100-sg9bf", Endpoint:"eth0", IPNetworks:[]string(nil), IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"cali6000aca4406", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T18:46:01.866 controller-1 kubelet[88521]: info 2019-11-04 18:46:01.866 [INFO][292411] k8s.go 477: Releasing IP address(es) ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" 2019-11-04T18:46:01.866 controller-1 kubelet[88521]: info 2019-11-04 18:46:01.866 [INFO][292411] utils.go 171: Calico CNI releasing IP address ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" 2019-11-04T18:46:01.885 controller-1 kubelet[88521]: info 2019-11-04 18:46:01.884 [INFO][292441] ipam_plugin.go 299: Releasing address using handleID ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" HandleID="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Workload="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" 2019-11-04T18:46:01.885 controller-1 kubelet[88521]: info 2019-11-04 18:46:01.885 [INFO][292441] ipam.go 1145: Releasing all IPs with handle 'chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e' 2019-11-04T18:46:01.909 controller-1 kubelet[88521]: info 2019-11-04 18:46:01.909 [INFO][292441] ipam_plugin.go 308: Released address using handleID ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" HandleID="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Workload="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" 2019-11-04T18:46:01.909 controller-1 kubelet[88521]: info 2019-11-04 18:46:01.909 [INFO][292441] ipam_plugin.go 317: Releasing address using workloadID ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" HandleID="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Workload="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" 2019-11-04T18:46:01.909 controller-1 kubelet[88521]: info 2019-11-04 18:46:01.909 [INFO][292441] ipam.go 1145: Releasing all IPs with handle 'kube-system.ceph-pools-audit-1572893100-sg9bf' 2019-11-04T18:46:01.912 controller-1 kubelet[88521]: info 2019-11-04 18:46:01.912 [INFO][292411] 
k8s.go 481: Cleaning up netns ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" 2019-11-04T18:46:01.912 controller-1 kubelet[88521]: info 2019-11-04 18:46:01.912 [INFO][292411] network_linux.go 450: Calico CNI deleting device in netns /proc/289911/ns/net ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" 2019-11-04T18:46:01.934 controller-1 kubelet[88521]: info I1104 18:46:01.934585 88521 reconciler.go:301] Volume detached for volume "etcceph" (UniqueName: "kubernetes.io/empty-dir/5f76f380-b632-4adb-ae14-2f39938a10bb-etcceph") on node "controller-1" DevicePath "" 2019-11-04T18:46:01.934 controller-1 kubelet[88521]: info I1104 18:46:01.934604 88521 reconciler.go:301] Volume detached for volume "ceph-pools-bin" (UniqueName: "kubernetes.io/configmap/5f76f380-b632-4adb-ae14-2f39938a10bb-ceph-pools-bin") on node "controller-1" DevicePath "" 2019-11-04T18:46:01.934 controller-1 kubelet[88521]: info I1104 18:46:01.934613 88521 reconciler.go:301] Volume detached for volume "ceph-etc" (UniqueName: "kubernetes.io/configmap/5f76f380-b632-4adb-ae14-2f39938a10bb-ceph-etc") on node "controller-1" DevicePath "" 2019-11-04T18:46:01.934 controller-1 kubelet[88521]: info I1104 18:46:01.934623 88521 reconciler.go:301] Volume detached for volume "ceph-pools-audit-token-bsfbw" (UniqueName: "kubernetes.io/secret/5f76f380-b632-4adb-ae14-2f39938a10bb-ceph-pools-audit-token-bsfbw") on node "controller-1" DevicePath "" 2019-11-04T18:46:01.000 controller-1 lldpd[12254]: warning removal request for address of fe80::ecee:eeff:feee:eeee%30, but no knowledge of it 2019-11-04T18:46:01.989 controller-1 kubelet[88521]: info 2019-11-04 18:46:01.989 [INFO][292411] network_linux.go 467: Calico CNI deleted device in netns /proc/289911/ns/net ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" 2019-11-04T18:46:01.989 controller-1 kubelet[88521]: info 2019-11-04 18:46:01.989 [INFO][292411] k8s.go 493: Teardown processing complete. ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" 2019-11-04T18:46:02.014 controller-1 containerd[12218]: info time="2019-11-04T18:46:02.014913054Z" level=info msg="shim reaped" id=15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e 2019-11-04T18:46:02.025 controller-1 dockerd[12258]: info time="2019-11-04T18:46:02.024923886Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:02.058 controller-1 kubelet[88521]: info W1104 18:46:02.058417 88521 cni.go:328] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" 2019-11-04T18:46:02.132 controller-1 kubelet[88521]: info 2019-11-04 18:46:02.131 [INFO][292589] plugin.go 442: Extracted identifiers ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" 2019-11-04T18:46:02.138 controller-1 kubelet[88521]: info 2019-11-04 18:46:02.138 [WARNING][292589] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:46:02.138 controller-1 kubelet[88521]: info 2019-11-04 18:46:02.138 [INFO][292589] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0", GenerateName:"ceph-pools-audit-1572893100-", Namespace:"kube-system", SelfLink:"", UID:"5f76f380-b632-4adb-ae14-2f39938a10bb", ResourceVersion:"8145224", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489943, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ceph-pools-audit", "controller-uid":"ddb433ba-edd8-40cb-8946-aac4d44d2c60", "job-name":"ceph-pools-audit-1572893100", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ceph-pools-audit"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"ceph-pools-audit-1572893100-sg9bf", Endpoint:"eth0", IPNetworks:[]string(nil), IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"cali6000aca4406", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T18:46:02.138 controller-1 kubelet[88521]: info 2019-11-04 18:46:02.138 [INFO][292589] k8s.go 477: Releasing IP address(es) ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" 2019-11-04T18:46:02.138 controller-1 kubelet[88521]: info 2019-11-04 18:46:02.138 [INFO][292589] utils.go 171: Calico CNI releasing IP address ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" 2019-11-04T18:46:02.155 controller-1 kubelet[88521]: info 2019-11-04 18:46:02.155 [INFO][292610] ipam_plugin.go 299: Releasing address using handleID ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" HandleID="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Workload="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" 2019-11-04T18:46:02.155 controller-1 kubelet[88521]: info 2019-11-04 18:46:02.155 [INFO][292610] ipam.go 1145: Releasing all IPs with handle 'chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e' 2019-11-04T18:46:02.161 controller-1 kubelet[88521]: info 2019-11-04 18:46:02.161 [WARNING][292610] ipam_plugin.go 306: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" HandleID="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Workload="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" 2019-11-04T18:46:02.161 controller-1 kubelet[88521]: info 2019-11-04 18:46:02.161 [INFO][292610] ipam_plugin.go 317: Releasing address using workloadID ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" HandleID="chain.15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" Workload="controller--1-k8s-ceph--pools--audit--1572893100--sg9bf-eth0" 2019-11-04T18:46:02.161 controller-1 kubelet[88521]: info 2019-11-04 18:46:02.161 [INFO][292610] ipam.go 1145: Releasing all IPs with handle 'kube-system.ceph-pools-audit-1572893100-sg9bf' 2019-11-04T18:46:02.163 controller-1 kubelet[88521]: info 2019-11-04 18:46:02.163 [INFO][292589] k8s.go 481: Cleaning up netns ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" 2019-11-04T18:46:02.163 controller-1 kubelet[88521]: info 2019-11-04 18:46:02.163 [INFO][292589] k8s.go 493: Teardown processing complete. ContainerID="15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" 2019-11-04T18:46:02.779 controller-1 kubelet[88521]: info W1104 18:46:02.779007 88521 pod_container_deletor.go:75] Container "15fb2cc12be423e9b6a9b760d3e188c227c49a86c5cfd74594655b8167d6595e" not found in pod's containers 2019-11-04T18:46:03.000 controller-1 ntpd[87544]: info Deleting interface #33 cali6000aca4406, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=11 secs 2019-11-04T18:46:17.836 controller-1 containerd[12218]: info time="2019-11-04T18:46:17.835978458Z" level=info msg="shim reaped" id=f0ebce920e55df17c21e3bdd05267924b8ceb86494bd97cb3630a304395d8745 2019-11-04T18:46:17.836 controller-1 containerd[12218]: info time="2019-11-04T18:46:17.836499311Z" level=info msg="shim reaped" id=db4a1da0333d1c8f70202d5bb0dfd52f3a364c9fef9c33fd4dcb4c01f3383d10 2019-11-04T18:46:17.837 controller-1 containerd[12218]: info time="2019-11-04T18:46:17.837288391Z" level=info msg="shim reaped" id=81ff34d50337c4f5f8294cc146c19dedc7d9a938207ba4bfcc6311c2790daca7 2019-11-04T18:46:17.845 controller-1 dockerd[12258]: info time="2019-11-04T18:46:17.845820755Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:17.846 controller-1 dockerd[12258]: info time="2019-11-04T18:46:17.846285757Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:17.847 controller-1 dockerd[12258]: info time="2019-11-04T18:46:17.847152384Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:17.868 controller-1 containerd[12218]: info time="2019-11-04T18:46:17.868199473Z" level=info msg="shim reaped" id=e2c63a95764778630658b1063ca719b0f64549c5d30bc070205b9c89d1362098 2019-11-04T18:46:17.868 controller-1 containerd[12218]: info time="2019-11-04T18:46:17.868688713Z" level=info msg="shim reaped" id=e61f0b00323833f6499d765ae7745ee3fad09eb5cdb281d5b1ac131f46c7afed 2019-11-04T18:46:17.869 controller-1 containerd[12218]: info time="2019-11-04T18:46:17.869090805Z" level=info msg="shim reaped" id=ba0e89ffcfc76eac98b8e26f171e3417b6c3241af583fcd75a45f7b28af81ac1 2019-11-04T18:46:17.878 controller-1 dockerd[12258]: info 
time="2019-11-04T18:46:17.878059557Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:17.878 controller-1 dockerd[12258]: info time="2019-11-04T18:46:17.878604727Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:17.878 controller-1 dockerd[12258]: info time="2019-11-04T18:46:17.878796819Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:18.017 controller-1 containerd[12218]: info time="2019-11-04T18:46:18.017846850Z" level=info msg="shim reaped" id=319a6d8353e37831e21dcc8d2f44c0d781e5d62a91f628097341dec2fc682bda 2019-11-04T18:46:18.027 controller-1 dockerd[12258]: info time="2019-11-04T18:46:18.027743887Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:18.054 controller-1 containerd[12218]: info time="2019-11-04T18:46:18.054411219Z" level=info msg="shim reaped" id=f061a3692b357e4c28c3ff865355c556f9510700380696214fc8802809359f39 2019-11-04T18:46:18.064 controller-1 dockerd[12258]: info time="2019-11-04T18:46:18.064207876Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:18.127 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.126 [INFO][295782] plugin.go 442: Extracted identifiers ContainerID="7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--qd8zt-eth0" 2019-11-04T18:46:18.132 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.132 [INFO][295801] plugin.go 442: Extracted identifiers ContainerID="4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-mon--kube--state--metrics--59947d74fb--n9xjb-eth0" 2019-11-04T18:46:18.133 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.133 [WARNING][295782] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:46:18.133 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.133 [INFO][295782] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--qd8zt-eth0", GenerateName:"mon-nginx-ingress-default-backend-5997cfc99f-", Namespace:"monitor", SelfLink:"", UID:"fd9861e3-2af5-4433-a8ec-2f3509f19b0b", ResourceVersion:"8145526", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489540, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/serviceaccount":"default", "app":"nginx-ingress", "component":"default-backend", "pod-template-hash":"5997cfc99f", "release":"mon-nginx-ingress", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-nginx-ingress-default-backend-5997cfc99f-qd8zt", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e31b/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"calia195ccff09a", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90}}}} 2019-11-04T18:46:18.133 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.133 [INFO][295782] k8s.go 477: Releasing IP address(es) ContainerID="7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f" 2019-11-04T18:46:18.133 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.133 [INFO][295782] utils.go 171: Calico CNI releasing IP address ContainerID="7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f" 2019-11-04T18:46:18.139 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.139 [WARNING][295801] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:46:18.139 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.139 [INFO][295801] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--kube--state--metrics--59947d74fb--n9xjb-eth0", GenerateName:"mon-kube-state-metrics-59947d74fb-", Namespace:"monitor", SelfLink:"", UID:"e048eafc-2ed6-4a66-8ad0-57799d976d11", ResourceVersion:"8145529", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489540, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/instance":"mon-kube-state-metrics", "app.kubernetes.io/name":"kube-state-metrics", "pod-template-hash":"59947d74fb", "release":"mon-kube-state-metrics", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"mon-kube-state-metrics"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-kube-state-metrics-59947d74fb-n9xjb", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e314/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-kube-state-metrics"}, InterfaceName:"calia758801dfb9", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T18:46:18.139 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.139 [INFO][295801] k8s.go 477: Releasing IP address(es) ContainerID="4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7" 2019-11-04T18:46:18.139 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.139 [INFO][295801] utils.go 171: Calico CNI releasing IP address ContainerID="4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7" 2019-11-04T18:46:18.141 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.141 [INFO][295830] plugin.go 442: Extracted identifiers ContainerID="84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-mon--kibana--6cf57cfd5b--h69qg-eth0" 2019-11-04T18:46:18.148 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.148 [INFO][295861] plugin.go 442: Extracted identifiers ContainerID="f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--frbhx-eth0" 2019-11-04T18:46:18.148 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.148 [WARNING][295830] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:46:18.148 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.148 [INFO][295830] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--kibana--6cf57cfd5b--h69qg-eth0", GenerateName:"mon-kibana-6cf57cfd5b-", Namespace:"monitor", SelfLink:"", UID:"8958d9a6-f190-4920-87e4-03c61bfa595b", ResourceVersion:"8145527", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489540, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"kibana", "pod-template-hash":"6cf57cfd5b", "release":"mon-kibana", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-kibana-6cf57cfd5b-h69qg", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e31a/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"calib02fdf4cf91", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"kibana", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x15e1}}}} 2019-11-04T18:46:18.148 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.148 [INFO][295830] k8s.go 477: Releasing IP address(es) ContainerID="84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58" 2019-11-04T18:46:18.148 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.148 [INFO][295830] utils.go 171: Calico CNI releasing IP address ContainerID="84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58" 2019-11-04T18:46:18.152 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.152 [INFO][295908] ipam_plugin.go 299: Releasing address using handleID ContainerID="7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f" HandleID="chain.7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f" Workload="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--qd8zt-eth0" 2019-11-04T18:46:18.152 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.152 [INFO][295908] ipam.go 1145: Releasing all IPs with handle 'chain.7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f' 2019-11-04T18:46:18.152 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.152 [INFO][295892] plugin.go 442: Extracted identifiers ContainerID="b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--2xcrj-eth0" 2019-11-04T18:46:18.154 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.154 [WARNING][295861] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:46:18.154 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.154 [INFO][295861] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-rbd--provisioner--7484d49cf6--frbhx-eth0", GenerateName:"rbd-provisioner-7484d49cf6-", Namespace:"kube-system", SelfLink:"", UID:"fbbf3d3e-ca3b-463b-9dc5-2d7dc9a750ba", ResourceVersion:"8145396", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708488755, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/serviceaccount":"rbd-provisioner", "app":"rbd-provisioner", "pod-template-hash":"7484d49cf6", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"rbd-provisioner-7484d49cf6-frbhx", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e335/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.rbd-provisioner"}, InterfaceName:"cali6499910b917", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T18:46:18.154 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.154 [INFO][295861] k8s.go 477: Releasing IP address(es) ContainerID="f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e" 2019-11-04T18:46:18.154 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.154 [INFO][295861] utils.go 171: Calico CNI releasing IP address ContainerID="f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e" 2019-11-04T18:46:18.158 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.158 [WARNING][295892] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:46:18.158 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.158 [INFO][295892] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-coredns--6bc668cd76--2xcrj-eth0", GenerateName:"coredns-6bc668cd76-", Namespace:"kube-system", SelfLink:"", UID:"0b61d7cb-a47f-4975-a90d-9d5745291ec3", ResourceVersion:"8145400", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708487229, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"pod-template-hash":"6bc668cd76", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns", "k8s-app":"kube-dns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"coredns-6bc668cd76-2xcrj", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e30d/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibaa4d7384d1", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}} 2019-11-04T18:46:18.158 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.158 [INFO][295892] k8s.go 477: Releasing IP address(es) ContainerID="b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790" 2019-11-04T18:46:18.158 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.158 [INFO][295892] utils.go 171: Calico CNI releasing IP address ContainerID="b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790" 2019-11-04T18:46:18.159 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.159 [INFO][295948] ipam_plugin.go 299: Releasing address using handleID ContainerID="4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7" HandleID="chain.4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7" Workload="controller--1-k8s-mon--kube--state--metrics--59947d74fb--n9xjb-eth0" 2019-11-04T18:46:18.159 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.159 [INFO][295948] ipam.go 1145: Releasing all IPs with handle 'chain.4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7' 2019-11-04T18:46:18.166 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.166 [INFO][295992] ipam_plugin.go 299: Releasing address using handleID ContainerID="84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58" HandleID="chain.84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58" Workload="controller--1-k8s-mon--kibana--6cf57cfd5b--h69qg-eth0" 2019-11-04T18:46:18.166 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.166 [INFO][295992] ipam.go 1145: Releasing all IPs with handle 'chain.84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58' 2019-11-04T18:46:18.167 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.167 [INFO][295964] plugin.go 442: Extracted identifiers ContainerID="72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae" 
Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:46:18.172 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.172 [INFO][296028] ipam_plugin.go 299: Releasing address using handleID ContainerID="f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e" HandleID="chain.f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--frbhx-eth0" 2019-11-04T18:46:18.172 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.172 [INFO][296028] ipam.go 1145: Releasing all IPs with handle 'chain.f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e' 2019-11-04T18:46:18.174 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.174 [WARNING][295964] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:46:18.174 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.174 [INFO][295964] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. ContainerID="72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--elasticsearch--client--1-eth0", GenerateName:"mon-elasticsearch-client-", Namespace:"monitor", SelfLink:"", UID:"193f83c1-6632-4268-8a94-8ce20a067385", ResourceVersion:"8145533", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708487233, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"mon-elasticsearch-client", "chart":"elasticsearch", "controller-revision-hash":"mon-elasticsearch-client-7c64d4f4fd", "heritage":"Tiller", "release":"mon-elasticsearch-client", "statefulset.kubernetes.io/pod-name":"mon-elasticsearch-client-1", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-elasticsearch-client-1", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e306/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"cali7eb1b3c61b4", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23f0}, v3.EndpointPort{Name:"transport", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x2454}}}} 2019-11-04T18:46:18.174 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.174 [INFO][295964] k8s.go 477: Releasing IP address(es) ContainerID="72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae" 2019-11-04T18:46:18.174 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.174 [INFO][295964] utils.go 171: Calico CNI releasing IP address ContainerID="72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae" 2019-11-04T18:46:18.174 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.174 [INFO][295908] ipam_plugin.go 308: Released address using handleID ContainerID="7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f" 
HandleID="chain.7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f" Workload="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--qd8zt-eth0" 2019-11-04T18:46:18.174 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.174 [INFO][295908] ipam_plugin.go 317: Releasing address using workloadID ContainerID="7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f" HandleID="chain.7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f" Workload="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--qd8zt-eth0" 2019-11-04T18:46:18.174 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.174 [INFO][295908] ipam.go 1145: Releasing all IPs with handle 'monitor.mon-nginx-ingress-default-backend-5997cfc99f-qd8zt' 2019-11-04T18:46:18.177 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.176 [INFO][295782] k8s.go 481: Cleaning up netns ContainerID="7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f" 2019-11-04T18:46:18.177 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.177 [INFO][295782] network_linux.go 450: Calico CNI deleting device in netns /proc/198975/ns/net ContainerID="7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f" 2019-11-04T18:46:18.178 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.178 [INFO][296034] ipam_plugin.go 299: Releasing address using handleID ContainerID="b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790" HandleID="chain.b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790" Workload="controller--1-k8s-coredns--6bc668cd76--2xcrj-eth0" 2019-11-04T18:46:18.178 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.178 [INFO][296034] ipam.go 1145: Releasing all IPs with handle 'chain.b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790' 2019-11-04T18:46:18.179 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.179 [INFO][295948] ipam_plugin.go 308: Released address using handleID ContainerID="4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7" HandleID="chain.4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7" Workload="controller--1-k8s-mon--kube--state--metrics--59947d74fb--n9xjb-eth0" 2019-11-04T18:46:18.179 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.179 [INFO][295948] ipam_plugin.go 317: Releasing address using workloadID ContainerID="4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7" HandleID="chain.4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7" Workload="controller--1-k8s-mon--kube--state--metrics--59947d74fb--n9xjb-eth0" 2019-11-04T18:46:18.179 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.179 [INFO][295948] ipam.go 1145: Releasing all IPs with handle 'monitor.mon-kube-state-metrics-59947d74fb-n9xjb' 2019-11-04T18:46:18.182 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.182 [INFO][295801] k8s.go 481: Cleaning up netns ContainerID="4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7" 2019-11-04T18:46:18.188 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.188 [INFO][295992] ipam_plugin.go 308: Released address using handleID ContainerID="84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58" HandleID="chain.84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58" Workload="controller--1-k8s-mon--kibana--6cf57cfd5b--h69qg-eth0" 2019-11-04T18:46:18.188 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.188 [INFO][295992] ipam_plugin.go 317: Releasing address using workloadID 
ContainerID="84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58" HandleID="chain.84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58" Workload="controller--1-k8s-mon--kibana--6cf57cfd5b--h69qg-eth0" 2019-11-04T18:46:18.188 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.188 [INFO][295992] ipam.go 1145: Releasing all IPs with handle 'monitor.mon-kibana-6cf57cfd5b-h69qg' 2019-11-04T18:46:18.190 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.190 [INFO][295830] k8s.go 481: Cleaning up netns ContainerID="84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58" 2019-11-04T18:46:18.193 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.193 [INFO][296087] ipam_plugin.go 299: Releasing address using handleID ContainerID="72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae" HandleID="chain.72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae" Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:46:18.193 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.193 [INFO][296087] ipam.go 1145: Releasing all IPs with handle 'chain.72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae' 2019-11-04T18:46:18.195 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.195 [INFO][296028] ipam_plugin.go 308: Released address using handleID ContainerID="f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e" HandleID="chain.f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--frbhx-eth0" 2019-11-04T18:46:18.195 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.195 [INFO][296028] ipam_plugin.go 317: Releasing address using workloadID ContainerID="f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e" HandleID="chain.f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--frbhx-eth0" 2019-11-04T18:46:18.195 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.195 [INFO][296028] ipam.go 1145: Releasing all IPs with handle 'kube-system.rbd-provisioner-7484d49cf6-frbhx' 2019-11-04T18:46:18.198 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.198 [INFO][295861] k8s.go 481: Cleaning up netns ContainerID="f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e" 2019-11-04T18:46:18.199 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.199 [INFO][296034] ipam_plugin.go 308: Released address using handleID ContainerID="b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790" HandleID="chain.b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790" Workload="controller--1-k8s-coredns--6bc668cd76--2xcrj-eth0" 2019-11-04T18:46:18.199 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.199 [INFO][296034] ipam_plugin.go 317: Releasing address using workloadID ContainerID="b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790" HandleID="chain.b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790" Workload="controller--1-k8s-coredns--6bc668cd76--2xcrj-eth0" 2019-11-04T18:46:18.199 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.199 [INFO][296034] ipam.go 1145: Releasing all IPs with handle 'kube-system.coredns-6bc668cd76-2xcrj' 2019-11-04T18:46:18.202 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.202 [INFO][295892] k8s.go 481: Cleaning up netns ContainerID="b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790" 2019-11-04T18:46:18.000 controller-1 lldpd[12254]: warning removal 
request for address of fe80::ecee:eeff:feee:eeee%25, but no knowledge of it 2019-11-04T18:46:18.214 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.214 [INFO][296087] ipam_plugin.go 308: Released address using handleID ContainerID="72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae" HandleID="chain.72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae" Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:46:18.214 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.214 [INFO][296087] ipam_plugin.go 317: Releasing address using workloadID ContainerID="72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae" HandleID="chain.72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae" Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:46:18.214 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.214 [INFO][296087] ipam.go 1145: Releasing all IPs with handle 'monitor.mon-elasticsearch-client-1' 2019-11-04T18:46:18.216 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.216 [INFO][295964] k8s.go 481: Cleaning up netns ContainerID="72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae" 2019-11-04T18:46:18.241 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.241 [INFO][295801] network_linux.go 450: Calico CNI deleting device in netns /proc/199012/ns/net ContainerID="4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7" 2019-11-04T18:46:18.241 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.241 [INFO][295892] network_linux.go 450: Calico CNI deleting device in netns /proc/95219/ns/net ContainerID="b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790" 2019-11-04T18:46:18.241 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.241 [INFO][295964] network_linux.go 450: Calico CNI deleting device in netns /proc/95247/ns/net ContainerID="72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae" 2019-11-04T18:46:18.000 controller-1 lldpd[12254]: warning removal request for address of fe80::ecee:eeff:feee:eeee%27, but no knowledge of it 2019-11-04T18:46:18.301 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.301 [INFO][295861] network_linux.go 450: Calico CNI deleting device in netns /proc/105141/ns/net ContainerID="f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e" 2019-11-04T18:46:18.301 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.301 [INFO][295830] network_linux.go 450: Calico CNI deleting device in netns /proc/198965/ns/net ContainerID="84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58" 2019-11-04T18:46:18.000 controller-1 lldpd[12254]: warning removal request for address of fe80::ecee:eeff:feee:eeee%16, but no knowledge of it 2019-11-04T18:46:18.000 controller-1 lldpd[12254]: warning removal request for address of fe80::ecee:eeff:feee:eeee%18, but no knowledge of it 2019-11-04T18:46:18.000 controller-1 lldpd[12254]: warning removal request for address of fe80::ecee:eeff:feee:eeee%26, but no knowledge of it 2019-11-04T18:46:18.000 controller-1 lldpd[12254]: warning removal request for address of fe80::ecee:eeff:feee:eeee%22, but no knowledge of it 2019-11-04T18:46:18.538 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.538 [INFO][295801] network_linux.go 467: Calico CNI deleted device in netns /proc/199012/ns/net ContainerID="4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7" 2019-11-04T18:46:18.538 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.538 [INFO][295782] 
network_linux.go 467: Calico CNI deleted device in netns /proc/198975/ns/net ContainerID="7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f" 2019-11-04T18:46:18.538 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.538 [INFO][295801] k8s.go 493: Teardown processing complete. ContainerID="4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7" 2019-11-04T18:46:18.538 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.538 [INFO][295782] k8s.go 493: Teardown processing complete. ContainerID="7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f" 2019-11-04T18:46:18.538 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.538 [INFO][295892] network_linux.go 467: Calico CNI deleted device in netns /proc/95219/ns/net ContainerID="b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790" 2019-11-04T18:46:18.538 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.538 [INFO][295892] k8s.go 493: Teardown processing complete. ContainerID="b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790" 2019-11-04T18:46:18.552 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.552 [INFO][295830] network_linux.go 467: Calico CNI deleted device in netns /proc/198965/ns/net ContainerID="84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58" 2019-11-04T18:46:18.552 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.552 [INFO][295964] network_linux.go 467: Calico CNI deleted device in netns /proc/95247/ns/net ContainerID="72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae" 2019-11-04T18:46:18.552 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.552 [INFO][295830] k8s.go 493: Teardown processing complete. ContainerID="84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58" 2019-11-04T18:46:18.552 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.552 [INFO][295964] k8s.go 493: Teardown processing complete. ContainerID="72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae" 2019-11-04T18:46:18.568 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.568 [INFO][295861] network_linux.go 467: Calico CNI deleted device in netns /proc/105141/ns/net ContainerID="f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e" 2019-11-04T18:46:18.568 controller-1 kubelet[88521]: info 2019-11-04 18:46:18.568 [INFO][295861] k8s.go 493: Teardown processing complete. ContainerID="f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e" 2019-11-04T18:46:18.579 controller-1 kubelet[88521]: info W1104 18:46:18.579098 88521 docker_sandbox.go:394] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "rbd-provisioner-7484d49cf6-frbhx_kube-system": unexpected command output Device "eth0" does not exist. 
2019-11-04T18:46:18.579 controller-1 kubelet[88521]: info with error: exit status 1 2019-11-04T18:46:18.673 controller-1 containerd[12218]: info time="2019-11-04T18:46:18.673344164Z" level=info msg="shim reaped" id=7195904f5b5c57b53bea7d2656a39a045233e54a99e1718b593876fd4163324f 2019-11-04T18:46:18.674 controller-1 kubelet[88521]: info I1104 18:46:18.674255 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "tiller-token-c6p8n" (UniqueName: "kubernetes.io/secret/b0c7cd69-b649-4ea7-999d-1d5a3f18c8c2-tiller-token-c6p8n") pod "b0c7cd69-b649-4ea7-999d-1d5a3f18c8c2" (UID: "b0c7cd69-b649-4ea7-999d-1d5a3f18c8c2") 2019-11-04T18:46:18.680 controller-1 containerd[12218]: info time="2019-11-04T18:46:18.680075738Z" level=info msg="shim reaped" id=4f7054e42de884f31f59d8e5f3b3d87b6b9b81ac3501f9dd0e9da956e41d34e7 2019-11-04T18:46:18.683 controller-1 dockerd[12258]: info time="2019-11-04T18:46:18.683258246Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:18.685 controller-1 kubelet[88521]: info I1104 18:46:18.685783 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0c7cd69-b649-4ea7-999d-1d5a3f18c8c2-tiller-token-c6p8n" (OuterVolumeSpecName: "tiller-token-c6p8n") pod "b0c7cd69-b649-4ea7-999d-1d5a3f18c8c2" (UID: "b0c7cd69-b649-4ea7-999d-1d5a3f18c8c2"). InnerVolumeSpecName "tiller-token-c6p8n". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:46:18.689 controller-1 dockerd[12258]: info time="2019-11-04T18:46:18.689934050Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:18.717 controller-1 containerd[12218]: info time="2019-11-04T18:46:18.717061541Z" level=info msg="shim reaped" id=b2ecdffb4f3bfe3e5f4a1133e675a7d6b7fe20cc6b50c8dc3bcfcfe84508d790 2019-11-04T18:46:18.727 controller-1 dockerd[12258]: info time="2019-11-04T18:46:18.726989964Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:18.735 controller-1 containerd[12218]: info time="2019-11-04T18:46:18.735016612Z" level=info msg="shim reaped" id=f1d7c0e0907410a3f7f0fb6a9576b3f2aac6ede07202d5314c637e1e4963516e 2019-11-04T18:46:18.743 controller-1 containerd[12218]: info time="2019-11-04T18:46:18.743265059Z" level=info msg="shim reaped" id=84af8efee3b6112f04246c245a0162c920dfc2b00aa763f5a8b16b1a4ed4fd58 2019-11-04T18:46:18.744 controller-1 dockerd[12258]: info time="2019-11-04T18:46:18.744815199Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:18.751 controller-1 containerd[12218]: info time="2019-11-04T18:46:18.751415292Z" level=info msg="shim reaped" id=72387c521be6d3e4796af8ebe91a0e29c5c8a6e780deffb231c28e883aa498ae 2019-11-04T18:46:18.753 controller-1 dockerd[12258]: info time="2019-11-04T18:46:18.753123166Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:18.761 controller-1 dockerd[12258]: info time="2019-11-04T18:46:18.761256110Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:18.774 controller-1 kubelet[88521]: info I1104 18:46:18.774548 88521 reconciler.go:301] Volume detached for volume "tiller-token-c6p8n" (UniqueName: 
"kubernetes.io/secret/b0c7cd69-b649-4ea7-999d-1d5a3f18c8c2-tiller-token-c6p8n") on node "controller-1" DevicePath "" 2019-11-04T18:46:19.664 controller-1 kubelet[88521]: info E1104 18:46:19.664321 88521 remote_runtime.go:295] ContainerStatus "319a6d8353e37831e21dcc8d2f44c0d781e5d62a91f628097341dec2fc682bda" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 319a6d8353e37831e21dcc8d2f44c0d781e5d62a91f628097341dec2fc682bda 2019-11-04T18:46:19.664 controller-1 kubelet[88521]: info E1104 18:46:19.664765 88521 remote_runtime.go:295] ContainerStatus "3e6d2e3f86e58e553b54416990eb4c09276eed8de841c550d794bcb0db7f48c6" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 3e6d2e3f86e58e553b54416990eb4c09276eed8de841c550d794bcb0db7f48c6 2019-11-04T18:46:20.000 controller-1 ntpd[87544]: info Deleting interface #31 calia195ccff09a, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=429 secs 2019-11-04T18:46:20.000 controller-1 ntpd[87544]: info Deleting interface #30 calib02fdf4cf91, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=429 secs 2019-11-04T18:46:20.000 controller-1 ntpd[87544]: info Deleting interface #29 calia758801dfb9, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=429 secs 2019-11-04T18:46:20.000 controller-1 ntpd[87544]: info Deleting interface #20 cali6499910b917, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=1215 secs 2019-11-04T18:46:20.000 controller-1 ntpd[87544]: info Deleting interface #16 calibaa4d7384d1, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=1269 secs 2019-11-04T18:46:20.000 controller-1 ntpd[87544]: info Deleting interface #14 cali7eb1b3c61b4, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=1269 secs 2019-11-04T18:46:20.678 controller-1 kubelet[88521]: info I1104 18:46:20.678645 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/193f83c1-6632-4268-8a94-8ce20a067385-default-token-88gsr") pod "193f83c1-6632-4268-8a94-8ce20a067385" (UID: "193f83c1-6632-4268-8a94-8ce20a067385") 2019-11-04T18:46:20.678 controller-1 kubelet[88521]: info I1104 18:46:20.678689 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "mon-kube-state-metrics-token-qj6tw" (UniqueName: "kubernetes.io/secret/e048eafc-2ed6-4a66-8ad0-57799d976d11-mon-kube-state-metrics-token-qj6tw") pod "e048eafc-2ed6-4a66-8ad0-57799d976d11" (UID: "e048eafc-2ed6-4a66-8ad0-57799d976d11") 2019-11-04T18:46:20.678 controller-1 kubelet[88521]: info I1104 18:46:20.678714 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/fd9861e3-2af5-4433-a8ec-2f3509f19b0b-default-token-88gsr") pod "fd9861e3-2af5-4433-a8ec-2f3509f19b0b" (UID: "fd9861e3-2af5-4433-a8ec-2f3509f19b0b") 2019-11-04T18:46:20.678 controller-1 kubelet[88521]: info I1104 18:46:20.678739 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "kibana" (UniqueName: "kubernetes.io/configmap/8958d9a6-f190-4920-87e4-03c61bfa595b-kibana") pod "8958d9a6-f190-4920-87e4-03c61bfa595b" (UID: "8958d9a6-f190-4920-87e4-03c61bfa595b") 2019-11-04T18:46:20.678 controller-1 kubelet[88521]: info I1104 18:46:20.678902 88521 reconciler.go:181] 
operationExecutor.UnmountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b61d7cb-a47f-4975-a90d-9d5745291ec3-config-volume") pod "0b61d7cb-a47f-4975-a90d-9d5745291ec3" (UID: "0b61d7cb-a47f-4975-a90d-9d5745291ec3") 2019-11-04T18:46:20.678 controller-1 kubelet[88521]: info I1104 18:46:20.678940 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "rbd-provisioner-token-587hn" (UniqueName: "kubernetes.io/secret/fbbf3d3e-ca3b-463b-9dc5-2d7dc9a750ba-rbd-provisioner-token-587hn") pod "fbbf3d3e-ca3b-463b-9dc5-2d7dc9a750ba" (UID: "fbbf3d3e-ca3b-463b-9dc5-2d7dc9a750ba") 2019-11-04T18:46:20.678 controller-1 kubelet[88521]: info I1104 18:46:20.678971 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "coredns-token-x97rb" (UniqueName: "kubernetes.io/secret/0b61d7cb-a47f-4975-a90d-9d5745291ec3-coredns-token-x97rb") pod "0b61d7cb-a47f-4975-a90d-9d5745291ec3" (UID: "0b61d7cb-a47f-4975-a90d-9d5745291ec3") 2019-11-04T18:46:20.679 controller-1 kubelet[88521]: info I1104 18:46:20.678998 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/8958d9a6-f190-4920-87e4-03c61bfa595b-default-token-88gsr") pod "8958d9a6-f190-4920-87e4-03c61bfa595b" (UID: "8958d9a6-f190-4920-87e4-03c61bfa595b") 2019-11-04T18:46:20.679 controller-1 kubelet[88521]: info W1104 18:46:20.679004 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/0b61d7cb-a47f-4975-a90d-9d5745291ec3/volumes/kubernetes.io~configmap/config-volume: ClearQuota called, but quotas disabled 2019-11-04T18:46:20.679 controller-1 kubelet[88521]: info I1104 18:46:20.679166 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b61d7cb-a47f-4975-a90d-9d5745291ec3-config-volume" (OuterVolumeSpecName: "config-volume") pod "0b61d7cb-a47f-4975-a90d-9d5745291ec3" (UID: "0b61d7cb-a47f-4975-a90d-9d5745291ec3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T18:46:20.692 controller-1 kubelet[88521]: info I1104 18:46:20.692755 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b61d7cb-a47f-4975-a90d-9d5745291ec3-coredns-token-x97rb" (OuterVolumeSpecName: "coredns-token-x97rb") pod "0b61d7cb-a47f-4975-a90d-9d5745291ec3" (UID: "0b61d7cb-a47f-4975-a90d-9d5745291ec3"). InnerVolumeSpecName "coredns-token-x97rb". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:46:20.692 controller-1 kubelet[88521]: info I1104 18:46:20.692886 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8958d9a6-f190-4920-87e4-03c61bfa595b-default-token-88gsr" (OuterVolumeSpecName: "default-token-88gsr") pod "8958d9a6-f190-4920-87e4-03c61bfa595b" (UID: "8958d9a6-f190-4920-87e4-03c61bfa595b"). InnerVolumeSpecName "default-token-88gsr". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:46:20.692 controller-1 kubelet[88521]: info I1104 18:46:20.692942 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbbf3d3e-ca3b-463b-9dc5-2d7dc9a750ba-rbd-provisioner-token-587hn" (OuterVolumeSpecName: "rbd-provisioner-token-587hn") pod "fbbf3d3e-ca3b-463b-9dc5-2d7dc9a750ba" (UID: "fbbf3d3e-ca3b-463b-9dc5-2d7dc9a750ba"). InnerVolumeSpecName "rbd-provisioner-token-587hn". 
PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:46:20.692 controller-1 kubelet[88521]: info I1104 18:46:20.692942 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/193f83c1-6632-4268-8a94-8ce20a067385-default-token-88gsr" (OuterVolumeSpecName: "default-token-88gsr") pod "193f83c1-6632-4268-8a94-8ce20a067385" (UID: "193f83c1-6632-4268-8a94-8ce20a067385"). InnerVolumeSpecName "default-token-88gsr". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:46:20.693 controller-1 kubelet[88521]: info W1104 18:46:20.692994 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/8958d9a6-f190-4920-87e4-03c61bfa595b/volumes/kubernetes.io~configmap/kibana: ClearQuota called, but quotas disabled 2019-11-04T18:46:20.693 controller-1 kubelet[88521]: info I1104 18:46:20.693139 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8958d9a6-f190-4920-87e4-03c61bfa595b-kibana" (OuterVolumeSpecName: "kibana") pod "8958d9a6-f190-4920-87e4-03c61bfa595b" (UID: "8958d9a6-f190-4920-87e4-03c61bfa595b"). InnerVolumeSpecName "kibana". PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T18:46:20.693 controller-1 kubelet[88521]: info I1104 18:46:20.693869 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd9861e3-2af5-4433-a8ec-2f3509f19b0b-default-token-88gsr" (OuterVolumeSpecName: "default-token-88gsr") pod "fd9861e3-2af5-4433-a8ec-2f3509f19b0b" (UID: "fd9861e3-2af5-4433-a8ec-2f3509f19b0b"). InnerVolumeSpecName "default-token-88gsr". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:46:20.693 controller-1 kubelet[88521]: info I1104 18:46:20.693886 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e048eafc-2ed6-4a66-8ad0-57799d976d11-mon-kube-state-metrics-token-qj6tw" (OuterVolumeSpecName: "mon-kube-state-metrics-token-qj6tw") pod "e048eafc-2ed6-4a66-8ad0-57799d976d11" (UID: "e048eafc-2ed6-4a66-8ad0-57799d976d11"). InnerVolumeSpecName "mon-kube-state-metrics-token-qj6tw". 
PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:46:20.779 controller-1 kubelet[88521]: info I1104 18:46:20.779264 88521 reconciler.go:301] Volume detached for volume "coredns-token-x97rb" (UniqueName: "kubernetes.io/secret/0b61d7cb-a47f-4975-a90d-9d5745291ec3-coredns-token-x97rb") on node "controller-1" DevicePath "" 2019-11-04T18:46:20.779 controller-1 kubelet[88521]: info I1104 18:46:20.779293 88521 reconciler.go:301] Volume detached for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/8958d9a6-f190-4920-87e4-03c61bfa595b-default-token-88gsr") on node "controller-1" DevicePath "" 2019-11-04T18:46:20.779 controller-1 kubelet[88521]: info I1104 18:46:20.779312 88521 reconciler.go:301] Volume detached for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/193f83c1-6632-4268-8a94-8ce20a067385-default-token-88gsr") on node "controller-1" DevicePath "" 2019-11-04T18:46:20.779 controller-1 kubelet[88521]: info I1104 18:46:20.779320 88521 reconciler.go:301] Volume detached for volume "kibana" (UniqueName: "kubernetes.io/configmap/8958d9a6-f190-4920-87e4-03c61bfa595b-kibana") on node "controller-1" DevicePath "" 2019-11-04T18:46:20.779 controller-1 kubelet[88521]: info I1104 18:46:20.779339 88521 reconciler.go:301] Volume detached for volume "mon-kube-state-metrics-token-qj6tw" (UniqueName: "kubernetes.io/secret/e048eafc-2ed6-4a66-8ad0-57799d976d11-mon-kube-state-metrics-token-qj6tw") on node "controller-1" DevicePath "" 2019-11-04T18:46:20.779 controller-1 kubelet[88521]: info I1104 18:46:20.779346 88521 reconciler.go:301] Volume detached for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/fd9861e3-2af5-4433-a8ec-2f3509f19b0b-default-token-88gsr") on node "controller-1" DevicePath "" 2019-11-04T18:46:20.779 controller-1 kubelet[88521]: info I1104 18:46:20.779364 88521 reconciler.go:301] Volume detached for volume "config-volume" (UniqueName: "kubernetes.io/configmap/0b61d7cb-a47f-4975-a90d-9d5745291ec3-config-volume") on node "controller-1" DevicePath "" 2019-11-04T18:46:20.779 controller-1 kubelet[88521]: info I1104 18:46:20.779371 88521 reconciler.go:301] Volume detached for volume "rbd-provisioner-token-587hn" (UniqueName: "kubernetes.io/secret/fbbf3d3e-ca3b-463b-9dc5-2d7dc9a750ba-rbd-provisioner-token-587hn") on node "controller-1" DevicePath "" 2019-11-04T18:46:22.689 controller-1 kubelet[88521]: info E1104 18:46:22.689428 88521 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "a9e29e1e108a18713438cfdd1e04ffa81520c65a2d4fbbd130fb9a5dbc38b813" for pod "ceph-pools-audit-1572892200-8zlmd_kube-system(81d53b26-bb42-4bca-9d47-762f95801c55)" error: rpc error: code = Unknown desc = Error: No such container: a9e29e1e108a18713438cfdd1e04ffa81520c65a2d4fbbd130fb9a5dbc38b813 2019-11-04T18:46:30.495 controller-1 containerd[12218]: info time="2019-11-04T18:46:30.495898016Z" level=info msg="shim reaped" id=78cbc171cce118acbea731c23f29be730dae9180b46d7723dc4c5ece05a1cb32 2019-11-04T18:46:30.505 controller-1 dockerd[12258]: info time="2019-11-04T18:46:30.505709840Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:30.625 controller-1 kubelet[88521]: info 2019-11-04 18:46:30.625 [INFO][298349] plugin.go 442: Extracted identifiers ContainerID="1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--hcp7h-eth0" 2019-11-04T18:46:30.632 
controller-1 kubelet[88521]: info 2019-11-04 18:46:30.632 [WARNING][298349] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:46:30.632 controller-1 kubelet[88521]: info 2019-11-04 18:46:30.632 [INFO][298349] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. ContainerID="1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--nginx--ingress--controller--hcp7h-eth0", GenerateName:"mon-nginx-ingress-controller-", Namespace:"monitor", SelfLink:"", UID:"4c64b3d0-34e3-4e7f-b65c-7d0935baeaa1", ResourceVersion:"8145773", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708488699, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/serviceaccount":"mon-nginx-ingress", "app":"nginx-ingress", "component":"controller", "controller-revision-hash":"866b74fd9d", "pod-template-generation":"1", "release":"mon-nginx-ingress", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-nginx-ingress-controller-hcp7h", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e313/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-nginx-ingress"}, InterfaceName:"cali2e0f4ae3beb", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x50}, v3.EndpointPort{Name:"https", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1bb}}}} 2019-11-04T18:46:30.632 controller-1 kubelet[88521]: info 2019-11-04 18:46:30.632 [INFO][298349] k8s.go 477: Releasing IP address(es) ContainerID="1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339" 2019-11-04T18:46:30.632 controller-1 kubelet[88521]: info 2019-11-04 18:46:30.632 [INFO][298349] utils.go 171: Calico CNI releasing IP address ContainerID="1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339" 2019-11-04T18:46:30.650 controller-1 kubelet[88521]: info 2019-11-04 18:46:30.650 [INFO][298373] ipam_plugin.go 299: Releasing address using handleID ContainerID="1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339" HandleID="chain.1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339" Workload="controller--1-k8s-mon--nginx--ingress--controller--hcp7h-eth0" 2019-11-04T18:46:30.650 controller-1 kubelet[88521]: info 2019-11-04 18:46:30.650 [INFO][298373] ipam.go 1145: Releasing all IPs with handle 'chain.1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339' 2019-11-04T18:46:30.672 controller-1 kubelet[88521]: info 2019-11-04 18:46:30.672 [INFO][298373] ipam_plugin.go 308: Released address using handleID ContainerID="1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339" HandleID="chain.1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339" Workload="controller--1-k8s-mon--nginx--ingress--controller--hcp7h-eth0" 2019-11-04T18:46:30.672 controller-1 kubelet[88521]: info 2019-11-04 
18:46:30.672 [INFO][298373] ipam_plugin.go 317: Releasing address using workloadID ContainerID="1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339" HandleID="chain.1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339" Workload="controller--1-k8s-mon--nginx--ingress--controller--hcp7h-eth0" 2019-11-04T18:46:30.672 controller-1 kubelet[88521]: info 2019-11-04 18:46:30.672 [INFO][298373] ipam.go 1145: Releasing all IPs with handle 'monitor.mon-nginx-ingress-controller-hcp7h' 2019-11-04T18:46:30.674 controller-1 kubelet[88521]: info 2019-11-04 18:46:30.674 [INFO][298349] k8s.go 481: Cleaning up netns ContainerID="1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339" 2019-11-04T18:46:30.674 controller-1 kubelet[88521]: info 2019-11-04 18:46:30.674 [INFO][298349] network_linux.go 450: Calico CNI deleting device in netns /proc/95227/ns/net ContainerID="1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339" 2019-11-04T18:46:30.000 controller-1 lldpd[12254]: warning removal request for address of fe80::ecee:eeff:feee:eeee%17, but no knowledge of it 2019-11-04T18:46:30.746 controller-1 kubelet[88521]: info 2019-11-04 18:46:30.746 [INFO][298349] network_linux.go 467: Calico CNI deleted device in netns /proc/95227/ns/net ContainerID="1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339" 2019-11-04T18:46:30.746 controller-1 kubelet[88521]: info 2019-11-04 18:46:30.746 [INFO][298349] k8s.go 493: Teardown processing complete. ContainerID="1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339" 2019-11-04T18:46:30.863 controller-1 containerd[12218]: info time="2019-11-04T18:46:30.863000328Z" level=info msg="shim reaped" id=1f6a3f54e3bc1f89bd9fb62969740818dfb4a4f14d48b80e9e3e7f413ef64339 2019-11-04T18:46:30.873 controller-1 dockerd[12258]: info time="2019-11-04T18:46:30.872995763Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:31.030 controller-1 kubelet[88521]: info W1104 18:46:31.030334 88521 prober.go:108] No ref for container "docker://78cbc171cce118acbea731c23f29be730dae9180b46d7723dc4c5ece05a1cb32" (mon-nginx-ingress-controller-hcp7h_monitor(4c64b3d0-34e3-4e7f-b65c-7d0935baeaa1):nginx-ingress-controller) 2019-11-04T18:46:31.902 controller-1 kubelet[88521]: info I1104 18:46:31.902676 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "mon-nginx-ingress-token-dgbmq" (UniqueName: "kubernetes.io/secret/4c64b3d0-34e3-4e7f-b65c-7d0935baeaa1-mon-nginx-ingress-token-dgbmq") pod "4c64b3d0-34e3-4e7f-b65c-7d0935baeaa1" (UID: "4c64b3d0-34e3-4e7f-b65c-7d0935baeaa1") 2019-11-04T18:46:31.922 controller-1 kubelet[88521]: info I1104 18:46:31.922778 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c64b3d0-34e3-4e7f-b65c-7d0935baeaa1-mon-nginx-ingress-token-dgbmq" (OuterVolumeSpecName: "mon-nginx-ingress-token-dgbmq") pod "4c64b3d0-34e3-4e7f-b65c-7d0935baeaa1" (UID: "4c64b3d0-34e3-4e7f-b65c-7d0935baeaa1"). InnerVolumeSpecName "mon-nginx-ingress-token-dgbmq". 
PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:46:32.002 controller-1 kubelet[88521]: info I1104 18:46:32.002875 88521 reconciler.go:301] Volume detached for volume "mon-nginx-ingress-token-dgbmq" (UniqueName: "kubernetes.io/secret/4c64b3d0-34e3-4e7f-b65c-7d0935baeaa1-mon-nginx-ingress-token-dgbmq") on node "controller-1" DevicePath "" 2019-11-04T18:46:32.000 controller-1 ntpd[87544]: info Deleting interface #15 cali2e0f4ae3beb, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=1281 secs 2019-11-04T18:46:33.637 controller-1 collectd[12249]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-1","degrade":"clear","resource":""} 2019-11-04T18:46:47.686 controller-1 dockerd[12258]: info time="2019-11-04T18:46:47.686009030Z" level=info msg="Container 99f6a93b487b02a87548f137135caecbc4cb32ef63f91bf8aaa4213cb37d0a94 failed to exit within 30 seconds of signal 15 - using the force" 2019-11-04T18:46:47.686 controller-1 dockerd[12258]: info time="2019-11-04T18:46:47.686192840Z" level=info msg="Container a09a4f24155733d9ed5c8be520a8a8dd7f2fcaad4205b9808d1118223e64cc24 failed to exit within 30 seconds of signal 15 - using the force" 2019-11-04T18:46:47.695 controller-1 dockerd[12258]: info time="2019-11-04T18:46:47.695538073Z" level=info msg="Container 1252dc4931489c57d0fc73264835611c0d968d136bff104463a2ebc6b659b988 failed to exit within 30 seconds of signal 15 - using the force" 2019-11-04T18:46:47.709 controller-1 dockerd[12258]: info time="2019-11-04T18:46:47.709120507Z" level=info msg="Container 27f91b626d60dc217c209017a5044f2a6c0045acc87c58fa41b993eb48a9632e failed to exit within 30 seconds of signal 15 - using the force" 2019-11-04T18:46:47.813 controller-1 containerd[12218]: info time="2019-11-04T18:46:47.813476254Z" level=info msg="shim reaped" id=a09a4f24155733d9ed5c8be520a8a8dd7f2fcaad4205b9808d1118223e64cc24 2019-11-04T18:46:47.814 controller-1 containerd[12218]: info time="2019-11-04T18:46:47.814084115Z" level=info msg="shim reaped" id=1252dc4931489c57d0fc73264835611c0d968d136bff104463a2ebc6b659b988 2019-11-04T18:46:47.823 controller-1 dockerd[12258]: info time="2019-11-04T18:46:47.823384173Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:47.824 controller-1 dockerd[12258]: info time="2019-11-04T18:46:47.824056992Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:47.883 controller-1 containerd[12218]: info time="2019-11-04T18:46:47.883658462Z" level=info msg="shim reaped" id=27f91b626d60dc217c209017a5044f2a6c0045acc87c58fa41b993eb48a9632e 2019-11-04T18:46:47.893 controller-1 dockerd[12258]: info time="2019-11-04T18:46:47.893594112Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:47.893 controller-1 containerd[12218]: info time="2019-11-04T18:46:47.893592515Z" level=info msg="shim reaped" id=99f6a93b487b02a87548f137135caecbc4cb32ef63f91bf8aaa4213cb37d0a94 2019-11-04T18:46:47.903 controller-1 dockerd[12258]: info time="2019-11-04T18:46:47.903460721Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:47.960 controller-1 containerd[12218]: info time="2019-11-04T18:46:47.960518356Z" level=info msg="shim reaped" id=9fa98c49e270b09f287df6d5fc10d2e50b079f731c04c741ccd40226ed1d3a90 
2019-11-04T18:46:47.961 controller-1 containerd[12218]: info time="2019-11-04T18:46:47.960993420Z" level=info msg="shim reaped" id=5fe3175c259d8d690c5b163aa811c91b9d1235ac44d019ecc14aa49f1e39e8d4 2019-11-04T18:46:47.970 controller-1 dockerd[12258]: info time="2019-11-04T18:46:47.970310946Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:47.970 controller-1 dockerd[12258]: info time="2019-11-04T18:46:47.970738057Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:48.022 controller-1 kubelet[88521]: info 2019-11-04 18:46:48.022 [INFO][301177] plugin.go 442: Extracted identifiers ContainerID="a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-mon--metricbeat--7948cd594c--kwpjb-eth0" 2019-11-04T18:46:48.028 controller-1 kubelet[88521]: info 2019-11-04 18:46:48.028 [WARNING][301177] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:46:48.028 controller-1 kubelet[88521]: info 2019-11-04 18:46:48.028 [INFO][301177] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. ContainerID="a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--metricbeat--7948cd594c--kwpjb-eth0", GenerateName:"mon-metricbeat-7948cd594c-", Namespace:"monitor", SelfLink:"", UID:"42aff923-3b79-48fd-b2ae-45921bfa46a3", ResourceVersion:"8145980", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489540, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"release":"mon-metricbeat", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"mon-metricbeat", "app":"metricbeat", "pod-template-hash":"7948cd594c"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-metricbeat-7948cd594c-kwpjb", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e32b/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-metricbeat"}, InterfaceName:"cali22f2ff92180", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T18:46:48.028 controller-1 kubelet[88521]: info 2019-11-04 18:46:48.028 [INFO][301177] k8s.go 477: Releasing IP address(es) ContainerID="a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7" 2019-11-04T18:46:48.028 controller-1 kubelet[88521]: info 2019-11-04 18:46:48.028 [INFO][301177] utils.go 171: Calico CNI releasing IP address ContainerID="a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7" 2019-11-04T18:46:48.040 controller-1 containerd[12218]: info time="2019-11-04T18:46:48.040470241Z" level=info msg="shim reaped" id=abd903866a41d34e14ae26320e6ff0a85dcad1d7577de1546f4372e0a34368fd 2019-11-04T18:46:48.048 controller-1 kubelet[88521]: info 2019-11-04 18:46:48.047 [INFO][301201] ipam_plugin.go 299: Releasing address using handleID 
ContainerID="a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7" HandleID="chain.a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7" Workload="controller--1-k8s-mon--metricbeat--7948cd594c--kwpjb-eth0" 2019-11-04T18:46:48.048 controller-1 kubelet[88521]: info 2019-11-04 18:46:48.047 [INFO][301201] ipam.go 1145: Releasing all IPs with handle 'chain.a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7' 2019-11-04T18:46:48.050 controller-1 dockerd[12258]: info time="2019-11-04T18:46:48.050553047Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:48.070 controller-1 kubelet[88521]: info 2019-11-04 18:46:48.070 [INFO][301201] ipam_plugin.go 308: Released address using handleID ContainerID="a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7" HandleID="chain.a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7" Workload="controller--1-k8s-mon--metricbeat--7948cd594c--kwpjb-eth0" 2019-11-04T18:46:48.070 controller-1 kubelet[88521]: info 2019-11-04 18:46:48.070 [INFO][301201] ipam_plugin.go 317: Releasing address using workloadID ContainerID="a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7" HandleID="chain.a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7" Workload="controller--1-k8s-mon--metricbeat--7948cd594c--kwpjb-eth0" 2019-11-04T18:46:48.070 controller-1 kubelet[88521]: info 2019-11-04 18:46:48.070 [INFO][301201] ipam.go 1145: Releasing all IPs with handle 'monitor.mon-metricbeat-7948cd594c-kwpjb' 2019-11-04T18:46:48.073 controller-1 kubelet[88521]: info 2019-11-04 18:46:48.073 [INFO][301177] k8s.go 481: Cleaning up netns ContainerID="a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7" 2019-11-04T18:46:48.073 controller-1 kubelet[88521]: info 2019-11-04 18:46:48.073 [INFO][301177] network_linux.go 450: Calico CNI deleting device in netns /proc/199135/ns/net ContainerID="a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7" 2019-11-04T18:46:48.000 controller-1 lldpd[12254]: warning removal request for address of fe80::ecee:eeff:feee:eeee%28, but no knowledge of it 2019-11-04T18:46:48.150 controller-1 kubelet[88521]: info 2019-11-04 18:46:48.150 [INFO][301177] network_linux.go 467: Calico CNI deleted device in netns /proc/199135/ns/net ContainerID="a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7" 2019-11-04T18:46:48.150 controller-1 kubelet[88521]: info 2019-11-04 18:46:48.150 [INFO][301177] k8s.go 493: Teardown processing complete. 
ContainerID="a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7" 2019-11-04T18:46:48.173 controller-1 kubelet[88521]: info W1104 18:46:48.173124 88521 docker_sandbox.go:394] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "mon-metricbeat-7948cd594c-kwpjb_monitor": unexpected command output nsenter: cannot open /proc/199135/ns/net: No such file or directory 2019-11-04T18:46:48.173 controller-1 kubelet[88521]: info with error: exit status 1 2019-11-04T18:46:48.237 controller-1 kubelet[88521]: info I1104 18:46:48.237836 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "cnibin" (UniqueName: "kubernetes.io/host-path/470fe5e7-08dd-4310-ac81-7520e37a88ea-cnibin") pod "470fe5e7-08dd-4310-ac81-7520e37a88ea" (UID: "470fe5e7-08dd-4310-ac81-7520e37a88ea") 2019-11-04T18:46:48.237 controller-1 kubelet[88521]: info I1104 18:46:48.237879 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "default-token-jxtxx" (UniqueName: "kubernetes.io/secret/470fe5e7-08dd-4310-ac81-7520e37a88ea-default-token-jxtxx") pod "470fe5e7-08dd-4310-ac81-7520e37a88ea" (UID: "470fe5e7-08dd-4310-ac81-7520e37a88ea") 2019-11-04T18:46:48.237 controller-1 kubelet[88521]: info I1104 18:46:48.237918 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "data" (UniqueName: "kubernetes.io/empty-dir/902bed75-d971-4868-83e1-1f629ca76b4c-data") pod "902bed75-d971-4868-83e1-1f629ca76b4c" (UID: "902bed75-d971-4868-83e1-1f629ca76b4c") 2019-11-04T18:46:48.237 controller-1 kubelet[88521]: info I1104 18:46:48.237938 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/470fe5e7-08dd-4310-ac81-7520e37a88ea-cnibin" (OuterVolumeSpecName: "cnibin") pod "470fe5e7-08dd-4310-ac81-7520e37a88ea" (UID: "470fe5e7-08dd-4310-ac81-7520e37a88ea"). InnerVolumeSpecName "cnibin". PluginName "kubernetes.io/host-path", VolumeGidValue "" 2019-11-04T18:46:48.237 controller-1 kubelet[88521]: info I1104 18:46:48.237964 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "cnibin" (UniqueName: "kubernetes.io/host-path/8fe130f5-1c66-429e-990e-dacffdbca8b9-cnibin") pod "8fe130f5-1c66-429e-990e-dacffdbca8b9" (UID: "8fe130f5-1c66-429e-990e-dacffdbca8b9") 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.237997 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/902bed75-d971-4868-83e1-1f629ca76b4c-default-token-88gsr") pod "902bed75-d971-4868-83e1-1f629ca76b4c" (UID: "902bed75-d971-4868-83e1-1f629ca76b4c") 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238017 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fe130f5-1c66-429e-990e-dacffdbca8b9-cnibin" (OuterVolumeSpecName: "cnibin") pod "8fe130f5-1c66-429e-990e-dacffdbca8b9" (UID: "8fe130f5-1c66-429e-990e-dacffdbca8b9"). InnerVolumeSpecName "cnibin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238030 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "pipeline" (UniqueName: "kubernetes.io/configmap/902bed75-d971-4868-83e1-1f629ca76b4c-pipeline") pod "902bed75-d971-4868-83e1-1f629ca76b4c" (UID: "902bed75-d971-4868-83e1-1f629ca76b4c") 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info W1104 18:46:48.238029 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/902bed75-d971-4868-83e1-1f629ca76b4c/volumes/kubernetes.io~empty-dir/data: ClearQuota called, but quotas disabled 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238054 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "multus-token-dtj6m" (UniqueName: "kubernetes.io/secret/8fe130f5-1c66-429e-990e-dacffdbca8b9-multus-token-dtj6m") pod "8fe130f5-1c66-429e-990e-dacffdbca8b9" (UID: "8fe130f5-1c66-429e-990e-dacffdbca8b9") 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238081 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "multus-cfg" (UniqueName: "kubernetes.io/configmap/8fe130f5-1c66-429e-990e-dacffdbca8b9-multus-cfg") pod "8fe130f5-1c66-429e-990e-dacffdbca8b9" (UID: "8fe130f5-1c66-429e-990e-dacffdbca8b9") 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238114 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "patterns" (UniqueName: "kubernetes.io/configmap/902bed75-d971-4868-83e1-1f629ca76b4c-patterns") pod "902bed75-d971-4868-83e1-1f629ca76b4c" (UID: "902bed75-d971-4868-83e1-1f629ca76b4c") 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238148 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "files" (UniqueName: "kubernetes.io/configmap/902bed75-d971-4868-83e1-1f629ca76b4c-files") pod "902bed75-d971-4868-83e1-1f629ca76b4c" (UID: "902bed75-d971-4868-83e1-1f629ca76b4c") 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info W1104 18:46:48.238163 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/8fe130f5-1c66-429e-990e-dacffdbca8b9/volumes/kubernetes.io~configmap/multus-cfg: ClearQuota called, but quotas disabled 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238206 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8fe130f5-1c66-429e-990e-dacffdbca8b9-cni" (OuterVolumeSpecName: "cni") pod "8fe130f5-1c66-429e-990e-dacffdbca8b9" (UID: "8fe130f5-1c66-429e-990e-dacffdbca8b9"). InnerVolumeSpecName "cni". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238179 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "cni" (UniqueName: "kubernetes.io/host-path/8fe130f5-1c66-429e-990e-dacffdbca8b9-cni") pod "8fe130f5-1c66-429e-990e-dacffdbca8b9" (UID: "8fe130f5-1c66-429e-990e-dacffdbca8b9") 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info W1104 18:46:48.238227 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/902bed75-d971-4868-83e1-1f629ca76b4c/volumes/kubernetes.io~configmap/patterns: ClearQuota called, but quotas disabled 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info W1104 18:46:48.238258 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/902bed75-d971-4868-83e1-1f629ca76b4c/volumes/kubernetes.io~configmap/files: ClearQuota called, but quotas disabled 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238276 88521 reconciler.go:301] Volume detached for volume "cni" (UniqueName: "kubernetes.io/host-path/8fe130f5-1c66-429e-990e-dacffdbca8b9-cni") on node "controller-1" DevicePath "" 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238245 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/902bed75-d971-4868-83e1-1f629ca76b4c-data" (OuterVolumeSpecName: "data") pod "902bed75-d971-4868-83e1-1f629ca76b4c" (UID: "902bed75-d971-4868-83e1-1f629ca76b4c"). InnerVolumeSpecName "data". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238290 88521 reconciler.go:301] Volume detached for volume "cnibin" (UniqueName: "kubernetes.io/host-path/470fe5e7-08dd-4310-ac81-7520e37a88ea-cnibin") on node "controller-1" DevicePath "" 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info W1104 18:46:48.238281 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/902bed75-d971-4868-83e1-1f629ca76b4c/volumes/kubernetes.io~configmap/pipeline: ClearQuota called, but quotas disabled 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238299 88521 reconciler.go:301] Volume detached for volume "cnibin" (UniqueName: "kubernetes.io/host-path/8fe130f5-1c66-429e-990e-dacffdbca8b9-cnibin") on node "controller-1" DevicePath "" 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238343 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8fe130f5-1c66-429e-990e-dacffdbca8b9-multus-cfg" (OuterVolumeSpecName: "multus-cfg") pod "8fe130f5-1c66-429e-990e-dacffdbca8b9" (UID: "8fe130f5-1c66-429e-990e-dacffdbca8b9"). InnerVolumeSpecName "multus-cfg". PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238418 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/902bed75-d971-4868-83e1-1f629ca76b4c-patterns" (OuterVolumeSpecName: "patterns") pod "902bed75-d971-4868-83e1-1f629ca76b4c" (UID: "902bed75-d971-4868-83e1-1f629ca76b4c"). InnerVolumeSpecName "patterns". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238423 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/902bed75-d971-4868-83e1-1f629ca76b4c-files" (OuterVolumeSpecName: "files") pod "902bed75-d971-4868-83e1-1f629ca76b4c" (UID: "902bed75-d971-4868-83e1-1f629ca76b4c"). InnerVolumeSpecName "files". PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T18:46:48.238 controller-1 kubelet[88521]: info I1104 18:46:48.238501 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/902bed75-d971-4868-83e1-1f629ca76b4c-pipeline" (OuterVolumeSpecName: "pipeline") pod "902bed75-d971-4868-83e1-1f629ca76b4c" (UID: "902bed75-d971-4868-83e1-1f629ca76b4c"). InnerVolumeSpecName "pipeline". PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T18:46:48.249 controller-1 kubelet[88521]: info I1104 18:46:48.249751 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/470fe5e7-08dd-4310-ac81-7520e37a88ea-default-token-jxtxx" (OuterVolumeSpecName: "default-token-jxtxx") pod "470fe5e7-08dd-4310-ac81-7520e37a88ea" (UID: "470fe5e7-08dd-4310-ac81-7520e37a88ea"). InnerVolumeSpecName "default-token-jxtxx". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:46:48.251 controller-1 kubelet[88521]: info I1104 18:46:48.251818 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fe130f5-1c66-429e-990e-dacffdbca8b9-multus-token-dtj6m" (OuterVolumeSpecName: "multus-token-dtj6m") pod "8fe130f5-1c66-429e-990e-dacffdbca8b9" (UID: "8fe130f5-1c66-429e-990e-dacffdbca8b9"). InnerVolumeSpecName "multus-token-dtj6m". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:46:48.251 controller-1 kubelet[88521]: info I1104 18:46:48.251818 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/902bed75-d971-4868-83e1-1f629ca76b4c-default-token-88gsr" (OuterVolumeSpecName: "default-token-88gsr") pod "902bed75-d971-4868-83e1-1f629ca76b4c" (UID: "902bed75-d971-4868-83e1-1f629ca76b4c"). InnerVolumeSpecName "default-token-88gsr". 
PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:46:48.293 controller-1 containerd[12218]: info time="2019-11-04T18:46:48.292944859Z" level=info msg="shim reaped" id=a19e983ae23f8573eab24ed0beba452c3f8f7ebf258a3e832da9e21e4671eea7 2019-11-04T18:46:48.302 controller-1 dockerd[12258]: info time="2019-11-04T18:46:48.302790036Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:46:48.338 controller-1 kubelet[88521]: info I1104 18:46:48.338518 88521 reconciler.go:301] Volume detached for volume "default-token-jxtxx" (UniqueName: "kubernetes.io/secret/470fe5e7-08dd-4310-ac81-7520e37a88ea-default-token-jxtxx") on node "controller-1" DevicePath "" 2019-11-04T18:46:48.338 controller-1 kubelet[88521]: info I1104 18:46:48.338536 88521 reconciler.go:301] Volume detached for volume "data" (UniqueName: "kubernetes.io/empty-dir/902bed75-d971-4868-83e1-1f629ca76b4c-data") on node "controller-1" DevicePath "" 2019-11-04T18:46:48.338 controller-1 kubelet[88521]: info I1104 18:46:48.338545 88521 reconciler.go:301] Volume detached for volume "multus-token-dtj6m" (UniqueName: "kubernetes.io/secret/8fe130f5-1c66-429e-990e-dacffdbca8b9-multus-token-dtj6m") on node "controller-1" DevicePath "" 2019-11-04T18:46:48.338 controller-1 kubelet[88521]: info I1104 18:46:48.338553 88521 reconciler.go:301] Volume detached for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/902bed75-d971-4868-83e1-1f629ca76b4c-default-token-88gsr") on node "controller-1" DevicePath "" 2019-11-04T18:46:48.338 controller-1 kubelet[88521]: info I1104 18:46:48.338562 88521 reconciler.go:301] Volume detached for volume "pipeline" (UniqueName: "kubernetes.io/configmap/902bed75-d971-4868-83e1-1f629ca76b4c-pipeline") on node "controller-1" DevicePath "" 2019-11-04T18:46:48.338 controller-1 kubelet[88521]: info I1104 18:46:48.338571 88521 reconciler.go:301] Volume detached for volume "multus-cfg" (UniqueName: "kubernetes.io/configmap/8fe130f5-1c66-429e-990e-dacffdbca8b9-multus-cfg") on node "controller-1" DevicePath "" 2019-11-04T18:46:48.338 controller-1 kubelet[88521]: info I1104 18:46:48.338579 88521 reconciler.go:301] Volume detached for volume "patterns" (UniqueName: "kubernetes.io/configmap/902bed75-d971-4868-83e1-1f629ca76b4c-patterns") on node "controller-1" DevicePath "" 2019-11-04T18:46:48.338 controller-1 kubelet[88521]: info I1104 18:46:48.338586 88521 reconciler.go:301] Volume detached for volume "files" (UniqueName: "kubernetes.io/configmap/902bed75-d971-4868-83e1-1f629ca76b4c-files") on node "controller-1" DevicePath "" 2019-11-04T18:46:49.901 controller-1 kubelet[88521]: info E1104 18:46:49.901253 88521 remote_runtime.go:295] ContainerStatus "99f6a93b487b02a87548f137135caecbc4cb32ef63f91bf8aaa4213cb37d0a94" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 99f6a93b487b02a87548f137135caecbc4cb32ef63f91bf8aaa4213cb37d0a94 2019-11-04T18:46:49.901 controller-1 kubelet[88521]: info E1104 18:46:49.901375 88521 remote_runtime.go:295] ContainerStatus "27f91b626d60dc217c209017a5044f2a6c0045acc87c58fa41b993eb48a9632e" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 27f91b626d60dc217c209017a5044f2a6c0045acc87c58fa41b993eb48a9632e 2019-11-04T18:46:49.902 controller-1 kubelet[88521]: info E1104 18:46:49.902248 88521 kubelet_pods.go:1093] Failed killing the pod "mon-metricbeat-7948cd594c-kwpjb": failed to "KillContainer" for "metricbeat" with 
KillContainerError: "rpc error: code = Unknown desc = Error: No such container: 99f6a93b487b02a87548f137135caecbc4cb32ef63f91bf8aaa4213cb37d0a94" 2019-11-04T18:46:49.902 controller-1 kubelet[88521]: info E1104 18:46:49.902426 88521 kubelet_pods.go:1093] Failed killing the pod "mon-logstash-0": failed to "KillContainer" for "logstash" with KillContainerError: "rpc error: code = Unknown desc = Error: No such container: 27f91b626d60dc217c209017a5044f2a6c0045acc87c58fa41b993eb48a9632e" 2019-11-04T18:46:50.000 controller-1 ntpd[87544]: info Deleting interface #28 cali22f2ff92180, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=459 secs 2019-11-04T18:46:50.242 controller-1 kubelet[88521]: info I1104 18:46:50.242401 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "metricbeat-config" (UniqueName: "kubernetes.io/secret/42aff923-3b79-48fd-b2ae-45921bfa46a3-metricbeat-config") pod "42aff923-3b79-48fd-b2ae-45921bfa46a3" (UID: "42aff923-3b79-48fd-b2ae-45921bfa46a3") 2019-11-04T18:46:50.242 controller-1 kubelet[88521]: info I1104 18:46:50.242445 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "mon-metricbeat-token-5vdfc" (UniqueName: "kubernetes.io/secret/42aff923-3b79-48fd-b2ae-45921bfa46a3-mon-metricbeat-token-5vdfc") pod "42aff923-3b79-48fd-b2ae-45921bfa46a3" (UID: "42aff923-3b79-48fd-b2ae-45921bfa46a3") 2019-11-04T18:46:50.242 controller-1 kubelet[88521]: info I1104 18:46:50.242470 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "modules" (UniqueName: "kubernetes.io/secret/42aff923-3b79-48fd-b2ae-45921bfa46a3-modules") pod "42aff923-3b79-48fd-b2ae-45921bfa46a3" (UID: "42aff923-3b79-48fd-b2ae-45921bfa46a3") 2019-11-04T18:46:50.242 controller-1 kubelet[88521]: info I1104 18:46:50.242520 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "root" (UniqueName: "kubernetes.io/host-path/42aff923-3b79-48fd-b2ae-45921bfa46a3-root") pod "42aff923-3b79-48fd-b2ae-45921bfa46a3" (UID: "42aff923-3b79-48fd-b2ae-45921bfa46a3") 2019-11-04T18:46:50.242 controller-1 kubelet[88521]: info I1104 18:46:50.242581 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/42aff923-3b79-48fd-b2ae-45921bfa46a3-root" (OuterVolumeSpecName: "root") pod "42aff923-3b79-48fd-b2ae-45921bfa46a3" (UID: "42aff923-3b79-48fd-b2ae-45921bfa46a3"). InnerVolumeSpecName "root". PluginName "kubernetes.io/host-path", VolumeGidValue "" 2019-11-04T18:46:50.262 controller-1 kubelet[88521]: info I1104 18:46:50.262738 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42aff923-3b79-48fd-b2ae-45921bfa46a3-modules" (OuterVolumeSpecName: "modules") pod "42aff923-3b79-48fd-b2ae-45921bfa46a3" (UID: "42aff923-3b79-48fd-b2ae-45921bfa46a3"). InnerVolumeSpecName "modules". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:46:50.263 controller-1 kubelet[88521]: info I1104 18:46:50.263853 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42aff923-3b79-48fd-b2ae-45921bfa46a3-mon-metricbeat-token-5vdfc" (OuterVolumeSpecName: "mon-metricbeat-token-5vdfc") pod "42aff923-3b79-48fd-b2ae-45921bfa46a3" (UID: "42aff923-3b79-48fd-b2ae-45921bfa46a3"). InnerVolumeSpecName "mon-metricbeat-token-5vdfc". 
PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:46:50.278 controller-1 kubelet[88521]: info I1104 18:46:50.278820 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42aff923-3b79-48fd-b2ae-45921bfa46a3-metricbeat-config" (OuterVolumeSpecName: "metricbeat-config") pod "42aff923-3b79-48fd-b2ae-45921bfa46a3" (UID: "42aff923-3b79-48fd-b2ae-45921bfa46a3"). InnerVolumeSpecName "metricbeat-config". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:46:50.342 controller-1 kubelet[88521]: info I1104 18:46:50.342737 88521 reconciler.go:301] Volume detached for volume "metricbeat-config" (UniqueName: "kubernetes.io/secret/42aff923-3b79-48fd-b2ae-45921bfa46a3-metricbeat-config") on node "controller-1" DevicePath "" 2019-11-04T18:46:50.342 controller-1 kubelet[88521]: info I1104 18:46:50.342759 88521 reconciler.go:301] Volume detached for volume "mon-metricbeat-token-5vdfc" (UniqueName: "kubernetes.io/secret/42aff923-3b79-48fd-b2ae-45921bfa46a3-mon-metricbeat-token-5vdfc") on node "controller-1" DevicePath "" 2019-11-04T18:46:50.342 controller-1 kubelet[88521]: info I1104 18:46:50.342776 88521 reconciler.go:301] Volume detached for volume "modules" (UniqueName: "kubernetes.io/secret/42aff923-3b79-48fd-b2ae-45921bfa46a3-modules") on node "controller-1" DevicePath "" 2019-11-04T18:46:50.342 controller-1 kubelet[88521]: info I1104 18:46:50.342782 88521 reconciler.go:301] Volume detached for volume "root" (UniqueName: "kubernetes.io/host-path/42aff923-3b79-48fd-b2ae-45921bfa46a3-root") on node "controller-1" DevicePath "" 2019-11-04T18:47:00.679 controller-1 kubelet[88521]: info E1104 18:47:00.679854 88521 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/9a32555b3bd5aa2fa3556461ed9edaf8f6cd689bdce9f125cf98697263561767/diff" to get inode usage: stat /var/lib/docker/overlay2/9a32555b3bd5aa2fa3556461ed9edaf8f6cd689bdce9f125cf98697263561767/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/27f91b626d60dc217c209017a5044f2a6c0045acc87c58fa41b993eb48a9632e" to get inode usage: stat /var/lib/docker/containers/27f91b626d60dc217c209017a5044f2a6c0045acc87c58fa41b993eb48a9632e: no such file or directory 2019-11-04T18:47:02.514 controller-1 kubelet[88521]: info E1104 18:47:02.514408 88521 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/c944f4f1bdbd7635a59c04a01f2c74773b61279ccfaa59ff6213044d1882906b/diff" to get inode usage: stat /var/lib/docker/overlay2/c944f4f1bdbd7635a59c04a01f2c74773b61279ccfaa59ff6213044d1882906b/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/a09a4f24155733d9ed5c8be520a8a8dd7f2fcaad4205b9808d1118223e64cc24" to get inode usage: stat /var/lib/docker/containers/a09a4f24155733d9ed5c8be520a8a8dd7f2fcaad4205b9808d1118223e64cc24: no such file or directory 2019-11-04T18:47:02.521 controller-1 kubelet[88521]: info E1104 18:47:02.521030 88521 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/77caee6fb5fac0fc6d605e0697d0925e0f3e9a44baaafcf13b666c106158af05/diff" to get inode usage: stat /var/lib/docker/overlay2/77caee6fb5fac0fc6d605e0697d0925e0f3e9a44baaafcf13b666c106158af05/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/1252dc4931489c57d0fc73264835611c0d968d136bff104463a2ebc6b659b988" to get inode usage: stat 
/var/lib/docker/containers/1252dc4931489c57d0fc73264835611c0d968d136bff104463a2ebc6b659b988: no such file or directory 2019-11-04T18:47:08.480 controller-1 kubelet[88521]: info E1104 18:47:08.480514 88521 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/f366af554da961c52a11189836eff7b523d71ecf174808378706361ee9b289a5/diff" to get inode usage: stat /var/lib/docker/overlay2/f366af554da961c52a11189836eff7b523d71ecf174808378706361ee9b289a5/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/78cbc171cce118acbea731c23f29be730dae9180b46d7723dc4c5ece05a1cb32" to get inode usage: stat /var/lib/docker/containers/78cbc171cce118acbea731c23f29be730dae9180b46d7723dc4c5ece05a1cb32: no such file or directory 2019-11-04T18:47:08.685 controller-1 kubelet[88521]: info E1104 18:47:08.685836 88521 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/838266c2d906b0624f9303b226622c84b4ac08484882303b6a769374a56e0a05/diff" to get inode usage: stat /var/lib/docker/overlay2/838266c2d906b0624f9303b226622c84b4ac08484882303b6a769374a56e0a05/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/99f6a93b487b02a87548f137135caecbc4cb32ef63f91bf8aaa4213cb37d0a94" to get inode usage: stat /var/lib/docker/containers/99f6a93b487b02a87548f137135caecbc4cb32ef63f91bf8aaa4213cb37d0a94: no such file or directory 2019-11-04T18:48:17.677 controller-1 kubelet[88521]: info E1104 18:48:17.677721 88521 remote_runtime.go:243] StopContainer "395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" from runtime service failed: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:48:17.677 controller-1 kubelet[88521]: info E1104 18:48:17.677771 88521 kuberuntime_container.go:590] Container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" termination failed with gracePeriod 120: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:48:17.688 controller-1 kubelet[88521]: info E1104 18:48:17.688605 88521 remote_runtime.go:243] StopContainer "9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" from runtime service failed: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:48:17.688 controller-1 kubelet[88521]: info E1104 18:48:17.688634 88521 kuberuntime_container.go:590] Container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" termination failed with gracePeriod 120: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:48:17.703 controller-1 dockerd[12258]: info time="2019-11-04T18:48:17.703628538Z" level=info msg="Container 395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f failed to exit within 120 seconds of signal 15 - using the force" 2019-11-04T18:48:17.706 controller-1 dockerd[12258]: info time="2019-11-04T18:48:17.706453837Z" level=info msg="Container 9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6 failed to exit within 120 seconds of signal 15 - using the force" 2019-11-04T18:48:17.784 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.784 [INFO][313867] plugin.go 442: Extracted identifiers ContainerID="5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--data--1-eth0" 
2019-11-04T18:48:17.791 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.791 [WARNING][313867] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:48:17.791 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.791 [INFO][313867] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. ContainerID="5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--elasticsearch--data--1-eth0", GenerateName:"mon-elasticsearch-data-", Namespace:"monitor", SelfLink:"", UID:"99913751-ab01-4a00-8e4f-ff54b0232e5d", ResourceVersion:"8146893", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708487688, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"chart":"elasticsearch", "controller-revision-hash":"mon-elasticsearch-data-dc668b5cf", "release":"mon-elasticsearch-data", "app":"mon-elasticsearch-data", "statefulset.kubernetes.io/pod-name":"mon-elasticsearch-data-1", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default", "heritage":"Tiller"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-elasticsearch-data-1", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e32f/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"calibfabab83f74", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23f0}, v3.EndpointPort{Name:"transport", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x2454}}}} 2019-11-04T18:48:17.791 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.791 [INFO][313867] k8s.go 477: Releasing IP address(es) ContainerID="5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04" 2019-11-04T18:48:17.791 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.791 [INFO][313867] utils.go 171: Calico CNI releasing IP address ContainerID="5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04" 2019-11-04T18:48:17.807 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.807 [INFO][313909] plugin.go 442: Extracted identifiers ContainerID="87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--master--1-eth0" 2019-11-04T18:48:17.810 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.810 [INFO][313914] ipam_plugin.go 299: Releasing address using handleID ContainerID="5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04" HandleID="chain.5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04" Workload="controller--1-k8s-mon--elasticsearch--data--1-eth0" 2019-11-04T18:48:17.810 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.810 [INFO][313914] ipam.go 1145: Releasing all IPs with handle 'chain.5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04' 2019-11-04T18:48:17.814 controller-1 kubelet[88521]: info 2019-11-04 
18:48:17.814 [WARNING][313909] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:48:17.814 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.814 [INFO][313909] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. ContainerID="87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--elasticsearch--master--1-eth0", GenerateName:"mon-elasticsearch-master-", Namespace:"monitor", SelfLink:"", UID:"5edf03ac-2483-4c65-ba4d-f40dde7dbf65", ResourceVersion:"8146895", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708487231, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"mon-elasticsearch-master", "controller-revision-hash":"mon-elasticsearch-master-6fbc49c65b", "heritage":"Tiller", "statefulset.kubernetes.io/pod-name":"mon-elasticsearch-master-1", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default", "chart":"elasticsearch", "release":"mon-elasticsearch-master", "projectcalico.org/namespace":"monitor"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-elasticsearch-master-1", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e33f/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"calif772c92d8f9", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23f0}, v3.EndpointPort{Name:"transport", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x2454}}}} 2019-11-04T18:48:17.814 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.814 [INFO][313909] k8s.go 477: Releasing IP address(es) ContainerID="87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f" 2019-11-04T18:48:17.814 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.814 [INFO][313909] utils.go 171: Calico CNI releasing IP address ContainerID="87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f" 2019-11-04T18:48:17.832 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.831 [INFO][313914] ipam_plugin.go 308: Released address using handleID ContainerID="5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04" HandleID="chain.5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04" Workload="controller--1-k8s-mon--elasticsearch--data--1-eth0" 2019-11-04T18:48:17.832 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.832 [INFO][313914] ipam_plugin.go 317: Releasing address using workloadID ContainerID="5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04" HandleID="chain.5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04" Workload="controller--1-k8s-mon--elasticsearch--data--1-eth0" 2019-11-04T18:48:17.832 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.832 [INFO][313914] ipam.go 1145: Releasing all IPs with handle 'monitor.mon-elasticsearch-data-1' 2019-11-04T18:48:17.833 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.833 [INFO][313956] ipam_plugin.go 
299: Releasing address using handleID ContainerID="87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f" HandleID="chain.87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f" Workload="controller--1-k8s-mon--elasticsearch--master--1-eth0" 2019-11-04T18:48:17.833 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.833 [INFO][313956] ipam.go 1145: Releasing all IPs with handle 'chain.87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f' 2019-11-04T18:48:17.835 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.835 [INFO][313867] k8s.go 481: Cleaning up netns ContainerID="5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04" 2019-11-04T18:48:17.835 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.835 [INFO][313867] network_linux.go 450: Calico CNI deleting device in netns /proc/97183/ns/net ContainerID="5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04" 2019-11-04T18:48:17.855 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.855 [INFO][313956] ipam_plugin.go 308: Released address using handleID ContainerID="87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f" HandleID="chain.87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f" Workload="controller--1-k8s-mon--elasticsearch--master--1-eth0" 2019-11-04T18:48:17.855 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.855 [INFO][313956] ipam_plugin.go 317: Releasing address using workloadID ContainerID="87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f" HandleID="chain.87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f" Workload="controller--1-k8s-mon--elasticsearch--master--1-eth0" 2019-11-04T18:48:17.855 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.855 [INFO][313956] ipam.go 1145: Releasing all IPs with handle 'monitor.mon-elasticsearch-master-1' 2019-11-04T18:48:17.857 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.857 [INFO][313909] k8s.go 481: Cleaning up netns ContainerID="87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f" 2019-11-04T18:48:17.000 controller-1 lldpd[12254]: warning removal request for address of fe80::ecee:eeff:feee:eeee%19, but no knowledge of it 2019-11-04T18:48:17.894 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.894 [INFO][313909] network_linux.go 450: Calico CNI deleting device in netns /proc/97844/ns/net ContainerID="87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f" 2019-11-04T18:48:17.000 controller-1 lldpd[12254]: warning removal request for address of fe80::ecee:eeff:feee:eeee%20, but no knowledge of it 2019-11-04T18:48:17.953 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.952 [INFO][313867] network_linux.go 467: Calico CNI deleted device in netns /proc/97183/ns/net ContainerID="5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04" 2019-11-04T18:48:17.953 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.952 [INFO][313867] k8s.go 493: Teardown processing complete. ContainerID="5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04" 2019-11-04T18:48:17.968 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.968 [INFO][313909] network_linux.go 467: Calico CNI deleted device in netns /proc/97844/ns/net ContainerID="87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f" 2019-11-04T18:48:17.968 controller-1 kubelet[88521]: info 2019-11-04 18:48:17.968 [INFO][313909] k8s.go 493: Teardown processing complete. 
ContainerID="87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f" 2019-11-04T18:48:18.065 controller-1 containerd[12218]: info time="2019-11-04T18:48:18.065443970Z" level=info msg="shim reaped" id=5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04 2019-11-04T18:48:18.075 controller-1 dockerd[12258]: info time="2019-11-04T18:48:18.075358339Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:48:18.075 controller-1 containerd[12218]: info time="2019-11-04T18:48:18.075893606Z" level=info msg="shim reaped" id=87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f 2019-11-04T18:48:18.085 controller-1 dockerd[12258]: info time="2019-11-04T18:48:18.085870096Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:48:18.104 controller-1 dockerd[12258]: info time="2019-11-04T18:48:18.104390317Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:18.104 controller-1 dockerd[12258]: info time="2019-11-04T18:48:18.104390570Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:18.107 controller-1 kubelet[88521]: info E1104 18:48:18.107686 88521 kubelet.go:1576] error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:48:18.107 controller-1 kubelet[88521]: info E1104 18:48:18.107746 88521 pod_workers.go:191] Error syncing pod 99913751-ab01-4a00-8e4f-ff54b0232e5d ("mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d)"), skipping: error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:48:18.117 controller-1 kubelet[88521]: info E1104 18:48:18.117402 88521 kubelet.go:1576] error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:48:18.117 controller-1 kubelet[88521]: info E1104 18:48:18.117421 88521 pod_workers.go:191] Error syncing pod 5edf03ac-2483-4c65-ba4d-f40dde7dbf65 ("mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65)"), skipping: error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:48:18.152 controller-1 dockerd[12258]: info time="2019-11-04T18:48:18.152343517Z" level=error msg="Error running exec c5e29d06daa6ffe2b7788dd76d849e4ab359a7182953bed190f8a23342b9bf88 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:48:18.152 controller-1 kubelet[88521]: info W1104 18:48:18.152736 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:48:18.982 controller-1 kubelet[88521]: info W1104 18:48:18.982238 88521 pod_container_deletor.go:75] Container "5ae4bd56889eaed2ed165c296b40108a3c92a376c13603c529043a75d7a25c04" not found in pod's containers 2019-11-04T18:48:18.989 controller-1 kubelet[88521]: info W1104 18:48:18.989623 88521 pod_container_deletor.go:75] Container 
"87665b16b98f047123d05c90df9825192e32726f104db1595052506dad0dd70f" not found in pod's containers 2019-11-04T18:48:19.000 controller-1 ntpd[87544]: info Deleting interface #18 calif772c92d8f9, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=1378 secs 2019-11-04T18:48:19.000 controller-1 ntpd[87544]: info Deleting interface #17 calibfabab83f74, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=1381 secs 2019-11-04T18:48:27.596 controller-1 dockerd[12258]: info time="2019-11-04T18:48:27.596539242Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:27.596 controller-1 dockerd[12258]: info time="2019-11-04T18:48:27.596553382Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:27.646 controller-1 dockerd[12258]: info time="2019-11-04T18:48:27.646726477Z" level=error msg="Error running exec d4cb0caca81ac8358b5b05cc803af56f8bbaf70c1908538890ba5ffa604b9d9f in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:48:27.647 controller-1 kubelet[88521]: info W1104 18:48:27.647170 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:48:27.724 controller-1 dockerd[12258]: info time="2019-11-04T18:48:27.724183787Z" level=info msg="Container 9bde4ebdc7bb failed to exit within 10 seconds of kill - trying direct SIGKILL" 2019-11-04T18:48:27.736 controller-1 dockerd[12258]: info time="2019-11-04T18:48:27.736360382Z" level=info msg="Container 395f343e30e3 failed to exit within 10 seconds of kill - trying direct SIGKILL" 2019-11-04T18:48:28.104 controller-1 dockerd[12258]: info time="2019-11-04T18:48:28.104230248Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:28.104 controller-1 dockerd[12258]: info time="2019-11-04T18:48:28.104262555Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:28.150 controller-1 dockerd[12258]: info time="2019-11-04T18:48:28.150917669Z" level=error msg="Error running exec c91bbcaba2ae2e25f57398145cc9d319479993a36628466e67318517aaddd9e3 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:48:28.151 controller-1 kubelet[88521]: info W1104 18:48:28.151368 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:48:37.596 controller-1 dockerd[12258]: info time="2019-11-04T18:48:37.596763471Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:37.596 controller-1 dockerd[12258]: info time="2019-11-04T18:48:37.596763786Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:37.647 controller-1 dockerd[12258]: info time="2019-11-04T18:48:37.647112074Z" level=error msg="Error running exec 08720e7512f01bacea9f361faeab952b5f17e47a570a29b8042363671c53199f in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:48:37.647 controller-1 kubelet[88521]: info W1104 18:48:37.647710 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" 
(mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:48:38.104 controller-1 dockerd[12258]: info time="2019-11-04T18:48:38.104297089Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:38.104 controller-1 dockerd[12258]: info time="2019-11-04T18:48:38.104298688Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:38.156 controller-1 dockerd[12258]: info time="2019-11-04T18:48:38.156409473Z" level=error msg="Error running exec 1c5eb563345732ae2be54201c98d0255b4ba0498bc669f1babcde8a398630b62 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:48:38.156 controller-1 kubelet[88521]: info W1104 18:48:38.156910 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:48:47.598 controller-1 dockerd[12258]: info time="2019-11-04T18:48:47.598272758Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:47.598 controller-1 dockerd[12258]: info time="2019-11-04T18:48:47.598272889Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:47.648 controller-1 dockerd[12258]: info time="2019-11-04T18:48:47.648437508Z" level=error msg="Error running exec 360542009a803d88ad9647261a3a8f5096c089f83941c98669b11dbf6e3232bb in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:48:47.649 controller-1 kubelet[88521]: info W1104 18:48:47.648965 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:48:48.108 controller-1 dockerd[12258]: info time="2019-11-04T18:48:48.107971986Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:48.108 controller-1 dockerd[12258]: info time="2019-11-04T18:48:48.107987970Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:48.159 controller-1 dockerd[12258]: info time="2019-11-04T18:48:48.159631612Z" level=error msg="Error running exec 6fb0ba195d04e46778e689a1b41b412f8c58e161b343399fbd4676335ffe994a in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:48:48.160 controller-1 kubelet[88521]: info W1104 18:48:48.160113 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:48:53.000 controller-1 nslcd[84484]: warning [a88611] ldap_search_ext() failed: Can't contact LDAP server: Connection reset by peer 2019-11-04T18:48:53.000 controller-1 nslcd[84484]: warning [a88611] no available LDAP server found, sleeping 1 seconds 2019-11-04T18:48:54.000 controller-1 nslcd[84484]: info [a88611] connected to LDAP server ldap://controller 2019-11-04T18:48:57.593 controller-1 dockerd[12258]: info time="2019-11-04T18:48:57.593601707Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:57.593 controller-1 dockerd[12258]: info time="2019-11-04T18:48:57.593665627Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:57.641 controller-1 dockerd[12258]: 
info time="2019-11-04T18:48:57.641033841Z" level=error msg="Error running exec 1634ccd97eaa261024cca94acd5607d0ef051e857a3921b79b5f75fee92d0f1c in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:48:57.641 controller-1 kubelet[88521]: info W1104 18:48:57.641537 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:48:58.103 controller-1 dockerd[12258]: info time="2019-11-04T18:48:58.103017126Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:58.103 controller-1 dockerd[12258]: info time="2019-11-04T18:48:58.103037662Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:48:58.148 controller-1 dockerd[12258]: info time="2019-11-04T18:48:58.148229207Z" level=error msg="Error running exec 9d406a2879c0739a409a9659763a4bbaba4f8aea2059a1db51c2c791eb5671b9 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:48:58.148 controller-1 kubelet[88521]: info W1104 18:48:58.148631 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:49:07.596 controller-1 dockerd[12258]: info time="2019-11-04T18:49:07.596743453Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:07.596 controller-1 dockerd[12258]: info time="2019-11-04T18:49:07.596771761Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:07.649 controller-1 dockerd[12258]: info time="2019-11-04T18:49:07.649071879Z" level=error msg="Error running exec c62efb7544d6dcd5775e9488417480da3c20fed57f4c230c0e1a09dce5f0d894 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:49:07.649 controller-1 kubelet[88521]: info W1104 18:49:07.649650 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:49:08.108 controller-1 dockerd[12258]: info time="2019-11-04T18:49:08.108700383Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:08.108 controller-1 dockerd[12258]: info time="2019-11-04T18:49:08.108718672Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:08.161 controller-1 dockerd[12258]: info time="2019-11-04T18:49:08.161611457Z" level=error msg="Error running exec 76ac07ccfd1cb835f3d247276b15626970c83cde686c63ec215ad327ace3d220 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:49:08.162 controller-1 kubelet[88521]: info W1104 18:49:08.162044 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:49:17.591 controller-1 dockerd[12258]: info time="2019-11-04T18:49:17.591715505Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:17.591 controller-1 dockerd[12258]: info time="2019-11-04T18:49:17.591715134Z" level=error msg="stream copy error: reading 
from a closed fifo" 2019-11-04T18:49:17.639 controller-1 dockerd[12258]: info time="2019-11-04T18:49:17.639127274Z" level=error msg="Error running exec 92b96686c78bf0e27a130f82240ae0bc0b0f319dcad9bdd53353fb724c4875d5 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:49:17.639 controller-1 kubelet[88521]: info W1104 18:49:17.639642 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:49:18.102 controller-1 dockerd[12258]: info time="2019-11-04T18:49:18.102642337Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:18.102 controller-1 dockerd[12258]: info time="2019-11-04T18:49:18.102646186Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:18.151 controller-1 dockerd[12258]: info time="2019-11-04T18:49:18.151199290Z" level=error msg="Error running exec 9ceb42d668623f327765a368a1e5913b094023d96eba5f578d7896d8aaac4dd1 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:49:18.151 controller-1 kubelet[88521]: info W1104 18:49:18.151818 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239249 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cnibin" (UniqueName: "kubernetes.io/host-path/e88b6292-68ed-43ed-be3e-4667434abb79-cnibin") pod "kube-sriov-cni-ds-amd64-hwc5l" (UID: "e88b6292-68ed-43ed-be3e-4667434abb79") 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239283 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "data" (UniqueName: "kubernetes.io/empty-dir/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-data") pod "mon-logstash-0" (UID: "bec75c2c-6de0-4ac4-8746-8cc48dc32f82") 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239316 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pipeline" (UniqueName: "kubernetes.io/configmap/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-pipeline") pod "mon-logstash-0" (UID: "bec75c2c-6de0-4ac4-8746-8cc48dc32f82") 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239387 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-default-token-88gsr") pod "mon-logstash-0" (UID: "bec75c2c-6de0-4ac4-8746-8cc48dc32f82") 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239422 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "rbd-provisioner-token-587hn" (UniqueName: "kubernetes.io/secret/49ec1a28-7e5c-411e-a43c-3ae7187d0955-rbd-provisioner-token-587hn") pod "rbd-provisioner-7484d49cf6-gw4fs" (UID: "49ec1a28-7e5c-411e-a43c-3ae7187d0955") 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239473 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "files" (UniqueName: "kubernetes.io/configmap/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-files") pod 
"mon-logstash-0" (UID: "bec75c2c-6de0-4ac4-8746-8cc48dc32f82") 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239563 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "multus-token-dtj6m" (UniqueName: "kubernetes.io/secret/59a873c6-47e3-4d1f-91dc-44d027d29903-multus-token-dtj6m") pod "kube-multus-ds-amd64-l97hp" (UID: "59a873c6-47e3-4d1f-91dc-44d027d29903") 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239591 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "multus-cfg" (UniqueName: "kubernetes.io/configmap/59a873c6-47e3-4d1f-91dc-44d027d29903-multus-cfg") pod "kube-multus-ds-amd64-l97hp" (UID: "59a873c6-47e3-4d1f-91dc-44d027d29903") 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239644 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-jxtxx" (UniqueName: "kubernetes.io/secret/e88b6292-68ed-43ed-be3e-4667434abb79-default-token-jxtxx") pod "kube-sriov-cni-ds-amd64-hwc5l" (UID: "e88b6292-68ed-43ed-be3e-4667434abb79") 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239672 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "mon-nginx-ingress-token-dgbmq" (UniqueName: "kubernetes.io/secret/5f94b1d5-a0a0-4c81-926a-07d23af72b93-mon-nginx-ingress-token-dgbmq") pod "mon-nginx-ingress-controller-kgq85" (UID: "5f94b1d5-a0a0-4c81-926a-07d23af72b93") 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239695 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni" (UniqueName: "kubernetes.io/host-path/59a873c6-47e3-4d1f-91dc-44d027d29903-cni") pod "kube-multus-ds-amd64-l97hp" (UID: "59a873c6-47e3-4d1f-91dc-44d027d29903") 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239712 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "patterns" (UniqueName: "kubernetes.io/configmap/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-patterns") pod "mon-logstash-0" (UID: "bec75c2c-6de0-4ac4-8746-8cc48dc32f82") 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239750 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/5caace26-dd42-45da-8273-1a4ea4e95a86-default-token-88gsr") pod "mon-elasticsearch-client-1" (UID: "5caace26-dd42-45da-8273-1a4ea4e95a86") 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239786 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cnibin" (UniqueName: "kubernetes.io/host-path/59a873c6-47e3-4d1f-91dc-44d027d29903-cnibin") pod "kube-multus-ds-amd64-l97hp" (UID: "59a873c6-47e3-4d1f-91dc-44d027d29903") 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239806 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b58145a4-0299-407b-8902-4780e9a7b778-config-volume") pod "coredns-6bc668cd76-crh8t" (UID: "b58145a4-0299-407b-8902-4780e9a7b778") 2019-11-04T18:49:18.239 controller-1 kubelet[88521]: info I1104 18:49:18.239825 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-x97rb" (UniqueName: 
"kubernetes.io/secret/b58145a4-0299-407b-8902-4780e9a7b778-coredns-token-x97rb") pod "coredns-6bc668cd76-crh8t" (UID: "b58145a4-0299-407b-8902-4780e9a7b778") 2019-11-04T18:49:18.359 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/bec75c2c-6de0-4ac4-8746-8cc48dc32f82/volumes/kubernetes.io~secret/default-token-88gsr. 2019-11-04T18:49:18.372 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b58145a4-0299-407b-8902-4780e9a7b778/volumes/kubernetes.io~secret/coredns-token-x97rb. 2019-11-04T18:49:18.387 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/49ec1a28-7e5c-411e-a43c-3ae7187d0955/volumes/kubernetes.io~secret/rbd-provisioner-token-587hn. 2019-11-04T18:49:18.397 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/5caace26-dd42-45da-8273-1a4ea4e95a86/volumes/kubernetes.io~secret/default-token-88gsr. 2019-11-04T18:49:18.412 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/e88b6292-68ed-43ed-be3e-4667434abb79/volumes/kubernetes.io~secret/default-token-jxtxx. 2019-11-04T18:49:18.542 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/59a873c6-47e3-4d1f-91dc-44d027d29903/volumes/kubernetes.io~secret/multus-token-dtj6m. 2019-11-04T18:49:18.545 controller-1 dockerd[12258]: info time="2019-11-04T18:49:18.545763310Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T18:49:18.552 controller-1 containerd[12218]: info time="2019-11-04T18:49:18.552431191Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967/shim.sock" debug=false pid=322391 2019-11-04T18:49:18.563 controller-1 dockerd[12258]: info time="2019-11-04T18:49:18.563572688Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T18:49:18.569 controller-1 containerd[12218]: info time="2019-11-04T18:49:18.569501475Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351/shim.sock" debug=false pid=322406 2019-11-04T18:49:18.570 controller-1 containerd[12218]: info time="2019-11-04T18:49:18.570366247Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6974fb1e83243b99ccd185e85e1e1c0317f2f805518750ac24a94fc3b8014e9a/shim.sock" debug=false pid=322411 2019-11-04T18:49:18.575 controller-1 dockerd[12258]: info time="2019-11-04T18:49:18.575330464Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. 
Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T18:49:18.580 controller-1 containerd[12218]: info time="2019-11-04T18:49:18.580334895Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e/shim.sock" debug=false pid=322436 2019-11-04T18:49:18.590 controller-1 containerd[12218]: info time="2019-11-04T18:49:18.590823563Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ab3318ebfde3c69123cd381aa7327e78543e0da02abfedff62e22edd9496de8c/shim.sock" debug=false pid=322454 2019-11-04T18:49:18.844 controller-1 containerd[12218]: info time="2019-11-04T18:49:18.844896090Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8cb6835371d61133bd9872f49ffc38cd6d57a909ba2acdea16262f951fb5e9f4/shim.sock" debug=false pid=322655 2019-11-04T18:49:18.888 controller-1 containerd[12218]: info time="2019-11-04T18:49:18.888391244Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9146134a9f94357aac34c85b725b6a0025f5e4c8b5dd27c12968626cc4b9c5ce/shim.sock" debug=false pid=322723 2019-11-04T18:49:18.888 controller-1 containerd[12218]: info time="2019-11-04T18:49:18.888606570Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/04ffbb0b8469ff115606dbecd55963639c00d56f9b22e28e3631b033a70f01db/shim.sock" debug=false pid=322725 2019-11-04T18:49:18.946 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/5f94b1d5-a0a0-4c81-926a-07d23af72b93/volumes/kubernetes.io~secret/mon-nginx-ingress-token-dgbmq. 2019-11-04T18:49:19.103 controller-1 containerd[12218]: info time="2019-11-04T18:49:19.103682324Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a69f9ca15369645a5c2bc3dfea879321c92bdc066a98c1ef6ac02b4b5e776e88/shim.sock" debug=false pid=322893 2019-11-04T18:49:19.160 controller-1 dockerd[12258]: info time="2019-11-04T18:49:19.160166286Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. 
Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T18:49:19.167 controller-1 containerd[12218]: info time="2019-11-04T18:49:19.167127531Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751/shim.sock" debug=false pid=322949 2019-11-04T18:49:24.585 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.585 [INFO][323800] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"kube-system", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"coredns-6bc668cd76-crh8t", ContainerID:"6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351"}} 2019-11-04T18:49:24.599 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.599 [INFO][323822] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"kube-system", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"rbd-provisioner-7484d49cf6-gw4fs", ContainerID:"6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e"}} 2019-11-04T18:49:24.600 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.600 [INFO][323837] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"monitor", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"mon-elasticsearch-client-1", ContainerID:"0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967"}} 2019-11-04T18:49:24.601 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.601 [INFO][323800] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-coredns--6bc668cd76--crh8t-eth0 coredns-6bc668cd76- kube-system b58145a4-0299-407b-8902-4780e9a7b778 8147502 0 2019-11-04 18:46:17 +0000 UTC map[projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns k8s-app:kube-dns pod-template-hash:6bc668cd76 projectcalico.org/namespace:kube-system] map[] [] nil [] } {k8s controller-1 coredns-6bc668cd76-crh8t eth0 [] [] [kns.kube-system ksa.kube-system.coredns] calidc068a6b120 [{dns UDP 53} {dns-tcp TCP 53} {metrics TCP 9153}]}} ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Namespace="kube-system" Pod="coredns-6bc668cd76-crh8t" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--crh8t-" 2019-11-04T18:49:24.601 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.601 [INFO][323800] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Namespace="kube-system" Pod="coredns-6bc668cd76-crh8t" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" 2019-11-04T18:49:24.605 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.605 [INFO][323800] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:kube-system,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/kube-system,UID:5d016a6c-19e8-4b97-88a9-b6113a3cb736,ResourceVersion:5,Generation:0,CreationTimestamp:2019-10-25 15:09:05 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T18:49:24.607 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.607 [INFO][323800] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:coredns-6bc668cd76-crh8t,GenerateName:coredns-6bc668cd76-,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/coredns-6bc668cd76-crh8t,UID:b58145a4-0299-407b-8902-4780e9a7b778,ResourceVersion:8147502,Generation:0,CreationTimestamp:2019-11-04 18:46:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{k8s-app: kube-dns,pod-template-hash: 6bc668cd76,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet coredns-6bc668cd76 a5d8df09-9b63-4615-a0e9-5f4c684232cb 0xc0003f80c7 0xc0003f80c8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{config-volume {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:coredns,},Items:[{Corefile Corefile }],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil}} {coredns-token-x97rb {nil nil nil nil nil &SecretVolumeSource{SecretName:coredns-token-x97rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{coredns registry.local:9001/k8s.gcr.io/coredns:1.6.2 [] [-conf /etc/coredns/Corefile] [{dns 0 53 UDP } {dns-tcp 0 53 TCP } {metrics 0 9153 TCP }] [] [] {map[memory:{{178257920 0} {} 170Mi BinarySI}] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{73400320 0} {} 70Mi BinarySI}]} [{config-volume true /etc/coredns } {coredns-token-x97rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:8181,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{beta.kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},ServiceAccountName:coredns,DeprecatedServiceAccount:coredns,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:nil,PodAffinity:nil,PodAntiAffinity:&PodAntiAffinity{RequiredDuringSchedulingIgnoredDuringExecution:[{LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[{k8s-app In [kube-dns]}],} [] 
kubernetes.io/hostname}],PreferredDuringSchedulingIgnoredDuringExecution:[],},},SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{CriticalAddonsOnly Exists } {node-role.kubernetes.io/master NoSchedule } {node.kubernetes.io/not-ready Exists NoExecute 0xc0003f83c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0003f83e0}],HostAliases:[],PriorityClassName:system-cluster-critical,Priority:*2000000000,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 18:49:18 +0000 UTC,ContainerStatuses:[{coredns {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 registry.local:9001/k8s.gcr.io/coredns:1.6.2 }],QOSClass:Burstable,InitContainerStatuses:[],NominatedNodeName:,},} 2019-11-04T18:49:24.613 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.613 [INFO][323822] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0 rbd-provisioner-7484d49cf6- kube-system 49ec1a28-7e5c-411e-a43c-3ae7187d0955 8147539 0 2019-11-04 18:46:17 +0000 UTC map[app:rbd-provisioner pod-template-hash:7484d49cf6 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:rbd-provisioner] map[] [] nil [] } {k8s controller-1 rbd-provisioner-7484d49cf6-gw4fs eth0 [] [] [kns.kube-system ksa.kube-system.rbd-provisioner] cali2d9e61f5bd4 []}} ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-gw4fs" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-" 2019-11-04T18:49:24.613 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.613 [INFO][323822] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-gw4fs" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" 2019-11-04T18:49:24.615 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.615 [INFO][323837] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-mon--elasticsearch--client--1-eth0 mon-elasticsearch-client- monitor 5caace26-dd42-45da-8273-1a4ea4e95a86 8147519 0 2019-11-04 18:46:24 +0000 UTC map[projectcalico.org/orchestrator:k8s app:mon-elasticsearch-client controller-revision-hash:mon-elasticsearch-client-7c64d4f4fd release:mon-elasticsearch-client projectcalico.org/namespace:monitor projectcalico.org/serviceaccount:default chart:elasticsearch heritage:Tiller statefulset.kubernetes.io/pod-name:mon-elasticsearch-client-1] map[] [] nil [] } {k8s controller-1 mon-elasticsearch-client-1 eth0 [] [] [kns.monitor ksa.monitor.default] cali7eb1b3c61b4 [{http TCP 9200} {transport TCP 9300}]}} ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Namespace="monitor" 
Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-" 2019-11-04T18:49:24.615 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.615 [INFO][323837] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Namespace="monitor" Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:49:24.617 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.617 [INFO][323822] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:kube-system,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/kube-system,UID:5d016a6c-19e8-4b97-88a9-b6113a3cb736,ResourceVersion:5,Generation:0,CreationTimestamp:2019-10-25 15:09:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T18:49:24.618 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.618 [INFO][323837] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:monitor,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/monitor,UID:85834187-55ff-4686-b97c-c3f524d37f83,ResourceVersion:46120,Generation:0,CreationTimestamp:2019-10-25 19:07:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T18:49:24.618 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.618 [INFO][323822] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rbd-provisioner-7484d49cf6-gw4fs,GenerateName:rbd-provisioner-7484d49cf6-,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/rbd-provisioner-7484d49cf6-gw4fs,UID:49ec1a28-7e5c-411e-a43c-3ae7187d0955,ResourceVersion:8147539,Generation:0,CreationTimestamp:2019-11-04 18:46:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rbd-provisioner,pod-template-hash: 7484d49cf6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet rbd-provisioner-7484d49cf6 4293aea8-b2ec-41f5-b635-07aeb9f394f9 0xc000783547 0xc000783548}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{rbd-provisioner-token-587hn {nil nil nil nil nil SecretVolumeSource{SecretName:rbd-provisioner-token-587hn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{rbd-provisioner registry.local:9001/quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 [] [] [] [] [{PROVISIONER_NAME ceph.com/rbd nil}] {map[] map[]} [{rbd-provisioner-token-587hn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: 
,},ServiceAccountName:rbd-provisioner,DeprecatedServiceAccount:rbd-provisioner,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[{default-registry-key}],Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:nil,PodAffinity:nil,PodAntiAffinity:&PodAntiAffinity{RequiredDuringSchedulingIgnoredDuringExecution:[{LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[{app In [rbd-provisioner]}],} [] kubernetes.io/hostname}],PreferredDuringSchedulingIgnoredDuringExecution:[],},},SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000783710} {node.kubernetes.io/unreachable Exists NoExecute 0xc000783730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC ContainersNotReady containers with unready status: [rbd-provisioner]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC ContainersNotReady containers with unready status: [rbd-provisioner]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 18:49:18 +0000 UTC,ContainerStatuses:[{rbd-provisioner {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 registry.local:9001/quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.619 [INFO][323837] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:mon-elasticsearch-client-1,GenerateName:mon-elasticsearch-client-,Namespace:monitor,SelfLink:/api/v1/namespaces/monitor/pods/mon-elasticsearch-client-1,UID:5caace26-dd42-45da-8273-1a4ea4e95a86,ResourceVersion:8147519,Generation:0,CreationTimestamp:2019-11-04 18:46:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: mon-elasticsearch-client,chart: elasticsearch,controller-revision-hash: mon-elasticsearch-client-7c64d4f4fd,heritage: Tiller,release: mon-elasticsearch-client,statefulset.kubernetes.io/pod-name: mon-elasticsearch-client-1,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 StatefulSet mon-elasticsearch-client 01fc0ff7-b1ba-467a-b465-ac381c76be1a 0xc0001cb5d7 0xc0001cb5d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-88gsr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-88gsr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{elasticsearch docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 [] [] [{http 0 9200 TCP } {transport 0 9300 TCP }] [] [{node.name EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {discovery.seed_hosts mon-elasticsearch-data-headless, mon-elasticsearch-master-headless nil} {cluster.name mon-elasticsearch nil} {network.host 0.0.0.0 nil} {ES_JAVA_OPTS 
-Djava.net.preferIPv6Addresses=true -Xmx1024m -Xms1024m nil} {node.data false nil} {node.ingest true nil} {node.master false nil} {DATA_PRESTOP_SLEEP 100 nil}] {map[cpu:{{1 0} {} 1 DecimalSI} memory:{{2147483648 0} {} 2Gi BinarySI}] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{2147483648 0} {} 2Gi BinarySI}]} [{default-token-88gsr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil &Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c #!/usr/bin/env bash -e 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info # If the node is starting up wait for the cluster to be ready (request params: '' ) 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info # Once it has started only check that the node itself is responding 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info START_FILE=/tmp/.es_start_file 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info http () { 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info local path="${1}" 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}" 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info else 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info BASIC_AUTH='' 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info fi 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path} 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info } 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info if [ -f "${START_FILE}" ]; then 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info echo 'Elasticsearch is already running, lets check the node is healthy' 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info http "/" 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info else 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info echo 'Waiting for elasticsearch cluster to become cluster to be ready (request params: "" )' 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info if http "/_cluster/health?" 
; then 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info touch ${START_FILE} 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info exit 0 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info else 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info echo 'Cluster is not yet ready (request params: "" )' 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info exit 1 2019-11-04T18:49:24.619 controller-1 kubelet[88521]: info fi 2019-11-04T18:49:24.620 controller-1 kubelet[88521]: info fi 2019-11-04T18:49:24.620 controller-1 kubelet[88521]: info ],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:3,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*120,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{elastic-client: enabled,},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:*1000,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:mon-elasticsearch-client-1,Subdomain:mon-elasticsearch-client-headless,Affinity:&Affinity{NodeAffinity:nil,PodAffinity:nil,PodAntiAffinity:&PodAntiAffinity{RequiredDuringSchedulingIgnoredDuringExecution:[{LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[{app In [mon-elasticsearch-client]}],} [] kubernetes.io/hostname}],PreferredDuringSchedulingIgnoredDuringExecution:[],},},SchedulerName:default-scheduler,InitContainers:[{configure-sysctl docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 [sysctl -w vm.max_map_count=262144] [] [] [] [] {map[] map[]} [{default-token-88gsr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0003d6f20} {node.kubernetes.io/unreachable Exists NoExecute 0xc0003d6f40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC ContainersNotInitialized containers with incomplete status: [configure-sysctl]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC ContainersNotReady containers with unready status: [elasticsearch]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC ContainersNotReady containers with unready status: [elasticsearch]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 18:49:18 +0000 UTC,ContainerStatuses:[{elasticsearch {ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 }],QOSClass:Burstable,InitContainerStatuses:[{configure-sysctl 
{ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 }],NominatedNodeName:,},} 2019-11-04T18:49:24.626 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.626 [INFO][323878] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" HandleID="chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Workload="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" 2019-11-04T18:49:24.635 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.635 [INFO][323878] ipam_plugin.go 220: Calico CNI IPAM handle=chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351 ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" HandleID="chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Workload="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" 2019-11-04T18:49:24.635 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.635 [INFO][323878] ipam_plugin.go 230: Auto assigning IP ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" HandleID="chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Workload="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc0001ddb70), Attrs:map[string]string{"pod":"coredns-6bc668cd76-crh8t", "namespace":"kube-system", "node":"controller-1"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T18:49:24.635 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.635 [INFO][323878] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T18:49:24.637 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.636 [INFO][323896] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" HandleID="chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" 2019-11-04T18:49:24.638 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.638 [INFO][323904] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" HandleID="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:49:24.639 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.638 [INFO][323878] ipam.go 309: Looking up existing affinities for host handle="chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" host="controller-1" 2019-11-04T18:49:24.642 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.642 [INFO][323878] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" host="controller-1" 2019-11-04T18:49:24.645 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.645 [INFO][323896] ipam_plugin.go 220: Calico CNI IPAM handle=chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" HandleID="chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" 2019-11-04T18:49:24.645 
controller-1 kubelet[88521]: info 2019-11-04 18:49:24.645 [INFO][323896] ipam_plugin.go 230: Auto assigning IP ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" HandleID="chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc0002ea560), Attrs:map[string]string{"namespace":"kube-system", "node":"controller-1", "pod":"rbd-provisioner-7484d49cf6-gw4fs"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T18:49:24.645 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.645 [INFO][323896] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T18:49:24.645 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.645 [INFO][323878] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T18:49:24.647 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.647 [INFO][323904] ipam_plugin.go 220: Calico CNI IPAM handle=chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967 ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" HandleID="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:49:24.647 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.647 [INFO][323904] ipam_plugin.go 230: Auto assigning IP ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" HandleID="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc00035e2f0), Attrs:map[string]string{"node":"controller-1", "pod":"mon-elasticsearch-client-1", "namespace":"monitor"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T18:49:24.647 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.647 [INFO][323904] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T18:49:24.647 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.647 [INFO][323878] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T18:49:24.647 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.647 [INFO][323878] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" host="controller-1" 2019-11-04T18:49:24.648 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.648 [INFO][323878] ipam.go 1244: Creating new handle: chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351 2019-11-04T18:49:24.648 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.648 [INFO][323896] ipam.go 309: Looking up existing affinities for host handle="chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" host="controller-1" 2019-11-04T18:49:24.651 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.651 [INFO][323904] ipam.go 309: Looking up existing affinities for host handle="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" host="controller-1" 2019-11-04T18:49:24.652 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.652 [INFO][323878] 
ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" host="controller-1" 2019-11-04T18:49:24.652 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.652 [INFO][323896] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" host="controller-1" 2019-11-04T18:49:24.654 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.654 [INFO][323896] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T18:49:24.655 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.654 [INFO][323904] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" host="controller-1" 2019-11-04T18:49:24.660 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.660 [INFO][323878] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e30a/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" host="controller-1" 2019-11-04T18:49:24.660 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.660 [INFO][323878] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e30a/122] handle="chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" host="controller-1" 2019-11-04T18:49:24.660 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.660 [INFO][323904] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T18:49:24.660 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.660 [INFO][323896] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T18:49:24.660 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.660 [INFO][323896] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" host="controller-1" 2019-11-04T18:49:24.661 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.660 [INFO][323878] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e30a/122] handle="chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" host="controller-1" 2019-11-04T18:49:24.661 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.660 [INFO][323878] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e30a/122] ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" HandleID="chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Workload="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" 2019-11-04T18:49:24.661 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.661 [INFO][323878] ipam_plugin.go 258: IPAM Result ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" HandleID="chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Workload="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc00014e600)} 2019-11-04T18:49:24.661 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.661 [INFO][323896] ipam.go 1244: Creating new handle: 
chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e 2019-11-04T18:49:24.662 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.662 [INFO][323800] k8s.go 361: Populated endpoint ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Namespace="kube-system" Pod="coredns-6bc668cd76-crh8t" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-coredns--6bc668cd76--crh8t-eth0", GenerateName:"coredns-6bc668cd76-", Namespace:"kube-system", SelfLink:"", UID:"b58145a4-0299-407b-8902-4780e9a7b778", ResourceVersion:"8147502", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489977, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns", "k8s-app":"kube-dns", "pod-template-hash":"6bc668cd76"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"coredns-6bc668cd76-crh8t", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e30a/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc068a6b120", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}} 2019-11-04T18:49:24.662 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.662 [INFO][323800] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e30a/128] ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Namespace="kube-system" Pod="coredns-6bc668cd76-crh8t" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" 2019-11-04T18:49:24.662 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.662 [INFO][323800] network_linux.go 76: Setting the host side veth name to calidc068a6b120 ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Namespace="kube-system" Pod="coredns-6bc668cd76-crh8t" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" 2019-11-04T18:49:24.663 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.663 [INFO][323904] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T18:49:24.663 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.663 [INFO][323904] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" host="controller-1" 2019-11-04T18:49:24.663 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.663 [INFO][323896] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" 
host="controller-1" 2019-11-04T18:49:24.664 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.664 [INFO][323904] ipam.go 1244: Creating new handle: chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967 2019-11-04T18:49:24.665 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.665 [INFO][323800] network_linux.go 411: Disabling IPv6 forwarding ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Namespace="kube-system" Pod="coredns-6bc668cd76-crh8t" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" 2019-11-04T18:49:24.666 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.666 [INFO][323896] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e307/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" host="controller-1" 2019-11-04T18:49:24.666 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.666 [INFO][323896] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e307/122] handle="chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" host="controller-1" 2019-11-04T18:49:24.667 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.667 [INFO][323896] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e307/122] handle="chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" host="controller-1" 2019-11-04T18:49:24.667 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.667 [INFO][323896] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e307/122] ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" HandleID="chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" 2019-11-04T18:49:24.667 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.667 [INFO][323896] ipam_plugin.go 258: IPAM Result ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" HandleID="chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc00042a180)} 2019-11-04T18:49:24.667 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.667 [INFO][323904] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" host="controller-1" 2019-11-04T18:49:24.668 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.668 [INFO][323822] k8s.go 361: Populated endpoint ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-gw4fs" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0", GenerateName:"rbd-provisioner-7484d49cf6-", Namespace:"kube-system", SelfLink:"", UID:"49ec1a28-7e5c-411e-a43c-3ae7187d0955", ResourceVersion:"8147539", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489977, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"rbd-provisioner", "pod-template-hash":"7484d49cf6", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"rbd-provisioner"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"rbd-provisioner-7484d49cf6-gw4fs", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e307/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.rbd-provisioner"}, InterfaceName:"cali2d9e61f5bd4", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T18:49:24.668 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.668 [INFO][323822] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e307/128] ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-gw4fs" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" 2019-11-04T18:49:24.668 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.668 [INFO][323822] network_linux.go 76: Setting the host side veth name to cali2d9e61f5bd4 ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-gw4fs" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" 2019-11-04T18:49:24.669 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.668 [ERROR][323904] customresource.go 136: Error updating resource Key=IPAMBlock(fd00-206--a4ce-fec1-5423-e300-122) Name="fd00-206--a4ce-fec1-5423-e300-122" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"fd00-206--a4ce-fec1-5423-e300-122", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"8147602", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.IPAMBlockSpec{CIDR:"fd00:206::a4ce:fec1:5423:e300/122", Affinity:(*string)(0xc000716590), StrictAffinity:false, Allocations:[]*int{(*int)(0xc000572298), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(0xc0005723a8), (*int)(nil), (*int)(nil), (*int)(0xc0005722a0), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{46, 44, 32, 18, 33, 57, 41, 29, 35, 37, 56, 54, 2, 5, 15, 21, 16, 1, 48, 50, 38, 60, 3, 24, 
17, 45, 9, 31, 49, 58, 8, 34, 12, 23, 59, 42, 14, 11, 61, 25, 4, 28, 36, 22, 62, 39, 40, 52, 30, 51, 55, 27, 20, 26, 53, 13, 6, 19, 43, 47, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc0007165c0), AttrSecondary:map[string]string{"node":"controller-1", "pod":"mon-filebeat-bppwv", "namespace":"monitor"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc000716630), AttrSecondary:map[string]string{"pod":"coredns-6bc668cd76-crh8t", "namespace":"kube-system", "node":"controller-1"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc00035e2f0), AttrSecondary:map[string]string{"node":"controller-1", "pod":"mon-elasticsearch-client-1", "namespace":"monitor"}}}, Deleted:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "fd00-206--a4ce-fec1-5423-e300-122": the object has been modified; please apply your changes to the latest version and try again 2019-11-04T18:49:24.669 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.669 [INFO][323904] ipam.go 816: Failed to update block block=fd00:206::a4ce:fec1:5423:e300/122 error=update conflict: IPAMBlock(fd00-206--a4ce-fec1-5423-e300-122) handle="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" host="controller-1" 2019-11-04T18:49:24.674 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.674 [INFO][323904] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" host="controller-1" 2019-11-04T18:49:24.675 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.675 [INFO][323904] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T18:49:24.676 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.676 [INFO][323904] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T18:49:24.676 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.676 [INFO][323904] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" host="controller-1" 2019-11-04T18:49:24.677 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.677 [INFO][323904] ipam.go 1244: Creating new handle: chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967 2019-11-04T18:49:24.679 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.679 [INFO][323904] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" host="controller-1" 2019-11-04T18:49:24.681 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.681 [INFO][323904] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e32e/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" host="controller-1" 2019-11-04T18:49:24.681 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.681 [INFO][323904] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e32e/122] handle="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" host="controller-1" 2019-11-04T18:49:24.682 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.682 [INFO][323904] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e32e/122] 
handle="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" host="controller-1" 2019-11-04T18:49:24.682 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.682 [INFO][323904] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e32e/122] ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" HandleID="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:49:24.682 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.682 [INFO][323904] ipam_plugin.go 258: IPAM Result ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" HandleID="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc000364c00)} 2019-11-04T18:49:24.683 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.683 [INFO][323837] k8s.go 361: Populated endpoint ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Namespace="monitor" Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--elasticsearch--client--1-eth0", GenerateName:"mon-elasticsearch-client-", Namespace:"monitor", SelfLink:"", UID:"5caace26-dd42-45da-8273-1a4ea4e95a86", ResourceVersion:"8147519", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489984, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"chart":"elasticsearch", "controller-revision-hash":"mon-elasticsearch-client-7c64d4f4fd", "heritage":"Tiller", "release":"mon-elasticsearch-client", "projectcalico.org/namespace":"monitor", "app":"mon-elasticsearch-client", "statefulset.kubernetes.io/pod-name":"mon-elasticsearch-client-1", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-elasticsearch-client-1", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e32e/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"cali7eb1b3c61b4", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23f0}, v3.EndpointPort{Name:"transport", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x2454}}}} 2019-11-04T18:49:24.683 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.683 [INFO][323837] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e32e/128] ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Namespace="monitor" Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:49:24.683 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.683 [INFO][323837] network_linux.go 76: Setting the host side veth name to cali7eb1b3c61b4 
ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Namespace="monitor" Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:49:24.711 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.711 [INFO][323800] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Namespace="kube-system" Pod="coredns-6bc668cd76-crh8t" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-coredns--6bc668cd76--crh8t-eth0", GenerateName:"coredns-6bc668cd76-", Namespace:"kube-system", SelfLink:"", UID:"b58145a4-0299-407b-8902-4780e9a7b778", ResourceVersion:"8147502", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489977, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/serviceaccount":"coredns", "k8s-app":"kube-dns", "pod-template-hash":"6bc668cd76", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351", Pod:"coredns-6bc668cd76-crh8t", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e30a/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc068a6b120", MAC:"42:10:62:59:67:cf", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}} 2019-11-04T18:49:24.714 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.714 [INFO][323822] network_linux.go 411: Disabling IPv6 forwarding ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-gw4fs" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" 2019-11-04T18:49:24.715 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.715 [INFO][323800] k8s.go 420: Wrote updated endpoint to datastore ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Namespace="kube-system" Pod="coredns-6bc668cd76-crh8t" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" 2019-11-04T18:49:24.754 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.753 [INFO][323822] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-gw4fs" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0", GenerateName:"rbd-provisioner-7484d49cf6-", Namespace:"kube-system", SelfLink:"", UID:"49ec1a28-7e5c-411e-a43c-3ae7187d0955", ResourceVersion:"8147539", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489977, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"rbd-provisioner", "pod-template-hash":"7484d49cf6", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"rbd-provisioner"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e", Pod:"rbd-provisioner-7484d49cf6-gw4fs", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e307/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.rbd-provisioner"}, InterfaceName:"cali2d9e61f5bd4", MAC:"16:d2:ca:52:54:9a", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T18:49:24.754 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.754 [INFO][323837] network_linux.go 411: Disabling IPv6 forwarding ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Namespace="monitor" Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:49:24.757 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.757 [INFO][323822] k8s.go 420: Wrote updated endpoint to datastore ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-gw4fs" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" 2019-11-04T18:49:24.796 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.796 [INFO][323837] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Namespace="monitor" Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--elasticsearch--client--1-eth0", GenerateName:"mon-elasticsearch-client-", Namespace:"monitor", SelfLink:"", UID:"5caace26-dd42-45da-8273-1a4ea4e95a86", ResourceVersion:"8147519", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489984, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"chart":"elasticsearch", "controller-revision-hash":"mon-elasticsearch-client-7c64d4f4fd", "heritage":"Tiller", "release":"mon-elasticsearch-client", "projectcalico.org/namespace":"monitor", "app":"mon-elasticsearch-client", "statefulset.kubernetes.io/pod-name":"mon-elasticsearch-client-1", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967", Pod:"mon-elasticsearch-client-1", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e32e/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"cali7eb1b3c61b4", MAC:"56:d8:e6:48:de:87", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23f0}, v3.EndpointPort{Name:"transport", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x2454}}}} 2019-11-04T18:49:24.800 controller-1 kubelet[88521]: info 2019-11-04 18:49:24.800 [INFO][323837] k8s.go 420: Wrote updated endpoint to datastore ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Namespace="monitor" Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:49:24.831 controller-1 containerd[12218]: info time="2019-11-04T18:49:24.831594781Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8523b570e1317123fbbbeb96f7b8729666b1fde3cdc20568572a2686c415d9a6/shim.sock" debug=false pid=324071 2019-11-04T18:49:24.840 controller-1 containerd[12218]: info time="2019-11-04T18:49:24.840697794Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bce1295bb935f90419c8343848ce68e30484e8bed7fcc47405a652f2542a12cb/shim.sock" debug=false pid=324087 2019-11-04T18:49:24.884 controller-1 containerd[12218]: info time="2019-11-04T18:49:24.884005190Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/15a19b89ae9a2282f17851a961067c9f61a578c6bc8482e03202ed80d0d162c4/shim.sock" debug=false pid=324125 2019-11-04T18:49:25.093 controller-1 containerd[12218]: info time="2019-11-04T18:49:25.093164964Z" level=info msg="shim reaped" id=15a19b89ae9a2282f17851a961067c9f61a578c6bc8482e03202ed80d0d162c4 2019-11-04T18:49:25.098 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.098 [INFO][324257] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"monitor", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"mon-nginx-ingress-controller-kgq85", ContainerID:"ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751"}} 2019-11-04T18:49:25.103 controller-1 dockerd[12258]: info time="2019-11-04T18:49:25.103124529Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:49:25.113 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.113 [INFO][324257] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0 mon-nginx-ingress-controller- monitor 5f94b1d5-a0a0-4c81-926a-07d23af72b93 8147547 0 2019-11-04 18:49:18 +0000 UTC map[app:nginx-ingress component:controller controller-revision-hash:866b74fd9d pod-template-generation:1 release:mon-nginx-ingress projectcalico.org/namespace:monitor projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:mon-nginx-ingress] map[] [] nil [] } {k8s controller-1 mon-nginx-ingress-controller-kgq85 eth0 [] [] [kns.monitor ksa.monitor.mon-nginx-ingress] calidf007395be0 [{http TCP 80} {https TCP 443}]}} 
ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Namespace="monitor" Pod="mon-nginx-ingress-controller-kgq85" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--kgq85-" 2019-11-04T18:49:25.113 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.113 [INFO][324257] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Namespace="monitor" Pod="mon-nginx-ingress-controller-kgq85" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" 2019-11-04T18:49:25.116 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.116 [INFO][324257] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:monitor,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/monitor,UID:85834187-55ff-4686-b97c-c3f524d37f83,ResourceVersion:46120,Generation:0,CreationTimestamp:2019-10-25 19:07:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T18:49:25.118 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.118 [INFO][324257] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:mon-nginx-ingress-controller-kgq85,GenerateName:mon-nginx-ingress-controller-,Namespace:monitor,SelfLink:/api/v1/namespaces/monitor/pods/mon-nginx-ingress-controller-kgq85,UID:5f94b1d5-a0a0-4c81-926a-07d23af72b93,ResourceVersion:8147547,Generation:0,CreationTimestamp:2019-11-04 18:49:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: nginx-ingress,component: controller,controller-revision-hash: 866b74fd9d,pod-template-generation: 1,release: mon-nginx-ingress,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 DaemonSet mon-nginx-ingress-controller 4f92bc9f-671a-423c-ae73-f67862da850c 0xc00078db00 0xc00078db01}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{mon-nginx-ingress-token-dgbmq {nil nil nil nil nil SecretVolumeSource{SecretName:mon-nginx-ingress-token-dgbmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx-ingress-controller quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 [] [/nginx-ingress-controller --default-backend-service=monitor/mon-nginx-ingress-default-backend --election-id=ingress-controller-leader --ingress-class=nginx --configmap=monitor/mon-nginx-ingress-controller --watch-namespace=monitor] [{http 0 80 TCP } {https 0 443 TCP }] [] [{POD_NAME EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_NAMESPACE &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] {map[cpu:{{200 -3} {} 200m DecimalSI} memory:{{268435456 0} {} BinarySI}] map[cpu:{{200 -3} {} 200m DecimalSI} memory:{{268435456 0} {} BinarySI}]} [{mon-nginx-ingress-token-dgbmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] 
&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10254,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10254,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*33,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*60,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{elastic-controller: enabled,},ServiceAccountName:mon-nginx-ingress,DeprecatedServiceAccount:mon-nginx-ingress,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[{[] [{metadata.name In [controller-1]}]}],},PreferredDuringSchedulingIgnoredDuringExecution:[],},PodAffinity:nil,PodAntiAffinity:nil,},SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute } {node.kubernetes.io/unreachable Exists NoExecute } {node.kubernetes.io/disk-pressure Exists NoSchedule } {node.kubernetes.io/memory-pressure Exists NoSchedule } {node.kubernetes.io/pid-pressure Exists NoSchedule } {node.kubernetes.io/unschedulable Exists NoSchedule }],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC ContainersNotReady containers with unready status: [nginx-ingress-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC ContainersNotReady containers with unready status: [nginx-ingress-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:49:18 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 18:49:18 +0000 UTC,ContainerStatuses:[{nginx-ingress-controller {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 }],QOSClass:Guaranteed,InitContainerStatuses:[],NominatedNodeName:,},} 2019-11-04T18:49:25.137 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.137 [INFO][324282] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" HandleID="chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Workload="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" 2019-11-04T18:49:25.145 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.145 [INFO][324282] ipam_plugin.go 220: Calico CNI IPAM handle=chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751 
ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" HandleID="chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Workload="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" 2019-11-04T18:49:25.145 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.145 [INFO][324282] ipam_plugin.go 230: Auto assigning IP ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" HandleID="chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Workload="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc0002de560), Attrs:map[string]string{"node":"controller-1", "pod":"mon-nginx-ingress-controller-kgq85", "namespace":"monitor"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T18:49:25.145 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.145 [INFO][324282] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T18:49:25.149 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.149 [INFO][324282] ipam.go 309: Looking up existing affinities for host handle="chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" host="controller-1" 2019-11-04T18:49:25.153 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.153 [INFO][324282] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" host="controller-1" 2019-11-04T18:49:25.155 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.155 [INFO][324282] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T18:49:25.157 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.157 [INFO][324282] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T18:49:25.157 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.157 [INFO][324282] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" host="controller-1" 2019-11-04T18:49:25.158 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.158 [INFO][324282] ipam.go 1244: Creating new handle: chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751 2019-11-04T18:49:25.160 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.160 [INFO][324282] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" host="controller-1" 2019-11-04T18:49:25.162 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.162 [INFO][324282] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e32c/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" host="controller-1" 2019-11-04T18:49:25.162 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.162 [INFO][324282] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e32c/122] handle="chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" host="controller-1" 2019-11-04T18:49:25.163 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.163 [INFO][324282] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: 
[fd00:206::a4ce:fec1:5423:e32c/122] handle="chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" host="controller-1" 2019-11-04T18:49:25.163 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.163 [INFO][324282] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e32c/122] ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" HandleID="chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Workload="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" 2019-11-04T18:49:25.163 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.163 [INFO][324282] ipam_plugin.go 258: IPAM Result ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" HandleID="chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Workload="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc000144240)} 2019-11-04T18:49:25.165 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.165 [INFO][324257] k8s.go 361: Populated endpoint ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Namespace="monitor" Pod="mon-nginx-ingress-controller-kgq85" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0", GenerateName:"mon-nginx-ingress-controller-", Namespace:"monitor", SelfLink:"", UID:"5f94b1d5-a0a0-4c81-926a-07d23af72b93", ResourceVersion:"8147547", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708490158, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"mon-nginx-ingress", "app":"nginx-ingress", "component":"controller", "controller-revision-hash":"866b74fd9d", "pod-template-generation":"1", "release":"mon-nginx-ingress"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-nginx-ingress-controller-kgq85", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e32c/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-nginx-ingress"}, InterfaceName:"calidf007395be0", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x50}, v3.EndpointPort{Name:"https", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1bb}}}} 2019-11-04T18:49:25.165 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.165 [INFO][324257] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e32c/128] ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Namespace="monitor" Pod="mon-nginx-ingress-controller-kgq85" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" 2019-11-04T18:49:25.165 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.165 [INFO][324257] network_linux.go 76: Setting the host side veth name to calidf007395be0 
ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Namespace="monitor" Pod="mon-nginx-ingress-controller-kgq85" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" 2019-11-04T18:49:25.168 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.168 [INFO][324257] network_linux.go 411: Disabling IPv6 forwarding ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Namespace="monitor" Pod="mon-nginx-ingress-controller-kgq85" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" 2019-11-04T18:49:25.207 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.207 [INFO][324257] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Namespace="monitor" Pod="mon-nginx-ingress-controller-kgq85" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0", GenerateName:"mon-nginx-ingress-controller-", Namespace:"monitor", SelfLink:"", UID:"5f94b1d5-a0a0-4c81-926a-07d23af72b93", ResourceVersion:"8147547", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708490158, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"controller-revision-hash":"866b74fd9d", "pod-template-generation":"1", "release":"mon-nginx-ingress", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"mon-nginx-ingress", "app":"nginx-ingress", "component":"controller"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751", Pod:"mon-nginx-ingress-controller-kgq85", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e32c/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-nginx-ingress"}, InterfaceName:"calidf007395be0", MAC:"3a:03:57:85:b5:7b", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x50}, v3.EndpointPort{Name:"https", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1bb}}}} 2019-11-04T18:49:25.212 controller-1 kubelet[88521]: info 2019-11-04 18:49:25.212 [INFO][324257] k8s.go 420: Wrote updated endpoint to datastore ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Namespace="monitor" Pod="mon-nginx-ingress-controller-kgq85" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" 2019-11-04T18:49:25.303 controller-1 containerd[12218]: info time="2019-11-04T18:49:25.303304960Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8a2f1ff5003ccb966a647d0cf3b53f78324b6ed9af44db0d7fde51297c2583af/shim.sock" debug=false pid=324346 2019-11-04T18:49:25.630 controller-1 containerd[12218]: info time="2019-11-04T18:49:25.630767721Z" level=info msg="shim containerd-shim started" 
address="/containerd-shim/moby/77c1491261b5f1edd6d0b522a1c11031a73c79a22fd00689ec0c94098696244f/shim.sock" debug=false pid=324437 2019-11-04T18:49:27.594 controller-1 dockerd[12258]: info time="2019-11-04T18:49:27.594282559Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:27.594 controller-1 dockerd[12258]: info time="2019-11-04T18:49:27.594282584Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:27.645 controller-1 dockerd[12258]: info time="2019-11-04T18:49:27.645470049Z" level=error msg="Error running exec aa24baa60a6870850a512b196779139cf0a32915eeac2a46366014445daf1b0e in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:49:27.646 controller-1 kubelet[88521]: info W1104 18:49:27.645978 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:49:28.105 controller-1 dockerd[12258]: info time="2019-11-04T18:49:28.105847709Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:28.105 controller-1 dockerd[12258]: info time="2019-11-04T18:49:28.105868438Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:28.000 controller-1 ntpd[87544]: info Listen normally on 34 calidf007395be0 fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T18:49:28.000 controller-1 ntpd[87544]: info Listen normally on 35 cali7eb1b3c61b4 fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T18:49:28.000 controller-1 ntpd[87544]: info Listen normally on 36 cali2d9e61f5bd4 fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T18:49:28.000 controller-1 ntpd[87544]: info Listen normally on 37 calidc068a6b120 fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T18:49:28.000 controller-1 ntpd[87544]: debug new interface(s) found: waking up resolver 2019-11-04T18:49:28.152 controller-1 dockerd[12258]: info time="2019-11-04T18:49:28.152261724Z" level=error msg="Error running exec 83392a9981358e258feee60504195b158b080ab491abdadfaade2b688aec9acf in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:49:28.152 controller-1 kubelet[88521]: info W1104 18:49:28.152739 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:49:37.594 controller-1 dockerd[12258]: info time="2019-11-04T18:49:37.594717932Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:37.594 controller-1 dockerd[12258]: info time="2019-11-04T18:49:37.594723838Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:37.644 controller-1 dockerd[12258]: info time="2019-11-04T18:49:37.644456105Z" level=error msg="Error running exec a7ee0c7bb951ec5d8a76e2b75291774b94d97d008f73f15868be805fab8bc9c6 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:49:37.645 controller-1 kubelet[88521]: info W1104 18:49:37.645088 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:49:38.107 controller-1 dockerd[12258]: info time="2019-11-04T18:49:38.107915931Z" 
level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:38.108 controller-1 dockerd[12258]: info time="2019-11-04T18:49:38.107915961Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:38.159 controller-1 dockerd[12258]: info time="2019-11-04T18:49:38.159771351Z" level=error msg="Error running exec e307ebfc0fd5b02d0042bbcf42e931bed335b5d9b1307713a9579d9623f0fc6b in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:49:38.160 controller-1 kubelet[88521]: info W1104 18:49:38.160364 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:49:47.594 controller-1 dockerd[12258]: info time="2019-11-04T18:49:47.594382652Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:47.594 controller-1 dockerd[12258]: info time="2019-11-04T18:49:47.594388141Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:47.643 controller-1 dockerd[12258]: info time="2019-11-04T18:49:47.643248093Z" level=error msg="Error running exec d46a7e5a3368dd51ab1cfd0c64d4dfe8cac3c8c1cbc388c031e1907ea564ef4a in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:49:47.643 controller-1 kubelet[88521]: info W1104 18:49:47.643777 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:49:48.104 controller-1 dockerd[12258]: info time="2019-11-04T18:49:48.104024440Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:48.104 controller-1 dockerd[12258]: info time="2019-11-04T18:49:48.104031908Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:48.150 controller-1 dockerd[12258]: info time="2019-11-04T18:49:48.150918147Z" level=error msg="Error running exec 99d9943c477c313d3e3550025123219e1cba39db8c2f3f99e542fd5ae6895a31 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:49:48.151 controller-1 kubelet[88521]: info W1104 18:49:48.151529 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:49:55.094 controller-1 containerd[12218]: info time="2019-11-04T18:49:55.094491203Z" level=info msg="shim reaped" id=bce1295bb935f90419c8343848ce68e30484e8bed7fcc47405a652f2542a12cb 2019-11-04T18:49:55.104 controller-1 dockerd[12258]: info time="2019-11-04T18:49:55.104531155Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:49:55.978 controller-1 containerd[12218]: info time="2019-11-04T18:49:55.978462365Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b9beffd175aa3535ea3b481093da1cd301904b67c6d68a6fea36b45afc4a3607/shim.sock" debug=false pid=329106 2019-11-04T18:49:57.594 controller-1 dockerd[12258]: info time="2019-11-04T18:49:57.594445379Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:57.594 controller-1 dockerd[12258]: info 
time="2019-11-04T18:49:57.594457950Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:57.642 controller-1 dockerd[12258]: info time="2019-11-04T18:49:57.642661106Z" level=error msg="Error running exec 3137ff30e5a3309da4f08a5e8f97d8aa99e5cc281806170cf4a699b68d9b6f05 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:49:57.643 controller-1 kubelet[88521]: info W1104 18:49:57.643070 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:49:58.106 controller-1 dockerd[12258]: info time="2019-11-04T18:49:58.106121713Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:58.106 controller-1 dockerd[12258]: info time="2019-11-04T18:49:58.106150512Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:49:58.153 controller-1 dockerd[12258]: info time="2019-11-04T18:49:58.153253547Z" level=error msg="Error running exec 981e14adbeee34b5fbe7fde7fb8a79d7b50eef87220f2b4736a76e57eb48867d in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:49:58.153 controller-1 kubelet[88521]: info W1104 18:49:58.153766 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:49:59.000 controller-1 nslcd[84484]: warning [36c40e] ldap_search_ext() failed: Can't contact LDAP server: Connection timed out 2019-11-04T18:49:59.000 controller-1 nslcd[84484]: warning [36c40e] no available LDAP server found, sleeping 1 seconds 2019-11-04T18:50:00.000 controller-1 nslcd[84484]: info [36c40e] connected to LDAP server ldap://controller 2019-11-04T18:50:01.839 controller-1 systemd[1]: info Created slice User Slice of root. 2019-11-04T18:50:01.848 controller-1 systemd[1]: info Started Session 7 of user root. 2019-11-04T18:50:01.893 controller-1 systemd[1]: info Removed slice User Slice of root. 2019-11-04T18:50:03.637 controller-1 collectd[12249]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-1","degrade":"clear","resource":""} 2019-11-04T18:50:06.803 controller-1 containerd[12218]: info time="2019-11-04T18:50:06.803816582Z" level=info msg="shim reaped" id=b9beffd175aa3535ea3b481093da1cd301904b67c6d68a6fea36b45afc4a3607 2019-11-04T18:50:06.814 controller-1 dockerd[12258]: info time="2019-11-04T18:50:06.813954939Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:50:06.929 controller-1 kubelet[88521]: info 2019-11-04 18:50:06.929 [INFO][330893] plugin.go 442: Extracted identifiers ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" 2019-11-04T18:50:06.936 controller-1 kubelet[88521]: info 2019-11-04 18:50:06.936 [WARNING][330893] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:50:06.936 controller-1 kubelet[88521]: info 2019-11-04 18:50:06.936 [INFO][330893] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0", GenerateName:"rbd-provisioner-7484d49cf6-", Namespace:"kube-system", SelfLink:"", UID:"49ec1a28-7e5c-411e-a43c-3ae7187d0955", ResourceVersion:"8148096", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489977, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"rbd-provisioner", "pod-template-hash":"7484d49cf6", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"rbd-provisioner"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"rbd-provisioner-7484d49cf6-gw4fs", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e307/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.rbd-provisioner"}, InterfaceName:"cali2d9e61f5bd4", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T18:50:06.936 controller-1 kubelet[88521]: info 2019-11-04 18:50:06.936 [INFO][330893] k8s.go 477: Releasing IP address(es) ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" 2019-11-04T18:50:06.936 controller-1 kubelet[88521]: info 2019-11-04 18:50:06.936 [INFO][330893] utils.go 171: Calico CNI releasing IP address ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" 2019-11-04T18:50:06.954 controller-1 kubelet[88521]: info 2019-11-04 18:50:06.954 [INFO][330915] ipam_plugin.go 299: Releasing address using handleID ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" HandleID="chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" 2019-11-04T18:50:06.954 controller-1 kubelet[88521]: info 2019-11-04 18:50:06.954 [INFO][330915] ipam.go 1145: Releasing all IPs with handle 'chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e' 2019-11-04T18:50:06.976 controller-1 kubelet[88521]: info 2019-11-04 18:50:06.976 [INFO][330915] ipam_plugin.go 308: Released address using handleID ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" HandleID="chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" 2019-11-04T18:50:06.976 controller-1 kubelet[88521]: info 2019-11-04 18:50:06.976 [INFO][330915] ipam_plugin.go 317: Releasing address using workloadID ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" HandleID="chain.6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--gw4fs-eth0" 2019-11-04T18:50:06.976 controller-1 kubelet[88521]: info 2019-11-04 18:50:06.976 [INFO][330915] ipam.go 1145: Releasing all IPs with handle 'kube-system.rbd-provisioner-7484d49cf6-gw4fs' 2019-11-04T18:50:06.978 controller-1 kubelet[88521]: info 2019-11-04 18:50:06.978 [INFO][330893] k8s.go 481: Cleaning up netns 
ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" 2019-11-04T18:50:06.979 controller-1 kubelet[88521]: info 2019-11-04 18:50:06.979 [INFO][330893] network_linux.go 450: Calico CNI deleting device in netns /proc/322504/ns/net ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" 2019-11-04T18:50:07.000 controller-1 lldpd[12254]: warning removal request for address of fe80::ecee:eeff:feee:eeee%32, but no knowledge of it 2019-11-04T18:50:07.052 controller-1 kubelet[88521]: info 2019-11-04 18:50:07.052 [INFO][330893] network_linux.go 467: Calico CNI deleted device in netns /proc/322504/ns/net ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" 2019-11-04T18:50:07.052 controller-1 kubelet[88521]: info 2019-11-04 18:50:07.052 [INFO][330893] k8s.go 493: Teardown processing complete. ContainerID="6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e" 2019-11-04T18:50:07.171 controller-1 containerd[12218]: info time="2019-11-04T18:50:07.170944403Z" level=info msg="shim reaped" id=6e10d8caab1d49360940646fa88e8dada123f8c489f02b2c6453ba2f42ffb64e 2019-11-04T18:50:07.180 controller-1 dockerd[12258]: info time="2019-11-04T18:50:07.180776921Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:50:07.592 controller-1 dockerd[12258]: info time="2019-11-04T18:50:07.592132243Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:07.592 controller-1 dockerd[12258]: info time="2019-11-04T18:50:07.592132058Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:07.638 controller-1 dockerd[12258]: info time="2019-11-04T18:50:07.638074071Z" level=error msg="Error running exec 8553d739fc75b3ba440a5ff39b29fc7687d30c8d3588765a41befeb17e88b575 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:50:07.638 controller-1 kubelet[88521]: info W1104 18:50:07.638584 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:50:08.107 controller-1 dockerd[12258]: info time="2019-11-04T18:50:08.107919849Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:08.107 controller-1 dockerd[12258]: info time="2019-11-04T18:50:08.107930365Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:08.000 controller-1 ntpd[87544]: info Deleting interface #36 cali2d9e61f5bd4, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=40 secs 2019-11-04T18:50:08.150 controller-1 kubelet[88521]: info I1104 18:50:08.150774 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "rbd-provisioner-token-587hn" (UniqueName: "kubernetes.io/secret/49ec1a28-7e5c-411e-a43c-3ae7187d0955-rbd-provisioner-token-587hn") pod "49ec1a28-7e5c-411e-a43c-3ae7187d0955" (UID: "49ec1a28-7e5c-411e-a43c-3ae7187d0955") 2019-11-04T18:50:08.157 controller-1 dockerd[12258]: info time="2019-11-04T18:50:08.157909710Z" level=error msg="Error running exec f1fb0ab07aee627641fca28f31e7871799ec8564ef04013154fbcb6d6d9cff79 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:50:08.158 controller-1 kubelet[88521]: info W1104 18:50:08.158364 88521 prober.go:108] No 
ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:50:08.163 controller-1 kubelet[88521]: info I1104 18:50:08.163788 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ec1a28-7e5c-411e-a43c-3ae7187d0955-rbd-provisioner-token-587hn" (OuterVolumeSpecName: "rbd-provisioner-token-587hn") pod "49ec1a28-7e5c-411e-a43c-3ae7187d0955" (UID: "49ec1a28-7e5c-411e-a43c-3ae7187d0955"). InnerVolumeSpecName "rbd-provisioner-token-587hn". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:50:08.251 controller-1 kubelet[88521]: info I1104 18:50:08.251035 88521 reconciler.go:301] Volume detached for volume "rbd-provisioner-token-587hn" (UniqueName: "kubernetes.io/secret/49ec1a28-7e5c-411e-a43c-3ae7187d0955-rbd-provisioner-token-587hn") on node "controller-1" DevicePath "" 2019-11-04T18:50:17.594 controller-1 dockerd[12258]: info time="2019-11-04T18:50:17.594706449Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:17.594 controller-1 dockerd[12258]: info time="2019-11-04T18:50:17.594708805Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:17.640 controller-1 dockerd[12258]: info time="2019-11-04T18:50:17.640278876Z" level=error msg="Error running exec 113ff51b345d67a9bea2ec2c57b12d33c0c42e8fedf835bbca9f5f381f08d253 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:50:17.640 controller-1 kubelet[88521]: info W1104 18:50:17.640713 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:50:18.106 controller-1 dockerd[12258]: info time="2019-11-04T18:50:18.106464332Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:18.106 controller-1 dockerd[12258]: info time="2019-11-04T18:50:18.106464936Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:18.153 controller-1 dockerd[12258]: info time="2019-11-04T18:50:18.153084536Z" level=error msg="Error running exec ca22c5f06983173a7c0fd88612eff3975765d16dc5d0546e1cdaeaf8aebec11e in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:50:18.153 controller-1 kubelet[88521]: info W1104 18:50:18.153612 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:50:18.983 controller-1 kubelet[88521]: info E1104 18:50:18.982968 88521 remote_runtime.go:243] StopContainer "395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" from runtime service failed: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:50:18.983 controller-1 kubelet[88521]: info E1104 18:50:18.983021 88521 kuberuntime_container.go:590] Container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" termination failed with gracePeriod 120: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:50:18.984 controller-1 kubelet[88521]: info E1104 18:50:18.984712 88521 kubelet.go:1576] error killing pod: failed to "KillContainer" 
for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:50:18.984 controller-1 kubelet[88521]: info E1104 18:50:18.984738 88521 pod_workers.go:191] Error syncing pod 99913751-ab01-4a00-8e4f-ff54b0232e5d ("mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d)"), skipping: error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:50:18.990 controller-1 kubelet[88521]: info E1104 18:50:18.990199 88521 remote_runtime.go:243] StopContainer "9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" from runtime service failed: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:50:18.990 controller-1 kubelet[88521]: info E1104 18:50:18.990244 88521 kuberuntime_container.go:590] Container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" termination failed with gracePeriod 120: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:50:18.991 controller-1 kubelet[88521]: info E1104 18:50:18.991418 88521 kubelet.go:1576] error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:50:18.991 controller-1 kubelet[88521]: info E1104 18:50:18.991437 88521 pod_workers.go:191] Error syncing pod 5edf03ac-2483-4c65-ba4d-f40dde7dbf65 ("mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65)"), skipping: error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:50:19.001 controller-1 dockerd[12258]: info time="2019-11-04T18:50:19.001529306Z" level=info msg="Container 395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f failed to exit within 120 seconds of signal 15 - using the force" 2019-11-04T18:50:19.006 controller-1 dockerd[12258]: info time="2019-11-04T18:50:19.006828968Z" level=info msg="Container 9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6 failed to exit within 120 seconds of signal 15 - using the force" 2019-11-04T18:50:27.596 controller-1 dockerd[12258]: info time="2019-11-04T18:50:27.596289495Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:27.596 controller-1 dockerd[12258]: info time="2019-11-04T18:50:27.596317485Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:27.643 controller-1 dockerd[12258]: info time="2019-11-04T18:50:27.643031246Z" level=error msg="Error running exec c857c61e66d9cb32c920a14c1320592779eb731be5d43dfa3b289fc5fb8007b0 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:50:27.643 controller-1 kubelet[88521]: info W1104 18:50:27.643570 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:50:28.104 controller-1 dockerd[12258]: info time="2019-11-04T18:50:28.104488986Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:28.104 controller-1 dockerd[12258]: info time="2019-11-04T18:50:28.104599509Z" level=error msg="stream copy error: 
reading from a closed fifo" 2019-11-04T18:50:28.151 controller-1 dockerd[12258]: info time="2019-11-04T18:50:28.151724618Z" level=error msg="Error running exec b1c533d0021eb816ce3d5ffc1e3cd2ef0398075f93b829f61c22b32530028c60 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:50:28.152 controller-1 kubelet[88521]: info W1104 18:50:28.152309 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:50:29.020 controller-1 dockerd[12258]: info time="2019-11-04T18:50:29.020129513Z" level=info msg="Container 395f343e30e3 failed to exit within 10 seconds of kill - trying direct SIGKILL" 2019-11-04T18:50:29.024 controller-1 dockerd[12258]: info time="2019-11-04T18:50:29.024258247Z" level=info msg="Container 9bde4ebdc7bb failed to exit within 10 seconds of kill - trying direct SIGKILL" 2019-11-04T18:50:37.592 controller-1 dockerd[12258]: info time="2019-11-04T18:50:37.592722835Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:37.592 controller-1 dockerd[12258]: info time="2019-11-04T18:50:37.592748727Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:37.643 controller-1 dockerd[12258]: info time="2019-11-04T18:50:37.643881323Z" level=error msg="Error running exec 4c4b0ef91268e2c05fc1fcf1e0f11afd6a266e642b7fe3c2bf4fc2c6c45656d1 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:50:37.644 controller-1 kubelet[88521]: info W1104 18:50:37.644504 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:50:38.105 controller-1 dockerd[12258]: info time="2019-11-04T18:50:38.105632080Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:38.105 controller-1 dockerd[12258]: info time="2019-11-04T18:50:38.105662736Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:38.155 controller-1 dockerd[12258]: info time="2019-11-04T18:50:38.155603558Z" level=error msg="Error running exec 3d33e0703f8c196dfae34a0fa4ee6b150e0bb91652b39b22eb365005c644efb7 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:50:38.156 controller-1 kubelet[88521]: info W1104 18:50:38.156275 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:50:47.596 controller-1 dockerd[12258]: info time="2019-11-04T18:50:47.596594283Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:47.596 controller-1 dockerd[12258]: info time="2019-11-04T18:50:47.596628865Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:47.647 controller-1 dockerd[12258]: info time="2019-11-04T18:50:47.647583328Z" level=error msg="Error running exec 6211f9422c5723e8b4996c0243b4c7bc4fe3aaf2595730ed1e6ee73828d40245 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:50:47.648 controller-1 kubelet[88521]: info W1104 18:50:47.648069 88521 prober.go:108] No 
ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:50:48.106 controller-1 dockerd[12258]: info time="2019-11-04T18:50:48.106013983Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:48.106 controller-1 dockerd[12258]: info time="2019-11-04T18:50:48.106013499Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:48.152 controller-1 dockerd[12258]: info time="2019-11-04T18:50:48.152063196Z" level=error msg="Error running exec 428ec8e122980cd66100c7f329f4a80518ad3e18894e8aebe88e3f8b62be5226 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:50:48.152 controller-1 kubelet[88521]: info W1104 18:50:48.152602 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:50:57.600 controller-1 dockerd[12258]: info time="2019-11-04T18:50:57.600768818Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:57.600 controller-1 dockerd[12258]: info time="2019-11-04T18:50:57.600798695Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:57.650 controller-1 dockerd[12258]: info time="2019-11-04T18:50:57.650663626Z" level=error msg="Error running exec 18b028f2e50a306503506d2e3be20a6e27d3c0a1dcf77e2c3cda363383010c53 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:50:57.651 controller-1 kubelet[88521]: info W1104 18:50:57.651066 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:50:58.106 controller-1 dockerd[12258]: info time="2019-11-04T18:50:58.106071957Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:58.106 controller-1 dockerd[12258]: info time="2019-11-04T18:50:58.106118572Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:50:58.151 controller-1 dockerd[12258]: info time="2019-11-04T18:50:58.151688693Z" level=error msg="Error running exec 80824f72c1381e45143c640a266b24ac47b55636941d1ec1efc6449289d3f7ae in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:50:58.152 controller-1 kubelet[88521]: info W1104 18:50:58.152180 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:51:07.590 controller-1 dockerd[12258]: info time="2019-11-04T18:51:07.590259880Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:07.590 controller-1 dockerd[12258]: info time="2019-11-04T18:51:07.590294640Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:07.637 controller-1 dockerd[12258]: info time="2019-11-04T18:51:07.637088601Z" level=error msg="Error running exec a4d03550751b8d035a57709df9a656445cbb5b7237c7f4abdc79430d66a476e1 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:51:07.637 
controller-1 kubelet[88521]: info W1104 18:51:07.637587 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:51:08.104 controller-1 dockerd[12258]: info time="2019-11-04T18:51:08.104323602Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:08.104 controller-1 dockerd[12258]: info time="2019-11-04T18:51:08.104346149Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:08.150 controller-1 dockerd[12258]: info time="2019-11-04T18:51:08.150663933Z" level=error msg="Error running exec bb7d2c947f1aa45c3de8fa75b34ed7a21f900e5fd5f0d3b6d01064b234e0e43d in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:51:08.151 controller-1 kubelet[88521]: info W1104 18:51:08.151126 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:51:17.598 controller-1 dockerd[12258]: info time="2019-11-04T18:51:17.598230424Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:17.598 controller-1 dockerd[12258]: info time="2019-11-04T18:51:17.598244713Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:17.648 controller-1 dockerd[12258]: info time="2019-11-04T18:51:17.648670532Z" level=error msg="Error running exec 9faeb8f66411dcd4d6f5654e8624fba6dd9a68c5a63d0d79345b30027ff25c01 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:51:17.649 controller-1 kubelet[88521]: info W1104 18:51:17.649200 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:51:18.102 controller-1 dockerd[12258]: info time="2019-11-04T18:51:18.102918130Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:18.102 controller-1 dockerd[12258]: info time="2019-11-04T18:51:18.102939167Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:18.147 controller-1 dockerd[12258]: info time="2019-11-04T18:51:18.147842694Z" level=error msg="Error running exec 6575213a70fa0c7f14eedab866ab4dfae1e347ceb11cdcc3b18ef5caf096d8d3 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:51:18.148 controller-1 kubelet[88521]: info W1104 18:51:18.148241 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:51:27.593 controller-1 dockerd[12258]: info time="2019-11-04T18:51:27.593274344Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:27.593 controller-1 dockerd[12258]: info time="2019-11-04T18:51:27.593286155Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:27.641 controller-1 dockerd[12258]: info time="2019-11-04T18:51:27.641900539Z" level=error msg="Error running exec cdd84e4f8f7407265a6815171a2becea8f270174ea5991b67989bdf790c4ade4 in container: OCI runtime exec failed: exec failed: 
cannot exec a container that has stopped: unknown" 2019-11-04T18:51:27.642 controller-1 kubelet[88521]: info W1104 18:51:27.642352 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:51:28.105 controller-1 dockerd[12258]: info time="2019-11-04T18:51:28.105636961Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:28.105 controller-1 dockerd[12258]: info time="2019-11-04T18:51:28.105660163Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:28.152 controller-1 dockerd[12258]: info time="2019-11-04T18:51:28.152325687Z" level=error msg="Error running exec c1861c7729fdaeed6a20d5743bd42f8ed385483004238958c0431e63b1189ca2 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:51:28.152 controller-1 kubelet[88521]: info W1104 18:51:28.152793 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:51:37.592 controller-1 dockerd[12258]: info time="2019-11-04T18:51:37.592440819Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:37.592 controller-1 dockerd[12258]: info time="2019-11-04T18:51:37.592475795Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:37.637 controller-1 dockerd[12258]: info time="2019-11-04T18:51:37.636960096Z" level=error msg="Error running exec 9f6947abe6850ac78ebb14213b08ed2566d49b7117a280ffb8595c12b5ebfce6 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:51:37.637 controller-1 kubelet[88521]: info W1104 18:51:37.637363 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:51:38.105 controller-1 dockerd[12258]: info time="2019-11-04T18:51:38.104967358Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:38.105 controller-1 dockerd[12258]: info time="2019-11-04T18:51:38.104966354Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:38.154 controller-1 dockerd[12258]: info time="2019-11-04T18:51:38.154357680Z" level=error msg="Error running exec 22003972d0b29916e2245322f0a965cea43e02756a058a9c98b7e4dee07389ab in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:51:38.154 controller-1 kubelet[88521]: info W1104 18:51:38.154878 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:51:47.596 controller-1 dockerd[12258]: info time="2019-11-04T18:51:47.596312917Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:47.596 controller-1 dockerd[12258]: info time="2019-11-04T18:51:47.596335309Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:47.645 controller-1 dockerd[12258]: info time="2019-11-04T18:51:47.645065074Z" level=error msg="Error running exec 
b8883b4285e072423b1b927355138c89cd0157cd59bce3e83544de030cd999b6 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:51:47.645 controller-1 kubelet[88521]: info W1104 18:51:47.645534 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:51:48.107 controller-1 dockerd[12258]: info time="2019-11-04T18:51:48.107610649Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:48.107 controller-1 dockerd[12258]: info time="2019-11-04T18:51:48.107647408Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:48.159 controller-1 dockerd[12258]: info time="2019-11-04T18:51:48.159092917Z" level=error msg="Error running exec 1a592d29f16bc99cb945f570f928cd861311b9862e029d1d03049f80e43bbfdf in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:51:48.159 controller-1 kubelet[88521]: info W1104 18:51:48.159576 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:51:53.640 controller-1 collectd[12249]: info NTP query plugin server list: ['0.pool.ntp.org', '1.pool.ntp.org', '3.pool.ntp.org'] 2019-11-04T18:51:53.657 controller-1 collectd[12249]: info NTPQ: fd00:204::3 2019-11-04T18:51:53.657 controller-1 collectd[12249]: info NTPQ: 153.24.162.44 2 u 68 128 17 0.053 -1.116 0.105 2019-11-04T18:51:53.657 controller-1 collectd[12249]: info NTPQ: 64:ff9b::607e:7a27 2019-11-04T18:51:53.658 controller-1 collectd[12249]: info WARNING:root:fm_python_extension: Failed to connect to FM manager 2019-11-04T18:51:53.658 controller-1 collectd[12249]: info NTP query plugin 'set_fault' exception ; 100.114:host=controller-1.ntp=64:ff9b::607e:7a27:minor ; Failed to execute set_fault. 
2019-11-04T18:51:53.658 controller-1 collectd[12249]: info NTPQ: 67.201.132.53 2 u 18 128 37 40.588 -0.451 0.259 2019-11-04T18:51:53.658 controller-1 collectd[12249]: info NTPQ: 64:ff9b::d073:7e46 2019-11-04T18:51:53.660 controller-1 collectd[12249]: info NTP query plugin raised alarm 100.114:host=controller-1.ntp=64:ff9b::d073:7e46 2019-11-04T18:51:53.660 controller-1 collectd[12249]: info NTP query plugin added '64:ff9b::d073:7e46' to unreachable servers list: ['64:ff9b::d073:7e46'] 2019-11-04T18:51:53.660 controller-1 collectd[12249]: info NTPQ: 140.142.1.8 3 u 21 128 37 73.224 3.391 0.195 2019-11-04T18:51:53.660 controller-1 collectd[12249]: info NTPQ: 64:ff9b::6c3d:3823 2019-11-04T18:51:53.701 controller-1 collectd[12249]: info NTP query plugin raised alarm 100.114:host=controller-1.ntp=64:ff9b::6c3d:3823 2019-11-04T18:51:53.701 controller-1 collectd[12249]: info NTP query plugin added '64:ff9b::6c3d:3823' to unreachable servers list: ['64:ff9b::d073:7e46', '64:ff9b::6c3d:3823'] 2019-11-04T18:51:53.701 controller-1 collectd[12249]: info NTPQ: 198.30.92.2 2 u 14 128 37 14.953 -2.639 0.184 2019-11-04T18:51:53.701 controller-1 collectd[12249]: info NTP query plugin no selected server 2019-11-04T18:51:53.742 controller-1 collectd[12249]: info NTP query plugin raised alarm 100.114:host=controller-1.ntp 2019-11-04T18:51:57.595 controller-1 dockerd[12258]: info time="2019-11-04T18:51:57.595219080Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:57.595 controller-1 dockerd[12258]: info time="2019-11-04T18:51:57.595233320Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:57.642 controller-1 dockerd[12258]: info time="2019-11-04T18:51:57.642175926Z" level=error msg="Error running exec 4994e67c129b08b274efefea905cac86e1b98c05ba8fa35291404c3f5bc507fb in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:51:57.642 controller-1 kubelet[88521]: info W1104 18:51:57.642646 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:51:58.104 controller-1 dockerd[12258]: info time="2019-11-04T18:51:58.104372067Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:58.104 controller-1 dockerd[12258]: info time="2019-11-04T18:51:58.104399726Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:51:58.151 controller-1 dockerd[12258]: info time="2019-11-04T18:51:58.151612719Z" level=error msg="Error running exec 988b8a8b0a3b1564bf17f6244b454ede5db46613c856e365ac656e3f57f56fee in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:51:58.152 controller-1 kubelet[88521]: info W1104 18:51:58.152024 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:52:07.591 controller-1 dockerd[12258]: info time="2019-11-04T18:52:07.591907484Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:07.592 controller-1 dockerd[12258]: info time="2019-11-04T18:52:07.591933844Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:07.637 controller-1 dockerd[12258]: info time="2019-11-04T18:52:07.637064113Z" 
level=error msg="Error running exec 20f0df63a9de37902b5bba00a7e6fb0370abb0aa8223b727880af7b5f8994089 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:52:07.637 controller-1 kubelet[88521]: info W1104 18:52:07.637429 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:52:08.103 controller-1 dockerd[12258]: info time="2019-11-04T18:52:08.103735478Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:08.103 controller-1 dockerd[12258]: info time="2019-11-04T18:52:08.103746299Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:08.150 controller-1 dockerd[12258]: info time="2019-11-04T18:52:08.150342848Z" level=error msg="Error running exec b174183b8f52b90cb4d274bf94ed0c31c04b4eee9127cb0ef6ea283d38875867 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:52:08.150 controller-1 kubelet[88521]: info W1104 18:52:08.150715 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:52:17.598 controller-1 dockerd[12258]: info time="2019-11-04T18:52:17.598762173Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:17.598 controller-1 dockerd[12258]: info time="2019-11-04T18:52:17.598775309Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:17.644 controller-1 dockerd[12258]: info time="2019-11-04T18:52:17.644025305Z" level=error msg="Error running exec 47230d518def0658803f81ae76b7b5b520bd43f95c63cbb50024396c745d25b0 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:52:17.644 controller-1 kubelet[88521]: info W1104 18:52:17.644426 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:52:18.104 controller-1 dockerd[12258]: info time="2019-11-04T18:52:18.104586368Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:18.104 controller-1 dockerd[12258]: info time="2019-11-04T18:52:18.104589938Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:18.150 controller-1 dockerd[12258]: info time="2019-11-04T18:52:18.150224269Z" level=error msg="Error running exec 4529912473b9c11dbe15010d0af78611c7d94c7ef67292ae7c4730604a20bbee in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:52:18.150 controller-1 kubelet[88521]: info W1104 18:52:18.150728 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:52:19.204 controller-1 kubelet[88521]: info E1104 18:52:19.204317 88521 remote_runtime.go:243] StopContainer "395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" from runtime service failed: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:52:19.204 controller-1 
kubelet[88521]: info E1104 18:52:19.204384 88521 remote_runtime.go:243] StopContainer "9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" from runtime service failed: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:52:19.204 controller-1 kubelet[88521]: info E1104 18:52:19.204393 88521 kuberuntime_container.go:590] Container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" termination failed with gracePeriod 120: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:52:19.204 controller-1 kubelet[88521]: info E1104 18:52:19.204410 88521 kuberuntime_container.go:590] Container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" termination failed with gracePeriod 120: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:52:19.205 controller-1 kubelet[88521]: info E1104 18:52:19.205906 88521 kubelet.go:1576] error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:52:19.205 controller-1 kubelet[88521]: info E1104 18:52:19.205929 88521 pod_workers.go:191] Error syncing pod 5edf03ac-2483-4c65-ba4d-f40dde7dbf65 ("mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65)"), skipping: error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:52:19.205 controller-1 kubelet[88521]: info E1104 18:52:19.205930 88521 kubelet.go:1576] error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:52:19.207 controller-1 kubelet[88521]: info E1104 18:52:19.207047 88521 pod_workers.go:191] Error syncing pod 99913751-ab01-4a00-8e4f-ff54b0232e5d ("mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d)"), skipping: error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:52:19.222 controller-1 dockerd[12258]: info time="2019-11-04T18:52:19.222560070Z" level=info msg="Container 395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f failed to exit within 120 seconds of signal 15 - using the force" 2019-11-04T18:52:19.222 controller-1 dockerd[12258]: info time="2019-11-04T18:52:19.222713867Z" level=info msg="Container 9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6 failed to exit within 120 seconds of signal 15 - using the force" 2019-11-04T18:52:27.594 controller-1 dockerd[12258]: info time="2019-11-04T18:52:27.594663340Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:27.594 controller-1 dockerd[12258]: info time="2019-11-04T18:52:27.594679626Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:27.642 controller-1 dockerd[12258]: info time="2019-11-04T18:52:27.642842955Z" level=error msg="Error running exec 2150f3b5e711c13bbfd71afbadfebcabf9cc53467a6205f1cb1fd1d9916962d0 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:52:27.643 controller-1 kubelet[88521]: info W1104 18:52:27.643224 88521 prober.go:108] No ref for container 
"docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:52:28.104 controller-1 dockerd[12258]: info time="2019-11-04T18:52:28.104945539Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:28.105 controller-1 dockerd[12258]: info time="2019-11-04T18:52:28.104967410Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:28.150 controller-1 dockerd[12258]: info time="2019-11-04T18:52:28.150620758Z" level=error msg="Error running exec bd709073937297d468c79b8e78ca8409a172b577be00492aec8ef8edc9ad6834 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:52:28.151 controller-1 kubelet[88521]: info W1104 18:52:28.151189 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:52:29.241 controller-1 dockerd[12258]: info time="2019-11-04T18:52:29.241552237Z" level=info msg="Container 9bde4ebdc7bb failed to exit within 10 seconds of kill - trying direct SIGKILL" 2019-11-04T18:52:29.241 controller-1 dockerd[12258]: info time="2019-11-04T18:52:29.241538696Z" level=info msg="Container 395f343e30e3 failed to exit within 10 seconds of kill - trying direct SIGKILL" 2019-11-04T18:52:37.591 controller-1 dockerd[12258]: info time="2019-11-04T18:52:37.591352119Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:37.591 controller-1 dockerd[12258]: info time="2019-11-04T18:52:37.591361429Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:37.637 controller-1 dockerd[12258]: info time="2019-11-04T18:52:37.637311541Z" level=error msg="Error running exec ecd8cbe7067dc9c267bc453d40269c557001936765a71a1f5e84d2d887496b41 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:52:37.637 controller-1 kubelet[88521]: info W1104 18:52:37.637685 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:52:38.104 controller-1 dockerd[12258]: info time="2019-11-04T18:52:38.104865431Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:38.104 controller-1 dockerd[12258]: info time="2019-11-04T18:52:38.104874896Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:38.150 controller-1 dockerd[12258]: info time="2019-11-04T18:52:38.150740518Z" level=error msg="Error running exec 0050c8e445105330c5875688a99bbbb507531fc827c710351e5a1433403e43eb in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:52:38.151 controller-1 kubelet[88521]: info W1104 18:52:38.151176 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:52:47.593 controller-1 dockerd[12258]: info time="2019-11-04T18:52:47.592999810Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:47.593 controller-1 dockerd[12258]: info time="2019-11-04T18:52:47.592999829Z" level=error 
msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:47.641 controller-1 dockerd[12258]: info time="2019-11-04T18:52:47.640983226Z" level=error msg="Error running exec 458c18ca172832b5573f6b2bdde42874b095b86cedf73021bc730c2c6efa2b02 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:52:47.641 controller-1 kubelet[88521]: info W1104 18:52:47.641461 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:52:48.103 controller-1 dockerd[12258]: info time="2019-11-04T18:52:48.103282004Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:48.103 controller-1 dockerd[12258]: info time="2019-11-04T18:52:48.103305572Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:48.150 controller-1 dockerd[12258]: info time="2019-11-04T18:52:48.150477256Z" level=error msg="Error running exec d877717032f6adcf452f0e2aa5da6d8a2aef783b4d7facc90e0520c742997e20 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:52:48.150 controller-1 kubelet[88521]: info W1104 18:52:48.150881 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:52:57.595 controller-1 dockerd[12258]: info time="2019-11-04T18:52:57.594947005Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:57.595 controller-1 dockerd[12258]: info time="2019-11-04T18:52:57.594982175Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:57.644 controller-1 dockerd[12258]: info time="2019-11-04T18:52:57.644679072Z" level=error msg="Error running exec 5cecdebe46cc234d013828dc94790a5b6e095cb8eb8c852cef67f0b862be52f3 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:52:57.645 controller-1 kubelet[88521]: info W1104 18:52:57.645163 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:52:58.104 controller-1 dockerd[12258]: info time="2019-11-04T18:52:58.104265782Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:58.104 controller-1 dockerd[12258]: info time="2019-11-04T18:52:58.104265784Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:52:58.151 controller-1 dockerd[12258]: info time="2019-11-04T18:52:58.151568767Z" level=error msg="Error running exec 916d1c1b44aaf0d66c721ed0d5f5feaaf205c10ad4ea240020994acc43a3a775 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:52:58.152 controller-1 kubelet[88521]: info W1104 18:52:58.151994 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:53:07.592 controller-1 dockerd[12258]: info time="2019-11-04T18:53:07.592432897Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:07.592 controller-1 
dockerd[12258]: info time="2019-11-04T18:53:07.592469212Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:07.640 controller-1 dockerd[12258]: info time="2019-11-04T18:53:07.640109819Z" level=error msg="Error running exec acc765e62c4dfc0a75abc259dc619c33f93a2507e7d063796cf923f7776ff877 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:53:07.640 controller-1 kubelet[88521]: info W1104 18:53:07.640603 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:53:08.104 controller-1 dockerd[12258]: info time="2019-11-04T18:53:08.104880915Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:08.104 controller-1 dockerd[12258]: info time="2019-11-04T18:53:08.104903321Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:08.151 controller-1 dockerd[12258]: info time="2019-11-04T18:53:08.151933783Z" level=error msg="Error running exec 5fe0a6191d1803729bba8b5fd215caedeaa946310f92c6e5e7805d215fa30cf0 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:53:08.152 controller-1 kubelet[88521]: info W1104 18:53:08.152360 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:53:17.595 controller-1 dockerd[12258]: info time="2019-11-04T18:53:17.595535871Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:17.595 controller-1 dockerd[12258]: info time="2019-11-04T18:53:17.595561005Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:17.644 controller-1 dockerd[12258]: info time="2019-11-04T18:53:17.644687959Z" level=error msg="Error running exec 37430d322d278cb08264899495a3faa0bbf6537106c24c0688c4a7f3333a3229 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:53:17.645 controller-1 kubelet[88521]: info W1104 18:53:17.645148 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:53:18.105 controller-1 dockerd[12258]: info time="2019-11-04T18:53:18.105800363Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:18.105 controller-1 dockerd[12258]: info time="2019-11-04T18:53:18.105803173Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:18.155 controller-1 dockerd[12258]: info time="2019-11-04T18:53:18.155405453Z" level=error msg="Error running exec 671b8bef69798ac4db69e45c01c3f0375a7e439a9eaa3e924a39492cc073fef4 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:53:18.156 controller-1 kubelet[88521]: info W1104 18:53:18.155933 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:53:27.595 controller-1 dockerd[12258]: info time="2019-11-04T18:53:27.595677203Z" level=error msg="stream copy 
error: reading from a closed fifo" 2019-11-04T18:53:27.595 controller-1 dockerd[12258]: info time="2019-11-04T18:53:27.595679627Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:27.644 controller-1 dockerd[12258]: info time="2019-11-04T18:53:27.644777033Z" level=error msg="Error running exec e0d2476aa3a47bd9c2b7e6bfa770705ed9157f47d4f7f23dacc87cf7a13115ac in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:53:27.645 controller-1 kubelet[88521]: info W1104 18:53:27.645291 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:53:28.110 controller-1 dockerd[12258]: info time="2019-11-04T18:53:28.110874727Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:28.110 controller-1 dockerd[12258]: info time="2019-11-04T18:53:28.110913670Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:28.162 controller-1 dockerd[12258]: info time="2019-11-04T18:53:28.162753082Z" level=error msg="Error running exec bab60468a4b25037d16f959e5db19ab0ced3d7ce5b829fad3b8c86fcc53d985f in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:53:28.163 controller-1 kubelet[88521]: info W1104 18:53:28.163199 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:53:33.637 controller-1 collectd[12249]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-1","degrade":"clear","resource":""} 2019-11-04T18:53:37.588 controller-1 dockerd[12258]: info time="2019-11-04T18:53:37.588729873Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:37.588 controller-1 dockerd[12258]: info time="2019-11-04T18:53:37.588749658Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:37.634 controller-1 dockerd[12258]: info time="2019-11-04T18:53:37.634378083Z" level=error msg="Error running exec 264627264f3c5298775f3e2650f24e4f6f7e450683dfbb1cffae8e0e0e073fb5 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:53:37.634 controller-1 kubelet[88521]: info W1104 18:53:37.634822 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:53:38.104 controller-1 dockerd[12258]: info time="2019-11-04T18:53:38.104621080Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:38.104 controller-1 dockerd[12258]: info time="2019-11-04T18:53:38.104675189Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:38.151 controller-1 dockerd[12258]: info time="2019-11-04T18:53:38.151943106Z" level=error msg="Error running exec c8dc807f3a4ffb8d71e26e4f708dd331132853ae5ae6092900d93380114c56cb in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:53:38.152 controller-1 kubelet[88521]: info W1104 18:53:38.152344 88521 prober.go:108] No ref for container 
"docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:53:47.595 controller-1 dockerd[12258]: info time="2019-11-04T18:53:47.595570384Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:47.595 controller-1 dockerd[12258]: info time="2019-11-04T18:53:47.595574593Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:47.646 controller-1 dockerd[12258]: info time="2019-11-04T18:53:47.646340127Z" level=error msg="Error running exec 72b46af8a808f80aa25e69053c12f00b50f9996c5931434783efb0d91a0670d9 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:53:47.646 controller-1 kubelet[88521]: info W1104 18:53:47.646817 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:53:48.104 controller-1 dockerd[12258]: info time="2019-11-04T18:53:48.104263964Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:48.104 controller-1 dockerd[12258]: info time="2019-11-04T18:53:48.104263341Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:48.153 controller-1 dockerd[12258]: info time="2019-11-04T18:53:48.152963861Z" level=error msg="Error running exec 456359c4a86c275339ae70f753a9d8d6348c4938f6c87af4ec9cb88a2f49d197 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:53:48.153 controller-1 kubelet[88521]: info W1104 18:53:48.153393 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:53:57.598 controller-1 dockerd[12258]: info time="2019-11-04T18:53:57.598212242Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:57.598 controller-1 dockerd[12258]: info time="2019-11-04T18:53:57.598235325Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:57.647 controller-1 dockerd[12258]: info time="2019-11-04T18:53:57.647253428Z" level=error msg="Error running exec e97bc49b51fc7f1ca3d66c04e9ae4d107434c1ca841d3764349e741816c54962 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:53:57.647 controller-1 kubelet[88521]: info W1104 18:53:57.647735 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:53:58.107 controller-1 dockerd[12258]: info time="2019-11-04T18:53:58.107416054Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:58.107 controller-1 dockerd[12258]: info time="2019-11-04T18:53:58.107426879Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:53:58.154 controller-1 dockerd[12258]: info time="2019-11-04T18:53:58.154110142Z" level=error msg="Error running exec 66ff50440b98bf0e84f3b4489c9c514a7e5616bcb299fb47164d6072e121a7fa in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:53:58.154 controller-1 kubelet[88521]: 
info W1104 18:53:58.154588 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:54:07.591 controller-1 dockerd[12258]: info time="2019-11-04T18:54:07.591876131Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:07.591 controller-1 dockerd[12258]: info time="2019-11-04T18:54:07.591924965Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:07.637 controller-1 dockerd[12258]: info time="2019-11-04T18:54:07.637420312Z" level=error msg="Error running exec 7e343df12c3731e1c3fd50dd42929cf71359f47e1218b2883444cb2c96ece9ba in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:54:07.638 controller-1 kubelet[88521]: info W1104 18:54:07.637939 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:54:08.106 controller-1 dockerd[12258]: info time="2019-11-04T18:54:08.106892941Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:08.106 controller-1 dockerd[12258]: info time="2019-11-04T18:54:08.106895015Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:08.156 controller-1 dockerd[12258]: info time="2019-11-04T18:54:08.156757703Z" level=error msg="Error running exec 336188784b7406e177b0706ba1382798076f3accf3682945ec91283db5097933 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:54:08.157 controller-1 kubelet[88521]: info W1104 18:54:08.157189 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:54:17.598 controller-1 dockerd[12258]: info time="2019-11-04T18:54:17.598380350Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:17.598 controller-1 dockerd[12258]: info time="2019-11-04T18:54:17.598395186Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:17.646 controller-1 dockerd[12258]: info time="2019-11-04T18:54:17.646962798Z" level=error msg="Error running exec c7f64628583d609d2386d75be78df12c0eb6de25a3ff3740c015b18f2d4f9e21 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:54:17.647 controller-1 kubelet[88521]: info W1104 18:54:17.647385 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:54:18.106 controller-1 dockerd[12258]: info time="2019-11-04T18:54:18.106560784Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:18.106 controller-1 dockerd[12258]: info time="2019-11-04T18:54:18.106657341Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:18.157 controller-1 dockerd[12258]: info time="2019-11-04T18:54:18.157516625Z" level=error msg="Error running exec da2b73a3d22bd6cbde1c9abec884de79c48026f662590e48c3dce0e9286f2a2d in container: OCI runtime exec failed: exec failed: cannot exec a container that has 
stopped: unknown" 2019-11-04T18:54:18.158 controller-1 kubelet[88521]: info W1104 18:54:18.157994 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:54:19.439 controller-1 kubelet[88521]: info E1104 18:54:19.439717 88521 remote_runtime.go:243] StopContainer "9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" from runtime service failed: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:54:19.439 controller-1 kubelet[88521]: info E1104 18:54:19.439737 88521 remote_runtime.go:243] StopContainer "395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" from runtime service failed: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:54:19.439 controller-1 kubelet[88521]: info E1104 18:54:19.439787 88521 kuberuntime_container.go:590] Container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" termination failed with gracePeriod 120: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:54:19.439 controller-1 kubelet[88521]: info E1104 18:54:19.439791 88521 kuberuntime_container.go:590] Container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" termination failed with gracePeriod 120: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:54:19.441 controller-1 kubelet[88521]: info E1104 18:54:19.441320 88521 kubelet.go:1576] error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:54:19.441 controller-1 kubelet[88521]: info E1104 18:54:19.441338 88521 pod_workers.go:191] Error syncing pod 99913751-ab01-4a00-8e4f-ff54b0232e5d ("mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d)"), skipping: error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:54:19.441 controller-1 kubelet[88521]: info E1104 18:54:19.441401 88521 kubelet.go:1576] error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:54:19.442 controller-1 kubelet[88521]: info E1104 18:54:19.442476 88521 pod_workers.go:191] Error syncing pod 5edf03ac-2483-4c65-ba4d-f40dde7dbf65 ("mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65)"), skipping: error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:54:19.457 controller-1 dockerd[12258]: info time="2019-11-04T18:54:19.457037784Z" level=info msg="Container 9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6 failed to exit within 120 seconds of signal 15 - using the force" 2019-11-04T18:54:19.457 controller-1 dockerd[12258]: info time="2019-11-04T18:54:19.457476273Z" level=info msg="Container 395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f failed to exit within 120 seconds of signal 15 - using the force" 2019-11-04T18:54:27.591 controller-1 dockerd[12258]: info time="2019-11-04T18:54:27.591598138Z" level=error msg="stream copy 
error: reading from a closed fifo" 2019-11-04T18:54:27.591 controller-1 dockerd[12258]: info time="2019-11-04T18:54:27.591612388Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:27.640 controller-1 dockerd[12258]: info time="2019-11-04T18:54:27.640131839Z" level=error msg="Error running exec d67c489f4296c8690ba94cb2e65fa00ae3ee1980e49b896e16180171979c5330 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:54:27.640 controller-1 kubelet[88521]: info W1104 18:54:27.640660 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:54:28.104 controller-1 dockerd[12258]: info time="2019-11-04T18:54:28.104566668Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:28.104 controller-1 dockerd[12258]: info time="2019-11-04T18:54:28.104571983Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:28.150 controller-1 dockerd[12258]: info time="2019-11-04T18:54:28.150110006Z" level=error msg="Error running exec 11bc3716f3ced7b4ac50a212989b48dbef0803e8374a4d37f844e11122a8b7f7 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:54:28.150 controller-1 kubelet[88521]: info W1104 18:54:28.150611 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:54:29.475 controller-1 dockerd[12258]: info time="2019-11-04T18:54:29.475088102Z" level=info msg="Container 9bde4ebdc7bb failed to exit within 10 seconds of kill - trying direct SIGKILL" 2019-11-04T18:54:29.475 controller-1 dockerd[12258]: info time="2019-11-04T18:54:29.475437565Z" level=info msg="Container 395f343e30e3 failed to exit within 10 seconds of kill - trying direct SIGKILL" 2019-11-04T18:54:37.590 controller-1 dockerd[12258]: info time="2019-11-04T18:54:37.590268519Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:37.590 controller-1 dockerd[12258]: info time="2019-11-04T18:54:37.590279979Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:37.636 controller-1 dockerd[12258]: info time="2019-11-04T18:54:37.636517927Z" level=error msg="Error running exec 6ed486460b3900b2e6d17659a21a80f6e109a9fc111721b250cf325aabd357ab in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:54:37.637 controller-1 kubelet[88521]: info W1104 18:54:37.636995 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:54:38.104 controller-1 dockerd[12258]: info time="2019-11-04T18:54:38.104882035Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:38.104 controller-1 dockerd[12258]: info time="2019-11-04T18:54:38.104883281Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:38.150 controller-1 dockerd[12258]: info time="2019-11-04T18:54:38.150262144Z" level=error msg="Error running exec abbcd30d69d400d7f97125251fe7a07c1f5dc56a143ec7a57577b0db45c31a51 in container: OCI runtime exec failed: exec 
failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:54:38.150 controller-1 kubelet[88521]: info W1104 18:54:38.150668 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:54:47.600 controller-1 dockerd[12258]: info time="2019-11-04T18:54:47.600028653Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:47.600 controller-1 dockerd[12258]: info time="2019-11-04T18:54:47.600044297Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:47.649 controller-1 dockerd[12258]: info time="2019-11-04T18:54:47.649282366Z" level=error msg="Error running exec f2e8b7cf51f1fd72346d01269562935c5402d0b523670d22291d2e0ff143d6da in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:54:47.649 controller-1 kubelet[88521]: info W1104 18:54:47.649791 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:54:48.106 controller-1 dockerd[12258]: info time="2019-11-04T18:54:48.106580313Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:48.106 controller-1 dockerd[12258]: info time="2019-11-04T18:54:48.106611741Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:48.152 controller-1 dockerd[12258]: info time="2019-11-04T18:54:48.152646540Z" level=error msg="Error running exec fc90d06e27b7b4222d56c7982a16de769edf1a7b953519fd9722a1b4941dab05 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:54:48.153 controller-1 kubelet[88521]: info W1104 18:54:48.153124 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:54:57.596 controller-1 dockerd[12258]: info time="2019-11-04T18:54:57.596567515Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:57.596 controller-1 dockerd[12258]: info time="2019-11-04T18:54:57.596600212Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:57.644 controller-1 dockerd[12258]: info time="2019-11-04T18:54:57.644769169Z" level=error msg="Error running exec 7b60bd071e2bb2b53e5334b505e8286e16c49def4d486f1966c7ab081095b0e2 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:54:57.645 controller-1 kubelet[88521]: info W1104 18:54:57.645267 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:54:58.108 controller-1 dockerd[12258]: info time="2019-11-04T18:54:58.108768254Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:58.108 controller-1 dockerd[12258]: info time="2019-11-04T18:54:58.108799798Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:54:58.157 controller-1 dockerd[12258]: info time="2019-11-04T18:54:58.157745205Z" level=error msg="Error running exec 
23043f542134cf3649ddbfcf22e182649b61129f1cdb032edc04aa59c330c88a in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:54:58.158 controller-1 kubelet[88521]: info W1104 18:54:58.158277 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:54:58.000 controller-1 nslcd[84484]: warning [221a70] ldap_search_ext() failed: Can't contact LDAP server: Broken pipe 2019-11-04T18:54:58.000 controller-1 nslcd[84484]: warning [221a70] no available LDAP server found, sleeping 1 seconds 2019-11-04T18:54:59.000 controller-1 nslcd[84484]: info [221a70] connected to LDAP server ldap://controller 2019-11-04T18:55:03.309 controller-1 kubelet[88521]: info I1104 18:55:03.309216 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ceph-etc" (UniqueName: "kubernetes.io/configmap/d517f3f5-5117-4c9e-a583-60d5e6883ea4-ceph-etc") pod "ceph-pools-audit-1572893700-mjc77" (UID: "d517f3f5-5117-4c9e-a583-60d5e6883ea4") 2019-11-04T18:55:03.309 controller-1 kubelet[88521]: info I1104 18:55:03.309263 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ceph-pools-bin" (UniqueName: "kubernetes.io/configmap/d517f3f5-5117-4c9e-a583-60d5e6883ea4-ceph-pools-bin") pod "ceph-pools-audit-1572893700-mjc77" (UID: "d517f3f5-5117-4c9e-a583-60d5e6883ea4") 2019-11-04T18:55:03.309 controller-1 kubelet[88521]: info I1104 18:55:03.309400 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcceph" (UniqueName: "kubernetes.io/empty-dir/d517f3f5-5117-4c9e-a583-60d5e6883ea4-etcceph") pod "ceph-pools-audit-1572893700-mjc77" (UID: "d517f3f5-5117-4c9e-a583-60d5e6883ea4") 2019-11-04T18:55:03.309 controller-1 kubelet[88521]: info I1104 18:55:03.309447 88521 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ceph-pools-audit-token-bsfbw" (UniqueName: "kubernetes.io/secret/d517f3f5-5117-4c9e-a583-60d5e6883ea4-ceph-pools-audit-token-bsfbw") pod "ceph-pools-audit-1572893700-mjc77" (UID: "d517f3f5-5117-4c9e-a583-60d5e6883ea4") 2019-11-04T18:55:03.427 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/d517f3f5-5117-4c9e-a583-60d5e6883ea4/volumes/kubernetes.io~secret/ceph-pools-audit-token-bsfbw. 2019-11-04T18:55:03.615 controller-1 dockerd[12258]: info time="2019-11-04T18:55:03.615129605Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. 
Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T18:55:03.620 controller-1 containerd[12218]: info time="2019-11-04T18:55:03.620250054Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9/shim.sock" debug=false pid=373010 2019-11-04T18:55:07.590 controller-1 dockerd[12258]: info time="2019-11-04T18:55:07.590117795Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:07.590 controller-1 dockerd[12258]: info time="2019-11-04T18:55:07.590146154Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:07.636 controller-1 dockerd[12258]: info time="2019-11-04T18:55:07.635974253Z" level=error msg="Error running exec 2bc6b7dd0d1c5ead7f01584f47e8e0677cd5dc7ed706d5bea8f9f0a47b40af2d in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:55:07.636 controller-1 kubelet[88521]: info W1104 18:55:07.636393 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:55:08.102 controller-1 dockerd[12258]: info time="2019-11-04T18:55:08.102200679Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:08.102 controller-1 dockerd[12258]: info time="2019-11-04T18:55:08.102205451Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:08.148 controller-1 dockerd[12258]: info time="2019-11-04T18:55:08.148666877Z" level=error msg="Error running exec 0f3b5ec2517aee8c6ce5ea9ecb9b6fc41edcd49a6326244386f557dfd0eba557 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:55:08.149 controller-1 kubelet[88521]: info W1104 18:55:08.149028 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:55:09.549 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.549 [INFO][373850] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"kube-system", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"ceph-pools-audit-1572893700-mjc77", ContainerID:"dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9"}} 2019-11-04T18:55:09.565 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.565 [INFO][373850] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0 ceph-pools-audit-1572893700- kube-system d517f3f5-5117-4c9e-a583-60d5e6883ea4 8150984 0 2019-11-04 18:55:03 +0000 UTC map[app:ceph-pools-audit controller-uid:0af5efba-80dd-45f8-8c8f-f64857b86d5f job-name:ceph-pools-audit-1572893700 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:ceph-pools-audit] map[] [] nil [] } {k8s controller-1 ceph-pools-audit-1572893700-mjc77 eth0 [] [] [kns.kube-system ksa.kube-system.ceph-pools-audit] cali9b1659125dd []}} ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Namespace="kube-system" Pod="ceph-pools-audit-1572893700-mjc77" 
WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-" 2019-11-04T18:55:09.565 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.565 [INFO][373850] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Namespace="kube-system" Pod="ceph-pools-audit-1572893700-mjc77" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" 2019-11-04T18:55:09.568 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.567 [INFO][373850] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:kube-system,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/kube-system,UID:5d016a6c-19e8-4b97-88a9-b6113a3cb736,ResourceVersion:5,Generation:0,CreationTimestamp:2019-10-25 15:09:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T18:55:09.572 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.572 [INFO][373850] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ceph-pools-audit-1572893700-mjc77,GenerateName:ceph-pools-audit-1572893700-,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/ceph-pools-audit-1572893700-mjc77,UID:d517f3f5-5117-4c9e-a583-60d5e6883ea4,ResourceVersion:8150984,Generation:0,CreationTimestamp:2019-11-04 18:55:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: ceph-pools-audit,controller-uid: 0af5efba-80dd-45f8-8c8f-f64857b86d5f,job-name: ceph-pools-audit-1572893700,},Annotations:map[string]string{},OwnerReferences:[{batch/v1 Job ceph-pools-audit-1572893700 0af5efba-80dd-45f8-8c8f-f64857b86d5f 0xc0003d98ab 0xc0003d98ac}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{ceph-pools-bin {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:ceph-pools-bin,},Items:[],DefaultMode:*365,Optional:nil,} nil nil nil nil nil nil nil nil}} {etcceph {nil &EmptyDirVolumeSource{Medium:,SizeLimit:,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {ceph-etc {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:ceph-etc,},Items:[],DefaultMode:*292,Optional:nil,} nil nil nil nil nil nil nil nil}} {ceph-pools-audit-token-bsfbw {nil nil nil nil nil &SecretVolumeSource{SecretName:ceph-pools-audit-token-bsfbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{ceph-pools-audit-ceph-store registry.local:9001/docker.io/starlingx/ceph-config-helper:v1.15.0 [/tmp/ceph-pools-audit.sh] [] [] [] [{RBD_POOL_REPLICATION 2 nil} {RBD_POOL_MIN_REPLICATION 1 nil} {RBD_POOL_CRUSH_RULE_NAME storage_tier_ruleset nil}] {map[] map[]} [{ceph-pools-bin true /tmp/ceph-pools-audit.sh ceph-pools-audit.sh } {etcceph false /etc/ceph } {ceph-etc true /etc/ceph/ceph.conf ceph.conf } {ceph-pools-audit-token-bsfbw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:OnFailure,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: ,},ServiceAccountName:ceph-pools-audit,DeprecatedServiceAccount:ceph-pools-audit,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[{default-registry-key}],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0003d9bd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0003d9bf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:55:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:55:03 +0000 UTC ContainersNotReady containers with unready status: [ceph-pools-audit-ceph-store]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:55:03 +0000 UTC ContainersNotReady containers with unready status: [ceph-pools-audit-ceph-store]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:55:03 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 18:55:03 +0000 UTC,ContainerStatuses:[{ceph-pools-audit-ceph-store {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 registry.local:9001/docker.io/starlingx/ceph-config-helper:v1.15.0 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} 2019-11-04T18:55:09.590 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.590 [INFO][373914] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" HandleID="chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Workload="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" 2019-11-04T18:55:09.599 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.599 [INFO][373914] ipam_plugin.go 220: Calico CNI IPAM handle=chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9 ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" HandleID="chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Workload="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" 2019-11-04T18:55:09.599 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.599 [INFO][373914] ipam_plugin.go 230: Auto assigning IP ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" HandleID="chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Workload="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc0002e2560), Attrs:map[string]string{"node":"controller-1", "pod":"ceph-pools-audit-1572893700-mjc77", "namespace":"kube-system"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T18:55:09.599 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.599 [INFO][373914] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T18:55:09.603 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.603 [INFO][373914] ipam.go 309: Looking up 
existing affinities for host handle="chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" host="controller-1" 2019-11-04T18:55:09.606 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.606 [INFO][373914] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" host="controller-1" 2019-11-04T18:55:09.608 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.608 [INFO][373914] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T18:55:09.610 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.610 [INFO][373914] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T18:55:09.610 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.610 [INFO][373914] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" host="controller-1" 2019-11-04T18:55:09.611 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.611 [INFO][373914] ipam.go 1244: Creating new handle: chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9 2019-11-04T18:55:09.614 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.614 [INFO][373914] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" host="controller-1" 2019-11-04T18:55:09.616 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.616 [INFO][373914] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e320/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" host="controller-1" 2019-11-04T18:55:09.616 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.616 [INFO][373914] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e320/122] handle="chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" host="controller-1" 2019-11-04T18:55:09.617 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.617 [INFO][373914] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e320/122] handle="chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" host="controller-1" 2019-11-04T18:55:09.617 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.617 [INFO][373914] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e320/122] ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" HandleID="chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Workload="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" 2019-11-04T18:55:09.617 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.617 [INFO][373914] ipam_plugin.go 258: IPAM Result ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" HandleID="chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Workload="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc0006a8180)} 2019-11-04T18:55:09.619 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.619 [INFO][373850] k8s.go 361: Populated endpoint ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" 
Namespace="kube-system" Pod="ceph-pools-audit-1572893700-mjc77" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0", GenerateName:"ceph-pools-audit-1572893700-", Namespace:"kube-system", SelfLink:"", UID:"d517f3f5-5117-4c9e-a583-60d5e6883ea4", ResourceVersion:"8150984", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708490503, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ceph-pools-audit", "controller-uid":"0af5efba-80dd-45f8-8c8f-f64857b86d5f", "job-name":"ceph-pools-audit-1572893700", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ceph-pools-audit"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"ceph-pools-audit-1572893700-mjc77", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e320/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"cali9b1659125dd", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T18:55:09.619 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.619 [INFO][373850] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e320/128] ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Namespace="kube-system" Pod="ceph-pools-audit-1572893700-mjc77" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" 2019-11-04T18:55:09.619 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.619 [INFO][373850] network_linux.go 76: Setting the host side veth name to cali9b1659125dd ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Namespace="kube-system" Pod="ceph-pools-audit-1572893700-mjc77" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" 2019-11-04T18:55:09.622 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.622 [INFO][373850] network_linux.go 411: Disabling IPv6 forwarding ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Namespace="kube-system" Pod="ceph-pools-audit-1572893700-mjc77" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" 2019-11-04T18:55:09.658 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.657 [INFO][373850] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Namespace="kube-system" Pod="ceph-pools-audit-1572893700-mjc77" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0", GenerateName:"ceph-pools-audit-1572893700-", Namespace:"kube-system", SelfLink:"", UID:"d517f3f5-5117-4c9e-a583-60d5e6883ea4", ResourceVersion:"8150984", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708490503, 
loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ceph-pools-audit", "app":"ceph-pools-audit", "controller-uid":"0af5efba-80dd-45f8-8c8f-f64857b86d5f", "job-name":"ceph-pools-audit-1572893700", "projectcalico.org/namespace":"kube-system"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9", Pod:"ceph-pools-audit-1572893700-mjc77", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e320/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"cali9b1659125dd", MAC:"72:66:40:63:49:2e", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T18:55:09.660 controller-1 kubelet[88521]: info 2019-11-04 18:55:09.660 [INFO][373850] k8s.go 420: Wrote updated endpoint to datastore ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Namespace="kube-system" Pod="ceph-pools-audit-1572893700-mjc77" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" 2019-11-04T18:55:09.713 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/d517f3f5-5117-4c9e-a583-60d5e6883ea4/volume-subpaths/ceph-pools-bin/ceph-pools-audit-ceph-store/0. 2019-11-04T18:55:09.780 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/d517f3f5-5117-4c9e-a583-60d5e6883ea4/volume-subpaths/ceph-pools-bin/ceph-pools-audit-ceph-store/0. 2019-11-04T18:55:09.807 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/d517f3f5-5117-4c9e-a583-60d5e6883ea4/volume-subpaths/ceph-etc/ceph-pools-audit-ceph-store/2. 2019-11-04T18:55:09.834 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/d517f3f5-5117-4c9e-a583-60d5e6883ea4/volume-subpaths/ceph-etc/ceph-pools-audit-ceph-store/2. 
2019-11-04T18:55:09.875 controller-1 containerd[12218]: info time="2019-11-04T18:55:09.875613297Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c67a6eb76e57ab0d230dd48bdb4e52ac13f289adb8a98eb0143453db3ccc5ef9/shim.sock" debug=false pid=374033 2019-11-04T18:55:12.000 controller-1 ntpd[87544]: info Listen normally on 38 cali9b1659125dd fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T18:55:12.000 controller-1 ntpd[87544]: debug new interface(s) found: waking up resolver 2019-11-04T18:55:17.595 controller-1 dockerd[12258]: info time="2019-11-04T18:55:17.595675045Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:17.595 controller-1 dockerd[12258]: info time="2019-11-04T18:55:17.595748245Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:17.645 controller-1 dockerd[12258]: info time="2019-11-04T18:55:17.645361952Z" level=error msg="Error running exec 8abddd16f7a910335dfcddc8a6107d2a449fc847f4f4b1ebfaacd77ddddcdb51 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:55:17.646 controller-1 kubelet[88521]: info W1104 18:55:17.645991 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:55:18.110 controller-1 dockerd[12258]: info time="2019-11-04T18:55:18.110126430Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:18.110 controller-1 dockerd[12258]: info time="2019-11-04T18:55:18.110196558Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:18.159 controller-1 dockerd[12258]: info time="2019-11-04T18:55:18.159467834Z" level=error msg="Error running exec 42d859f26e9b57f56c12c2b38bb87cc56677fd46f7e6e36de59b8434016d6a80 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:55:18.160 controller-1 kubelet[88521]: info W1104 18:55:18.160007 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:55:21.180 controller-1 containerd[12218]: info time="2019-11-04T18:55:21.180351319Z" level=info msg="shim reaped" id=c67a6eb76e57ab0d230dd48bdb4e52ac13f289adb8a98eb0143453db3ccc5ef9 2019-11-04T18:55:21.190 controller-1 dockerd[12258]: info time="2019-11-04T18:55:21.190542084Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:55:21.477 controller-1 kubelet[88521]: info 2019-11-04 18:55:21.477 [INFO][375444] plugin.go 442: Extracted identifiers ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" 2019-11-04T18:55:21.483 controller-1 kubelet[88521]: info 2019-11-04 18:55:21.483 [WARNING][375444] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:55:21.483 controller-1 kubelet[88521]: info 2019-11-04 18:55:21.483 [INFO][375444] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0", GenerateName:"ceph-pools-audit-1572893700-", Namespace:"kube-system", SelfLink:"", UID:"d517f3f5-5117-4c9e-a583-60d5e6883ea4", ResourceVersion:"8151154", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708490503, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ceph-pools-audit", "controller-uid":"0af5efba-80dd-45f8-8c8f-f64857b86d5f", "job-name":"ceph-pools-audit-1572893700", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ceph-pools-audit"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"ceph-pools-audit-1572893700-mjc77", Endpoint:"eth0", IPNetworks:[]string(nil), IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"cali9b1659125dd", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T18:55:21.483 controller-1 kubelet[88521]: info 2019-11-04 18:55:21.483 [INFO][375444] k8s.go 477: Releasing IP address(es) ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" 2019-11-04T18:55:21.483 controller-1 kubelet[88521]: info 2019-11-04 18:55:21.483 [INFO][375444] utils.go 171: Calico CNI releasing IP address ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" 2019-11-04T18:55:21.501 controller-1 kubelet[88521]: info 2019-11-04 18:55:21.501 [INFO][375467] ipam_plugin.go 299: Releasing address using handleID ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" HandleID="chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Workload="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" 2019-11-04T18:55:21.501 controller-1 kubelet[88521]: info 2019-11-04 18:55:21.501 [INFO][375467] ipam.go 1145: Releasing all IPs with handle 'chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9' 2019-11-04T18:55:21.524 controller-1 kubelet[88521]: info 2019-11-04 18:55:21.524 [INFO][375467] ipam_plugin.go 308: Released address using handleID ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" HandleID="chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Workload="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" 2019-11-04T18:55:21.524 controller-1 kubelet[88521]: info 2019-11-04 18:55:21.524 [INFO][375467] ipam_plugin.go 317: Releasing address using workloadID ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" HandleID="chain.dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" Workload="controller--1-k8s-ceph--pools--audit--1572893700--mjc77-eth0" 2019-11-04T18:55:21.524 controller-1 kubelet[88521]: info 2019-11-04 18:55:21.524 [INFO][375467] ipam.go 1145: Releasing all IPs with handle 'kube-system.ceph-pools-audit-1572893700-mjc77' 2019-11-04T18:55:21.526 controller-1 kubelet[88521]: info 2019-11-04 18:55:21.526 [INFO][375444] 
k8s.go 481: Cleaning up netns ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" 2019-11-04T18:55:21.527 controller-1 kubelet[88521]: info 2019-11-04 18:55:21.527 [INFO][375444] network_linux.go 450: Calico CNI deleting device in netns /proc/373030/ns/net ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" 2019-11-04T18:55:21.551 controller-1 kubelet[88521]: info I1104 18:55:21.551295 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ceph-pools-bin" (UniqueName: "kubernetes.io/configmap/d517f3f5-5117-4c9e-a583-60d5e6883ea4-ceph-pools-bin") pod "d517f3f5-5117-4c9e-a583-60d5e6883ea4" (UID: "d517f3f5-5117-4c9e-a583-60d5e6883ea4") 2019-11-04T18:55:21.551 controller-1 kubelet[88521]: info I1104 18:55:21.551337 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "etcceph" (UniqueName: "kubernetes.io/empty-dir/d517f3f5-5117-4c9e-a583-60d5e6883ea4-etcceph") pod "d517f3f5-5117-4c9e-a583-60d5e6883ea4" (UID: "d517f3f5-5117-4c9e-a583-60d5e6883ea4") 2019-11-04T18:55:21.551 controller-1 kubelet[88521]: info I1104 18:55:21.551382 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ceph-pools-audit-token-bsfbw" (UniqueName: "kubernetes.io/secret/d517f3f5-5117-4c9e-a583-60d5e6883ea4-ceph-pools-audit-token-bsfbw") pod "d517f3f5-5117-4c9e-a583-60d5e6883ea4" (UID: "d517f3f5-5117-4c9e-a583-60d5e6883ea4") 2019-11-04T18:55:21.551 controller-1 kubelet[88521]: info I1104 18:55:21.551419 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ceph-etc" (UniqueName: "kubernetes.io/configmap/d517f3f5-5117-4c9e-a583-60d5e6883ea4-ceph-etc") pod "d517f3f5-5117-4c9e-a583-60d5e6883ea4" (UID: "d517f3f5-5117-4c9e-a583-60d5e6883ea4") 2019-11-04T18:55:21.551 controller-1 kubelet[88521]: info W1104 18:55:21.551443 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/d517f3f5-5117-4c9e-a583-60d5e6883ea4/volumes/kubernetes.io~empty-dir/etcceph: ClearQuota called, but quotas disabled 2019-11-04T18:55:21.551 controller-1 kubelet[88521]: info I1104 18:55:21.551567 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d517f3f5-5117-4c9e-a583-60d5e6883ea4-etcceph" (OuterVolumeSpecName: "etcceph") pod "d517f3f5-5117-4c9e-a583-60d5e6883ea4" (UID: "d517f3f5-5117-4c9e-a583-60d5e6883ea4"). InnerVolumeSpecName "etcceph". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" 2019-11-04T18:55:21.000 controller-1 lldpd[12254]: warning removal request for address of fe80::ecee:eeff:feee:eeee%35, but no knowledge of it 2019-11-04T18:55:21.588 controller-1 kubelet[88521]: info I1104 18:55:21.588801 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d517f3f5-5117-4c9e-a583-60d5e6883ea4-ceph-pools-audit-token-bsfbw" (OuterVolumeSpecName: "ceph-pools-audit-token-bsfbw") pod "d517f3f5-5117-4c9e-a583-60d5e6883ea4" (UID: "d517f3f5-5117-4c9e-a583-60d5e6883ea4"). InnerVolumeSpecName "ceph-pools-audit-token-bsfbw". 
PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:55:21.596 controller-1 kubelet[88521]: info W1104 18:55:21.596934 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/d517f3f5-5117-4c9e-a583-60d5e6883ea4/volumes/kubernetes.io~configmap/ceph-pools-bin: ClearQuota called, but quotas disabled 2019-11-04T18:55:21.596 controller-1 kubelet[88521]: info E1104 18:55:21.596958 88521 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/d517f3f5-5117-4c9e-a583-60d5e6883ea4-ceph-etc\" (\"d517f3f5-5117-4c9e-a583-60d5e6883ea4\")" failed. No retries permitted until 2019-11-04 18:55:22.096924656 +0000 UTC m=+1881.157922657 (durationBeforeRetry 500ms). Error: "error cleaning subPath mounts for volume \"ceph-etc\" (UniqueName: \"kubernetes.io/configmap/d517f3f5-5117-4c9e-a583-60d5e6883ea4-ceph-etc\") pod \"d517f3f5-5117-4c9e-a583-60d5e6883ea4\" (UID: \"d517f3f5-5117-4c9e-a583-60d5e6883ea4\") : error deleting /var/lib/kubelet/pods/d517f3f5-5117-4c9e-a583-60d5e6883ea4/volume-subpaths: remove /var/lib/kubelet/pods/d517f3f5-5117-4c9e-a583-60d5e6883ea4/volume-subpaths: no such file or directory" 2019-11-04T18:55:21.597 controller-1 kubelet[88521]: info I1104 18:55:21.597082 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d517f3f5-5117-4c9e-a583-60d5e6883ea4-ceph-pools-bin" (OuterVolumeSpecName: "ceph-pools-bin") pod "d517f3f5-5117-4c9e-a583-60d5e6883ea4" (UID: "d517f3f5-5117-4c9e-a583-60d5e6883ea4"). InnerVolumeSpecName "ceph-pools-bin". PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T18:55:21.617 controller-1 kubelet[88521]: info 2019-11-04 18:55:21.617 [INFO][375444] network_linux.go 467: Calico CNI deleted device in netns /proc/373030/ns/net ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" 2019-11-04T18:55:21.617 controller-1 kubelet[88521]: info 2019-11-04 18:55:21.617 [INFO][375444] k8s.go 493: Teardown processing complete. 
ContainerID="dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" 2019-11-04T18:55:21.651 controller-1 kubelet[88521]: info I1104 18:55:21.651784 88521 reconciler.go:301] Volume detached for volume "etcceph" (UniqueName: "kubernetes.io/empty-dir/d517f3f5-5117-4c9e-a583-60d5e6883ea4-etcceph") on node "controller-1" DevicePath "" 2019-11-04T18:55:21.651 controller-1 kubelet[88521]: info I1104 18:55:21.651811 88521 reconciler.go:301] Volume detached for volume "ceph-pools-audit-token-bsfbw" (UniqueName: "kubernetes.io/secret/d517f3f5-5117-4c9e-a583-60d5e6883ea4-ceph-pools-audit-token-bsfbw") on node "controller-1" DevicePath "" 2019-11-04T18:55:21.651 controller-1 kubelet[88521]: info I1104 18:55:21.651831 88521 reconciler.go:301] Volume detached for volume "ceph-pools-bin" (UniqueName: "kubernetes.io/configmap/d517f3f5-5117-4c9e-a583-60d5e6883ea4-ceph-pools-bin") on node "controller-1" DevicePath "" 2019-11-04T18:55:21.732 controller-1 containerd[12218]: info time="2019-11-04T18:55:21.732163440Z" level=info msg="shim reaped" id=dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9 2019-11-04T18:55:21.742 controller-1 dockerd[12258]: info time="2019-11-04T18:55:21.742163191Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:55:22.153 controller-1 kubelet[88521]: info I1104 18:55:22.153099 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ceph-etc" (UniqueName: "kubernetes.io/configmap/d517f3f5-5117-4c9e-a583-60d5e6883ea4-ceph-etc") pod "d517f3f5-5117-4c9e-a583-60d5e6883ea4" (UID: "d517f3f5-5117-4c9e-a583-60d5e6883ea4") 2019-11-04T18:55:22.153 controller-1 kubelet[88521]: info W1104 18:55:22.153225 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/d517f3f5-5117-4c9e-a583-60d5e6883ea4/volumes/kubernetes.io~configmap/ceph-etc: ClearQuota called, but quotas disabled 2019-11-04T18:55:22.153 controller-1 kubelet[88521]: info I1104 18:55:22.153425 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d517f3f5-5117-4c9e-a583-60d5e6883ea4-ceph-etc" (OuterVolumeSpecName: "ceph-etc") pod "d517f3f5-5117-4c9e-a583-60d5e6883ea4" (UID: "d517f3f5-5117-4c9e-a583-60d5e6883ea4"). InnerVolumeSpecName "ceph-etc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T18:55:22.253 controller-1 kubelet[88521]: info I1104 18:55:22.253416 88521 reconciler.go:301] Volume detached for volume "ceph-etc" (UniqueName: "kubernetes.io/configmap/d517f3f5-5117-4c9e-a583-60d5e6883ea4-ceph-etc") on node "controller-1" DevicePath "" 2019-11-04T18:55:22.390 controller-1 kubelet[88521]: info W1104 18:55:22.390925 88521 pod_container_deletor.go:75] Container "dce1007e145164219dc5d46b4ba7aca015df161056f97e00adeb72da7b0269e9" not found in pod's containers 2019-11-04T18:55:23.000 controller-1 ntpd[87544]: info Deleting interface #38 cali9b1659125dd, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=11 secs 2019-11-04T18:55:27.486 controller-1 containerd[12218]: info time="2019-11-04T18:55:27.486611373Z" level=info msg="shim reaped" id=8523b570e1317123fbbbeb96f7b8729666b1fde3cdc20568572a2686c415d9a6 2019-11-04T18:55:27.496 controller-1 dockerd[12258]: info time="2019-11-04T18:55:27.496556267Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:55:27.596 controller-1 dockerd[12258]: info time="2019-11-04T18:55:27.596061129Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:27.596 controller-1 dockerd[12258]: info time="2019-11-04T18:55:27.596133104Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:27.606 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.606 [INFO][376467] plugin.go 442: Extracted identifiers ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" 2019-11-04T18:55:27.613 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.613 [WARNING][376467] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:55:27.613 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.613 [INFO][376467] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-coredns--6bc668cd76--crh8t-eth0", GenerateName:"coredns-6bc668cd76-", Namespace:"kube-system", SelfLink:"", UID:"b58145a4-0299-407b-8902-4780e9a7b778", ResourceVersion:"8151226", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489977, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns", "k8s-app":"kube-dns", "pod-template-hash":"6bc668cd76", "projectcalico.org/namespace":"kube-system"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"coredns-6bc668cd76-crh8t", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e30a/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidc068a6b120", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}} 2019-11-04T18:55:27.613 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.613 [INFO][376467] k8s.go 477: Releasing IP address(es) ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" 2019-11-04T18:55:27.613 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.613 [INFO][376467] utils.go 171: Calico CNI releasing IP address ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" 2019-11-04T18:55:27.634 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.634 [INFO][376510] ipam_plugin.go 299: Releasing address using handleID ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" HandleID="chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Workload="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" 2019-11-04T18:55:27.634 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.634 [INFO][376510] ipam.go 1145: Releasing all IPs with handle 'chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351' 2019-11-04T18:55:27.649 controller-1 dockerd[12258]: info time="2019-11-04T18:55:27.649908282Z" level=error msg="Error running exec bd2e0b60f0e5a22b5cf38b741f71072471ab5c51673ed4eda8dea0f407b0bc64 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:55:27.650 controller-1 kubelet[88521]: info W1104 18:55:27.650513 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:55:27.655 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.655 [INFO][376510] ipam_plugin.go 308: Released address using handleID ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" 
HandleID="chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Workload="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" 2019-11-04T18:55:27.655 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.655 [INFO][376510] ipam_plugin.go 317: Releasing address using workloadID ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" HandleID="chain.6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" Workload="controller--1-k8s-coredns--6bc668cd76--crh8t-eth0" 2019-11-04T18:55:27.655 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.655 [INFO][376510] ipam.go 1145: Releasing all IPs with handle 'kube-system.coredns-6bc668cd76-crh8t' 2019-11-04T18:55:27.657 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.657 [INFO][376467] k8s.go 481: Cleaning up netns ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" 2019-11-04T18:55:27.658 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.658 [INFO][376467] network_linux.go 450: Calico CNI deleting device in netns /proc/322497/ns/net ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" 2019-11-04T18:55:27.670 controller-1 containerd[12218]: info time="2019-11-04T18:55:27.670282354Z" level=info msg="shim reaped" id=77c1491261b5f1edd6d0b522a1c11031a73c79a22fd00689ec0c94098696244f 2019-11-04T18:55:27.680 controller-1 dockerd[12258]: info time="2019-11-04T18:55:27.680205120Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:55:27.000 controller-1 lldpd[12254]: warning removal request for address of fe80::ecee:eeff:feee:eeee%31, but no knowledge of it 2019-11-04T18:55:27.745 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.744 [INFO][376467] network_linux.go 467: Calico CNI deleted device in netns /proc/322497/ns/net ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" 2019-11-04T18:55:27.745 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.745 [INFO][376467] k8s.go 493: Teardown processing complete. ContainerID="6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351" 2019-11-04T18:55:27.823 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.823 [INFO][376626] plugin.go 442: Extracted identifiers ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:55:27.830 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.830 [WARNING][376626] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:55:27.830 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.830 [INFO][376626] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--elasticsearch--client--1-eth0", GenerateName:"mon-elasticsearch-client-", Namespace:"monitor", SelfLink:"", UID:"5caace26-dd42-45da-8273-1a4ea4e95a86", ResourceVersion:"8151277", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708489984, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"mon-elasticsearch-client", "controller-revision-hash":"mon-elasticsearch-client-7c64d4f4fd", "heritage":"Tiller", "statefulset.kubernetes.io/pod-name":"mon-elasticsearch-client-1", "projectcalico.org/orchestrator":"k8s", "chart":"elasticsearch", "release":"mon-elasticsearch-client", "projectcalico.org/namespace":"monitor", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-elasticsearch-client-1", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e32e/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"cali7eb1b3c61b4", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23f0}, v3.EndpointPort{Name:"transport", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x2454}}}} 2019-11-04T18:55:27.830 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.830 [INFO][376626] k8s.go 477: Releasing IP address(es) ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" 2019-11-04T18:55:27.830 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.830 [INFO][376626] utils.go 171: Calico CNI releasing IP address ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" 2019-11-04T18:55:27.851 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.851 [INFO][376667] ipam_plugin.go 299: Releasing address using handleID ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" HandleID="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:55:27.851 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.851 [INFO][376667] ipam.go 1145: Releasing all IPs with handle 'chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967' 2019-11-04T18:55:27.872 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.872 [INFO][376667] ipam_plugin.go 308: Released address using handleID ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" HandleID="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:55:27.872 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.872 [INFO][376667] ipam_plugin.go 317: Releasing address using workloadID ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" HandleID="chain.0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" 
Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T18:55:27.872 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.872 [INFO][376667] ipam.go 1145: Releasing all IPs with handle 'monitor.mon-elasticsearch-client-1' 2019-11-04T18:55:27.873 controller-1 containerd[12218]: info time="2019-11-04T18:55:27.873642128Z" level=info msg="shim reaped" id=6714c6a908d6da2a19409f9464fd7280fd8ba17b91bd7eb65dec0a64d4a77351 2019-11-04T18:55:27.875 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.874 [INFO][376626] k8s.go 481: Cleaning up netns ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" 2019-11-04T18:55:27.875 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.875 [INFO][376626] network_linux.go 450: Calico CNI deleting device in netns /proc/322490/ns/net ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" 2019-11-04T18:55:27.883 controller-1 dockerd[12258]: info time="2019-11-04T18:55:27.883538163Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:55:27.000 controller-1 lldpd[12254]: warning removal request for address of fe80::ecee:eeff:feee:eeee%33, but no knowledge of it 2019-11-04T18:55:27.953 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.953 [INFO][376626] network_linux.go 467: Calico CNI deleted device in netns /proc/322490/ns/net ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" 2019-11-04T18:55:27.953 controller-1 kubelet[88521]: info 2019-11-04 18:55:27.953 [INFO][376626] k8s.go 493: Teardown processing complete. ContainerID="0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967" 2019-11-04T18:55:28.070 controller-1 containerd[12218]: info time="2019-11-04T18:55:28.070317598Z" level=info msg="shim reaped" id=0fbfedd98c4051fdb72a444e84170622ba46a03010efd1d2545c4e29597f8967 2019-11-04T18:55:28.080 controller-1 dockerd[12258]: info time="2019-11-04T18:55:28.080127265Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:55:28.104 controller-1 dockerd[12258]: info time="2019-11-04T18:55:28.104508613Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:28.104 controller-1 dockerd[12258]: info time="2019-11-04T18:55:28.104528375Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:28.151 controller-1 dockerd[12258]: info time="2019-11-04T18:55:28.151809880Z" level=error msg="Error running exec cfc78ddbfa6384a6612cca75809026d5a7c727bdb2c6082fe88368ecdcdcc7ef in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:55:28.152 controller-1 kubelet[88521]: info W1104 18:55:28.152322 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:55:28.354 controller-1 kubelet[88521]: info W1104 18:55:28.354734 88521 prober.go:108] No ref for container "docker://8523b570e1317123fbbbeb96f7b8729666b1fde3cdc20568572a2686c415d9a6" (coredns-6bc668cd76-crh8t_kube-system(b58145a4-0299-407b-8902-4780e9a7b778):coredns) 2019-11-04T18:55:28.476 controller-1 kubelet[88521]: info E1104 18:55:28.476562 88521 remote_runtime.go:295] ContainerStatus "77c1491261b5f1edd6d0b522a1c11031a73c79a22fd00689ec0c94098696244f" from runtime service failed: rpc error: code 
= Unknown desc = Error: No such container: 77c1491261b5f1edd6d0b522a1c11031a73c79a22fd00689ec0c94098696244f 2019-11-04T18:55:28.477 controller-1 kubelet[88521]: info E1104 18:55:28.476952 88521 remote_runtime.go:295] ContainerStatus "15a19b89ae9a2282f17851a961067c9f61a578c6bc8482e03202ed80d0d162c4" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 15a19b89ae9a2282f17851a961067c9f61a578c6bc8482e03202ed80d0d162c4 2019-11-04T18:55:28.483 controller-1 kubelet[88521]: info E1104 18:55:28.483713 88521 remote_runtime.go:295] ContainerStatus "8523b570e1317123fbbbeb96f7b8729666b1fde3cdc20568572a2686c415d9a6" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 8523b570e1317123fbbbeb96f7b8729666b1fde3cdc20568572a2686c415d9a6 2019-11-04T18:55:28.567 controller-1 kubelet[88521]: info I1104 18:55:28.567346 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/5caace26-dd42-45da-8273-1a4ea4e95a86-default-token-88gsr") pod "5caace26-dd42-45da-8273-1a4ea4e95a86" (UID: "5caace26-dd42-45da-8273-1a4ea4e95a86") 2019-11-04T18:55:28.567 controller-1 kubelet[88521]: info I1104 18:55:28.567385 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b58145a4-0299-407b-8902-4780e9a7b778-config-volume") pod "b58145a4-0299-407b-8902-4780e9a7b778" (UID: "b58145a4-0299-407b-8902-4780e9a7b778") 2019-11-04T18:55:28.567 controller-1 kubelet[88521]: info I1104 18:55:28.567414 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "coredns-token-x97rb" (UniqueName: "kubernetes.io/secret/b58145a4-0299-407b-8902-4780e9a7b778-coredns-token-x97rb") pod "b58145a4-0299-407b-8902-4780e9a7b778" (UID: "b58145a4-0299-407b-8902-4780e9a7b778") 2019-11-04T18:55:28.567 controller-1 kubelet[88521]: info W1104 18:55:28.567602 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/b58145a4-0299-407b-8902-4780e9a7b778/volumes/kubernetes.io~configmap/config-volume: ClearQuota called, but quotas disabled 2019-11-04T18:55:28.567 controller-1 kubelet[88521]: info I1104 18:55:28.567782 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b58145a4-0299-407b-8902-4780e9a7b778-config-volume" (OuterVolumeSpecName: "config-volume") pod "b58145a4-0299-407b-8902-4780e9a7b778" (UID: "b58145a4-0299-407b-8902-4780e9a7b778"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T18:55:28.585 controller-1 kubelet[88521]: info I1104 18:55:28.585869 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58145a4-0299-407b-8902-4780e9a7b778-coredns-token-x97rb" (OuterVolumeSpecName: "coredns-token-x97rb") pod "b58145a4-0299-407b-8902-4780e9a7b778" (UID: "b58145a4-0299-407b-8902-4780e9a7b778"). InnerVolumeSpecName "coredns-token-x97rb". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:55:28.585 controller-1 kubelet[88521]: info I1104 18:55:28.585887 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5caace26-dd42-45da-8273-1a4ea4e95a86-default-token-88gsr" (OuterVolumeSpecName: "default-token-88gsr") pod "5caace26-dd42-45da-8273-1a4ea4e95a86" (UID: "5caace26-dd42-45da-8273-1a4ea4e95a86"). InnerVolumeSpecName "default-token-88gsr". 
PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:55:28.667 controller-1 kubelet[88521]: info I1104 18:55:28.667756 88521 reconciler.go:301] Volume detached for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/5caace26-dd42-45da-8273-1a4ea4e95a86-default-token-88gsr") on node "controller-1" DevicePath "" 2019-11-04T18:55:28.667 controller-1 kubelet[88521]: info I1104 18:55:28.667789 88521 reconciler.go:301] Volume detached for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b58145a4-0299-407b-8902-4780e9a7b778-config-volume") on node "controller-1" DevicePath "" 2019-11-04T18:55:28.667 controller-1 kubelet[88521]: info I1104 18:55:28.667796 88521 reconciler.go:301] Volume detached for volume "coredns-token-x97rb" (UniqueName: "kubernetes.io/secret/b58145a4-0299-407b-8902-4780e9a7b778-coredns-token-x97rb") on node "controller-1" DevicePath "" 2019-11-04T18:55:29.000 controller-1 ntpd[87544]: info Deleting interface #37 calidc068a6b120, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=361 secs 2019-11-04T18:55:29.000 controller-1 ntpd[87544]: info Deleting interface #35 cali7eb1b3c61b4, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=361 secs 2019-11-04T18:55:34.670 controller-1 containerd[12218]: info time="2019-11-04T18:55:34.670520563Z" level=info msg="shim reaped" id=04ffbb0b8469ff115606dbecd55963639c00d56f9b22e28e3631b033a70f01db 2019-11-04T18:55:34.680 controller-1 dockerd[12258]: info time="2019-11-04T18:55:34.680312146Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:55:34.816 controller-1 containerd[12218]: info time="2019-11-04T18:55:34.816043250Z" level=info msg="shim reaped" id=6974fb1e83243b99ccd185e85e1e1c0317f2f805518750ac24a94fc3b8014e9a 2019-11-04T18:55:34.826 controller-1 dockerd[12258]: info time="2019-11-04T18:55:34.825995267Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:55:35.554 controller-1 kubelet[88521]: info E1104 18:55:35.554922 88521 remote_runtime.go:295] ContainerStatus "04ffbb0b8469ff115606dbecd55963639c00d56f9b22e28e3631b033a70f01db" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 04ffbb0b8469ff115606dbecd55963639c00d56f9b22e28e3631b033a70f01db 2019-11-04T18:55:35.682 controller-1 kubelet[88521]: info I1104 18:55:35.682817 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "patterns" (UniqueName: "kubernetes.io/configmap/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-patterns") pod "bec75c2c-6de0-4ac4-8746-8cc48dc32f82" (UID: "bec75c2c-6de0-4ac4-8746-8cc48dc32f82") 2019-11-04T18:55:35.682 controller-1 kubelet[88521]: info I1104 18:55:35.682851 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "pipeline" (UniqueName: "kubernetes.io/configmap/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-pipeline") pod "bec75c2c-6de0-4ac4-8746-8cc48dc32f82" (UID: "bec75c2c-6de0-4ac4-8746-8cc48dc32f82") 2019-11-04T18:55:35.682 controller-1 kubelet[88521]: info I1104 18:55:35.682881 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-default-token-88gsr") pod "bec75c2c-6de0-4ac4-8746-8cc48dc32f82" (UID: "bec75c2c-6de0-4ac4-8746-8cc48dc32f82") 2019-11-04T18:55:35.682 controller-1 kubelet[88521]: info I1104 
18:55:35.682916 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "files" (UniqueName: "kubernetes.io/configmap/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-files") pod "bec75c2c-6de0-4ac4-8746-8cc48dc32f82" (UID: "bec75c2c-6de0-4ac4-8746-8cc48dc32f82") 2019-11-04T18:55:35.682 controller-1 kubelet[88521]: info I1104 18:55:35.682942 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "data" (UniqueName: "kubernetes.io/empty-dir/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-data") pod "bec75c2c-6de0-4ac4-8746-8cc48dc32f82" (UID: "bec75c2c-6de0-4ac4-8746-8cc48dc32f82") 2019-11-04T18:55:35.682 controller-1 kubelet[88521]: info W1104 18:55:35.682947 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/bec75c2c-6de0-4ac4-8746-8cc48dc32f82/volumes/kubernetes.io~configmap/pipeline: ClearQuota called, but quotas disabled 2019-11-04T18:55:35.683 controller-1 kubelet[88521]: info W1104 18:55:35.683023 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/bec75c2c-6de0-4ac4-8746-8cc48dc32f82/volumes/kubernetes.io~empty-dir/data: ClearQuota called, but quotas disabled 2019-11-04T18:55:35.683 controller-1 kubelet[88521]: info W1104 18:55:35.683025 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/bec75c2c-6de0-4ac4-8746-8cc48dc32f82/volumes/kubernetes.io~configmap/patterns: ClearQuota called, but quotas disabled 2019-11-04T18:55:35.683 controller-1 kubelet[88521]: info W1104 18:55:35.683076 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/bec75c2c-6de0-4ac4-8746-8cc48dc32f82/volumes/kubernetes.io~configmap/files: ClearQuota called, but quotas disabled 2019-11-04T18:55:35.683 controller-1 kubelet[88521]: info I1104 18:55:35.683175 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-patterns" (OuterVolumeSpecName: "patterns") pod "bec75c2c-6de0-4ac4-8746-8cc48dc32f82" (UID: "bec75c2c-6de0-4ac4-8746-8cc48dc32f82"). InnerVolumeSpecName "patterns". PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T18:55:35.683 controller-1 kubelet[88521]: info I1104 18:55:35.683187 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-pipeline" (OuterVolumeSpecName: "pipeline") pod "bec75c2c-6de0-4ac4-8746-8cc48dc32f82" (UID: "bec75c2c-6de0-4ac4-8746-8cc48dc32f82"). InnerVolumeSpecName "pipeline". PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T18:55:35.683 controller-1 kubelet[88521]: info I1104 18:55:35.683268 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-data" (OuterVolumeSpecName: "data") pod "bec75c2c-6de0-4ac4-8746-8cc48dc32f82" (UID: "bec75c2c-6de0-4ac4-8746-8cc48dc32f82"). InnerVolumeSpecName "data". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" 2019-11-04T18:55:35.683 controller-1 kubelet[88521]: info I1104 18:55:35.683314 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-files" (OuterVolumeSpecName: "files") pod "bec75c2c-6de0-4ac4-8746-8cc48dc32f82" (UID: "bec75c2c-6de0-4ac4-8746-8cc48dc32f82"). InnerVolumeSpecName "files". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T18:55:35.703 controller-1 kubelet[88521]: info I1104 18:55:35.703788 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-default-token-88gsr" (OuterVolumeSpecName: "default-token-88gsr") pod "bec75c2c-6de0-4ac4-8746-8cc48dc32f82" (UID: "bec75c2c-6de0-4ac4-8746-8cc48dc32f82"). InnerVolumeSpecName "default-token-88gsr". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:55:35.783 controller-1 kubelet[88521]: info I1104 18:55:35.783154 88521 reconciler.go:301] Volume detached for volume "data" (UniqueName: "kubernetes.io/empty-dir/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-data") on node "controller-1" DevicePath "" 2019-11-04T18:55:35.783 controller-1 kubelet[88521]: info I1104 18:55:35.783171 88521 reconciler.go:301] Volume detached for volume "patterns" (UniqueName: "kubernetes.io/configmap/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-patterns") on node "controller-1" DevicePath "" 2019-11-04T18:55:35.783 controller-1 kubelet[88521]: info I1104 18:55:35.783187 88521 reconciler.go:301] Volume detached for volume "pipeline" (UniqueName: "kubernetes.io/configmap/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-pipeline") on node "controller-1" DevicePath "" 2019-11-04T18:55:35.783 controller-1 kubelet[88521]: info I1104 18:55:35.783195 88521 reconciler.go:301] Volume detached for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-default-token-88gsr") on node "controller-1" DevicePath "" 2019-11-04T18:55:35.783 controller-1 kubelet[88521]: info I1104 18:55:35.783201 88521 reconciler.go:301] Volume detached for volume "files" (UniqueName: "kubernetes.io/configmap/bec75c2c-6de0-4ac4-8746-8cc48dc32f82-files") on node "controller-1" DevicePath "" 2019-11-04T18:55:35.902 controller-1 kubelet[88521]: info E1104 18:55:35.902555 88521 kubelet_pods.go:1093] Failed killing the pod "mon-logstash-0": failed to "KillContainer" for "logstash" with KillContainerError: "rpc error: code = Unknown desc = Error: No such container: 04ffbb0b8469ff115606dbecd55963639c00d56f9b22e28e3631b033a70f01db" 2019-11-04T18:55:37.596 controller-1 dockerd[12258]: info time="2019-11-04T18:55:37.596782944Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:37.596 controller-1 dockerd[12258]: info time="2019-11-04T18:55:37.596813541Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:37.644 controller-1 dockerd[12258]: info time="2019-11-04T18:55:37.643960462Z" level=error msg="Error running exec 301fc1940771a1af4e34e7da0faa03886ab202a4586fa8962fc67e90d654720b in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:55:37.644 controller-1 kubelet[88521]: info W1104 18:55:37.644539 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:55:38.108 controller-1 dockerd[12258]: info time="2019-11-04T18:55:38.108617255Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:38.108 controller-1 dockerd[12258]: info time="2019-11-04T18:55:38.108619281Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:38.156 controller-1 dockerd[12258]: info time="2019-11-04T18:55:38.156164173Z" level=error msg="Error running 
exec e2e82178eb53da28b23e44df143ddb4540df2d64c228fa75333948b1e52ffc87 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:55:38.156 controller-1 kubelet[88521]: info W1104 18:55:38.156701 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:55:41.183 controller-1 containerd[12218]: info time="2019-11-04T18:55:41.183467017Z" level=info msg="shim reaped" id=8a2f1ff5003ccb966a647d0cf3b53f78324b6ed9af44db0d7fde51297c2583af 2019-11-04T18:55:41.193 controller-1 dockerd[12258]: info time="2019-11-04T18:55:41.193526969Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:55:41.312 controller-1 kubelet[88521]: info 2019-11-04 18:55:41.312 [INFO][379454] plugin.go 442: Extracted identifiers ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" 2019-11-04T18:55:41.318 controller-1 kubelet[88521]: info 2019-11-04 18:55:41.318 [WARNING][379454] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T18:55:41.318 controller-1 kubelet[88521]: info 2019-11-04 18:55:41.318 [INFO][379454] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0", GenerateName:"mon-nginx-ingress-controller-", Namespace:"monitor", SelfLink:"", UID:"5f94b1d5-a0a0-4c81-926a-07d23af72b93", ResourceVersion:"8151445", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708490158, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"component":"controller", "controller-revision-hash":"866b74fd9d", "pod-template-generation":"1", "release":"mon-nginx-ingress", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"mon-nginx-ingress", "app":"nginx-ingress"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-nginx-ingress-controller-kgq85", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e32c/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-nginx-ingress"}, InterfaceName:"calidf007395be0", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x50}, v3.EndpointPort{Name:"https", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1bb}}}} 2019-11-04T18:55:41.318 controller-1 kubelet[88521]: info 2019-11-04 18:55:41.318 [INFO][379454] k8s.go 477: Releasing IP address(es) ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" 2019-11-04T18:55:41.318 controller-1 kubelet[88521]: 
info 2019-11-04 18:55:41.318 [INFO][379454] utils.go 171: Calico CNI releasing IP address ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" 2019-11-04T18:55:41.336 controller-1 kubelet[88521]: info 2019-11-04 18:55:41.336 [INFO][379475] ipam_plugin.go 299: Releasing address using handleID ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" HandleID="chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Workload="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" 2019-11-04T18:55:41.336 controller-1 kubelet[88521]: info 2019-11-04 18:55:41.336 [INFO][379475] ipam.go 1145: Releasing all IPs with handle 'chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751' 2019-11-04T18:55:41.359 controller-1 kubelet[88521]: info 2019-11-04 18:55:41.358 [INFO][379475] ipam_plugin.go 308: Released address using handleID ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" HandleID="chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Workload="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" 2019-11-04T18:55:41.359 controller-1 kubelet[88521]: info 2019-11-04 18:55:41.358 [INFO][379475] ipam_plugin.go 317: Releasing address using workloadID ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" HandleID="chain.ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" Workload="controller--1-k8s-mon--nginx--ingress--controller--kgq85-eth0" 2019-11-04T18:55:41.359 controller-1 kubelet[88521]: info 2019-11-04 18:55:41.359 [INFO][379475] ipam.go 1145: Releasing all IPs with handle 'monitor.mon-nginx-ingress-controller-kgq85' 2019-11-04T18:55:41.361 controller-1 kubelet[88521]: info 2019-11-04 18:55:41.361 [INFO][379454] k8s.go 481: Cleaning up netns ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" 2019-11-04T18:55:41.362 controller-1 kubelet[88521]: info 2019-11-04 18:55:41.362 [INFO][379454] network_linux.go 450: Calico CNI deleting device in netns /proc/323008/ns/net ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" 2019-11-04T18:55:41.000 controller-1 lldpd[12254]: warning removal request for address of fe80::ecee:eeff:feee:eeee%34, but no knowledge of it 2019-11-04T18:55:41.440 controller-1 kubelet[88521]: info 2019-11-04 18:55:41.440 [INFO][379454] network_linux.go 467: Calico CNI deleted device in netns /proc/323008/ns/net ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" 2019-11-04T18:55:41.441 controller-1 kubelet[88521]: info 2019-11-04 18:55:41.440 [INFO][379454] k8s.go 493: Teardown processing complete. 
ContainerID="ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751" 2019-11-04T18:55:41.558 controller-1 containerd[12218]: info time="2019-11-04T18:55:41.558601653Z" level=info msg="shim reaped" id=ec2c7b176e753f49da9e19168dc6dc69c5447fcbac2c699f85517eed8c27b751 2019-11-04T18:55:41.568 controller-1 dockerd[12258]: info time="2019-11-04T18:55:41.568563962Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:55:41.623 controller-1 kubelet[88521]: info E1104 18:55:41.623572 88521 remote_runtime.go:295] ContainerStatus "8a2f1ff5003ccb966a647d0cf3b53f78324b6ed9af44db0d7fde51297c2583af" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 8a2f1ff5003ccb966a647d0cf3b53f78324b6ed9af44db0d7fde51297c2583af 2019-11-04T18:55:41.795 controller-1 kubelet[88521]: info I1104 18:55:41.795232 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "mon-nginx-ingress-token-dgbmq" (UniqueName: "kubernetes.io/secret/5f94b1d5-a0a0-4c81-926a-07d23af72b93-mon-nginx-ingress-token-dgbmq") pod "5f94b1d5-a0a0-4c81-926a-07d23af72b93" (UID: "5f94b1d5-a0a0-4c81-926a-07d23af72b93") 2019-11-04T18:55:41.804 controller-1 kubelet[88521]: info I1104 18:55:41.804851 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5f94b1d5-a0a0-4c81-926a-07d23af72b93-mon-nginx-ingress-token-dgbmq" (OuterVolumeSpecName: "mon-nginx-ingress-token-dgbmq") pod "5f94b1d5-a0a0-4c81-926a-07d23af72b93" (UID: "5f94b1d5-a0a0-4c81-926a-07d23af72b93"). InnerVolumeSpecName "mon-nginx-ingress-token-dgbmq". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:55:41.895 controller-1 kubelet[88521]: info I1104 18:55:41.895541 88521 reconciler.go:301] Volume detached for volume "mon-nginx-ingress-token-dgbmq" (UniqueName: "kubernetes.io/secret/5f94b1d5-a0a0-4c81-926a-07d23af72b93-mon-nginx-ingress-token-dgbmq") on node "controller-1" DevicePath "" 2019-11-04T18:55:43.000 controller-1 ntpd[87544]: info Deleting interface #34 calidf007395be0, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=375 secs 2019-11-04T18:55:47.599 controller-1 dockerd[12258]: info time="2019-11-04T18:55:47.599305791Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:47.599 controller-1 dockerd[12258]: info time="2019-11-04T18:55:47.599332832Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:47.646 controller-1 dockerd[12258]: info time="2019-11-04T18:55:47.646831061Z" level=error msg="Error running exec 31e7d0a29606b4a5ce27cd54ff8a67e703de6afb24d83e42b83d81658a67602f in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:55:47.647 controller-1 kubelet[88521]: info W1104 18:55:47.647571 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:55:48.109 controller-1 dockerd[12258]: info time="2019-11-04T18:55:48.109322855Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:48.109 controller-1 dockerd[12258]: info time="2019-11-04T18:55:48.109362604Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:48.159 controller-1 dockerd[12258]: info time="2019-11-04T18:55:48.159312001Z" level=error msg="Error 
running exec aa95f46a35608418c0f8693de57aeff432ae9540c5c26b84f279fee46017593d in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:55:48.159 controller-1 kubelet[88521]: info W1104 18:55:48.159822 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:55:57.386 controller-1 dockerd[12258]: info time="2019-11-04T18:55:57.386605594Z" level=info msg="Container a69f9ca15369645a5c2bc3dfea879321c92bdc066a98c1ef6ac02b4b5e776e88 failed to exit within 30 seconds of signal 15 - using the force" 2019-11-04T18:55:57.397 controller-1 dockerd[12258]: info time="2019-11-04T18:55:57.397292721Z" level=info msg="Container 9146134a9f94357aac34c85b725b6a0025f5e4c8b5dd27c12968626cc4b9c5ce failed to exit within 30 seconds of signal 15 - using the force" 2019-11-04T18:55:57.510 controller-1 containerd[12218]: info time="2019-11-04T18:55:57.510333950Z" level=info msg="shim reaped" id=a69f9ca15369645a5c2bc3dfea879321c92bdc066a98c1ef6ac02b4b5e776e88 2019-11-04T18:55:57.520 controller-1 dockerd[12258]: info time="2019-11-04T18:55:57.520166804Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:55:57.522 controller-1 containerd[12218]: info time="2019-11-04T18:55:57.522087454Z" level=info msg="shim reaped" id=9146134a9f94357aac34c85b725b6a0025f5e4c8b5dd27c12968626cc4b9c5ce 2019-11-04T18:55:57.531 controller-1 dockerd[12258]: info time="2019-11-04T18:55:57.531901013Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:55:57.592 controller-1 dockerd[12258]: info time="2019-11-04T18:55:57.592103825Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:57.592 controller-1 dockerd[12258]: info time="2019-11-04T18:55:57.592126214Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:57.640 controller-1 dockerd[12258]: info time="2019-11-04T18:55:57.640622728Z" level=error msg="Error running exec 86ff849f5cfd664a83cf86a247bb64f087d365b15c34f666331acc0a119a7af8 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:55:57.641 controller-1 kubelet[88521]: info W1104 18:55:57.641202 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:55:57.654 controller-1 containerd[12218]: info time="2019-11-04T18:55:57.654364634Z" level=info msg="shim reaped" id=8cb6835371d61133bd9872f49ffc38cd6d57a909ba2acdea16262f951fb5e9f4 2019-11-04T18:55:57.664 controller-1 dockerd[12258]: info time="2019-11-04T18:55:57.664337704Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:55:57.669 controller-1 containerd[12218]: info time="2019-11-04T18:55:57.669178878Z" level=info msg="shim reaped" id=ab3318ebfde3c69123cd381aa7327e78543e0da02abfedff62e22edd9496de8c 2019-11-04T18:55:57.679 controller-1 dockerd[12258]: info time="2019-11-04T18:55:57.679045255Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T18:55:57.789 controller-1 kubelet[88521]: 
info E1104 18:55:57.789960 88521 remote_runtime.go:295] ContainerStatus "a69f9ca15369645a5c2bc3dfea879321c92bdc066a98c1ef6ac02b4b5e776e88" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: a69f9ca15369645a5c2bc3dfea879321c92bdc066a98c1ef6ac02b4b5e776e88 2019-11-04T18:55:57.796 controller-1 kubelet[88521]: info E1104 18:55:57.796464 88521 remote_runtime.go:295] ContainerStatus "9146134a9f94357aac34c85b725b6a0025f5e4c8b5dd27c12968626cc4b9c5ce" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 9146134a9f94357aac34c85b725b6a0025f5e4c8b5dd27c12968626cc4b9c5ce 2019-11-04T18:55:57.901 controller-1 kubelet[88521]: info E1104 18:55:57.901853 88521 kubelet_pods.go:1093] Failed killing the pod "kube-multus-ds-amd64-l97hp": failed to "KillContainer" for "kube-multus" with KillContainerError: "rpc error: code = Unknown desc = Error: No such container: a69f9ca15369645a5c2bc3dfea879321c92bdc066a98c1ef6ac02b4b5e776e88" 2019-11-04T18:55:57.901 controller-1 kubelet[88521]: info E1104 18:55:57.901861 88521 kubelet_pods.go:1093] Failed killing the pod "kube-sriov-cni-ds-amd64-hwc5l": failed to "KillContainer" for "kube-sriov-cni" with KillContainerError: "rpc error: code = Unknown desc = Error: No such container: 9146134a9f94357aac34c85b725b6a0025f5e4c8b5dd27c12968626cc4b9c5ce" 2019-11-04T18:55:57.927 controller-1 kubelet[88521]: info I1104 18:55:57.927661 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "cni" (UniqueName: "kubernetes.io/host-path/59a873c6-47e3-4d1f-91dc-44d027d29903-cni") pod "59a873c6-47e3-4d1f-91dc-44d027d29903" (UID: "59a873c6-47e3-4d1f-91dc-44d027d29903") 2019-11-04T18:55:57.927 controller-1 kubelet[88521]: info I1104 18:55:57.927704 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59a873c6-47e3-4d1f-91dc-44d027d29903-cni" (OuterVolumeSpecName: "cni") pod "59a873c6-47e3-4d1f-91dc-44d027d29903" (UID: "59a873c6-47e3-4d1f-91dc-44d027d29903"). InnerVolumeSpecName "cni". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" 2019-11-04T18:55:57.927 controller-1 kubelet[88521]: info I1104 18:55:57.927785 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "default-token-jxtxx" (UniqueName: "kubernetes.io/secret/e88b6292-68ed-43ed-be3e-4667434abb79-default-token-jxtxx") pod "e88b6292-68ed-43ed-be3e-4667434abb79" (UID: "e88b6292-68ed-43ed-be3e-4667434abb79") 2019-11-04T18:55:57.927 controller-1 kubelet[88521]: info I1104 18:55:57.927840 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "cnibin" (UniqueName: "kubernetes.io/host-path/59a873c6-47e3-4d1f-91dc-44d027d29903-cnibin") pod "59a873c6-47e3-4d1f-91dc-44d027d29903" (UID: "59a873c6-47e3-4d1f-91dc-44d027d29903") 2019-11-04T18:55:57.927 controller-1 kubelet[88521]: info I1104 18:55:57.927882 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "cnibin" (UniqueName: "kubernetes.io/host-path/e88b6292-68ed-43ed-be3e-4667434abb79-cnibin") pod "e88b6292-68ed-43ed-be3e-4667434abb79" (UID: "e88b6292-68ed-43ed-be3e-4667434abb79") 2019-11-04T18:55:57.927 controller-1 kubelet[88521]: info I1104 18:55:57.927920 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "multus-cfg" (UniqueName: "kubernetes.io/configmap/59a873c6-47e3-4d1f-91dc-44d027d29903-multus-cfg") pod "59a873c6-47e3-4d1f-91dc-44d027d29903" (UID: "59a873c6-47e3-4d1f-91dc-44d027d29903") 2019-11-04T18:55:57.927 controller-1 kubelet[88521]: info I1104 18:55:57.927918 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/59a873c6-47e3-4d1f-91dc-44d027d29903-cnibin" (OuterVolumeSpecName: "cnibin") pod "59a873c6-47e3-4d1f-91dc-44d027d29903" (UID: "59a873c6-47e3-4d1f-91dc-44d027d29903"). InnerVolumeSpecName "cnibin". PluginName "kubernetes.io/host-path", VolumeGidValue "" 2019-11-04T18:55:57.927 controller-1 kubelet[88521]: info I1104 18:55:57.927956 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "multus-token-dtj6m" (UniqueName: "kubernetes.io/secret/59a873c6-47e3-4d1f-91dc-44d027d29903-multus-token-dtj6m") pod "59a873c6-47e3-4d1f-91dc-44d027d29903" (UID: "59a873c6-47e3-4d1f-91dc-44d027d29903") 2019-11-04T18:55:57.928 controller-1 kubelet[88521]: info I1104 18:55:57.927978 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e88b6292-68ed-43ed-be3e-4667434abb79-cnibin" (OuterVolumeSpecName: "cnibin") pod "e88b6292-68ed-43ed-be3e-4667434abb79" (UID: "e88b6292-68ed-43ed-be3e-4667434abb79"). InnerVolumeSpecName "cnibin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" 2019-11-04T18:55:57.928 controller-1 kubelet[88521]: info W1104 18:55:57.928013 88521 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/59a873c6-47e3-4d1f-91dc-44d027d29903/volumes/kubernetes.io~configmap/multus-cfg: ClearQuota called, but quotas disabled 2019-11-04T18:55:57.928 controller-1 kubelet[88521]: info I1104 18:55:57.928059 88521 reconciler.go:301] Volume detached for volume "cni" (UniqueName: "kubernetes.io/host-path/59a873c6-47e3-4d1f-91dc-44d027d29903-cni") on node "controller-1" DevicePath "" 2019-11-04T18:55:57.928 controller-1 kubelet[88521]: info I1104 18:55:57.928069 88521 reconciler.go:301] Volume detached for volume "cnibin" (UniqueName: "kubernetes.io/host-path/59a873c6-47e3-4d1f-91dc-44d027d29903-cnibin") on node "controller-1" DevicePath "" 2019-11-04T18:55:57.928 controller-1 kubelet[88521]: info I1104 18:55:57.928076 88521 reconciler.go:301] Volume detached for volume "cnibin" (UniqueName: "kubernetes.io/host-path/e88b6292-68ed-43ed-be3e-4667434abb79-cnibin") on node "controller-1" DevicePath "" 2019-11-04T18:55:57.928 controller-1 kubelet[88521]: info I1104 18:55:57.928211 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59a873c6-47e3-4d1f-91dc-44d027d29903-multus-cfg" (OuterVolumeSpecName: "multus-cfg") pod "59a873c6-47e3-4d1f-91dc-44d027d29903" (UID: "59a873c6-47e3-4d1f-91dc-44d027d29903"). InnerVolumeSpecName "multus-cfg". PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T18:55:57.939 controller-1 kubelet[88521]: info I1104 18:55:57.938967 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59a873c6-47e3-4d1f-91dc-44d027d29903-multus-token-dtj6m" (OuterVolumeSpecName: "multus-token-dtj6m") pod "59a873c6-47e3-4d1f-91dc-44d027d29903" (UID: "59a873c6-47e3-4d1f-91dc-44d027d29903"). InnerVolumeSpecName "multus-token-dtj6m". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:55:57.939 controller-1 kubelet[88521]: info I1104 18:55:57.938968 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e88b6292-68ed-43ed-be3e-4667434abb79-default-token-jxtxx" (OuterVolumeSpecName: "default-token-jxtxx") pod "e88b6292-68ed-43ed-be3e-4667434abb79" (UID: "e88b6292-68ed-43ed-be3e-4667434abb79"). InnerVolumeSpecName "default-token-jxtxx". 
PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T18:55:58.028 controller-1 kubelet[88521]: info I1104 18:55:58.028331 88521 reconciler.go:301] Volume detached for volume "multus-token-dtj6m" (UniqueName: "kubernetes.io/secret/59a873c6-47e3-4d1f-91dc-44d027d29903-multus-token-dtj6m") on node "controller-1" DevicePath "" 2019-11-04T18:55:58.028 controller-1 kubelet[88521]: info I1104 18:55:58.028347 88521 reconciler.go:301] Volume detached for volume "default-token-jxtxx" (UniqueName: "kubernetes.io/secret/e88b6292-68ed-43ed-be3e-4667434abb79-default-token-jxtxx") on node "controller-1" DevicePath "" 2019-11-04T18:55:58.028 controller-1 kubelet[88521]: info I1104 18:55:58.028355 88521 reconciler.go:301] Volume detached for volume "multus-cfg" (UniqueName: "kubernetes.io/configmap/59a873c6-47e3-4d1f-91dc-44d027d29903-multus-cfg") on node "controller-1" DevicePath "" 2019-11-04T18:55:58.103 controller-1 dockerd[12258]: info time="2019-11-04T18:55:58.103925004Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:58.103 controller-1 dockerd[12258]: info time="2019-11-04T18:55:58.103924766Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:55:58.149 controller-1 dockerd[12258]: info time="2019-11-04T18:55:58.149560650Z" level=error msg="Error running exec 84ae13ca816789c69e06b7ac4c5a0b106314cf4a83898eb6f47de7d3ead46794 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:55:58.149 controller-1 kubelet[88521]: info W1104 18:55:58.149940 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:56:07.593 controller-1 dockerd[12258]: info time="2019-11-04T18:56:07.593569956Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:07.593 controller-1 dockerd[12258]: info time="2019-11-04T18:56:07.593573768Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:07.643 controller-1 dockerd[12258]: info time="2019-11-04T18:56:07.643280829Z" level=error msg="Error running exec be0145d99892ab43244b248c81640ac8c9c75276d9fbd153b07e3b398bdb6656 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:56:07.643 controller-1 kubelet[88521]: info W1104 18:56:07.643835 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:56:08.105 controller-1 dockerd[12258]: info time="2019-11-04T18:56:08.105171546Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:08.105 controller-1 dockerd[12258]: info time="2019-11-04T18:56:08.105205151Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:08.151 controller-1 dockerd[12258]: info time="2019-11-04T18:56:08.151651541Z" level=error msg="Error running exec 58ce9a73a68c4e374014bb92da0c05ee9736f21a697853a8557c3215c2196a18 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:56:08.152 controller-1 kubelet[88521]: info W1104 18:56:08.152055 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" 
(mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:56:17.594 controller-1 dockerd[12258]: info time="2019-11-04T18:56:17.594734510Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:17.594 controller-1 dockerd[12258]: info time="2019-11-04T18:56:17.594731348Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:17.641 controller-1 dockerd[12258]: info time="2019-11-04T18:56:17.641704875Z" level=error msg="Error running exec 74f1ab2d23363960afdf613e1624379e2dcd8625bcb8498d199c4168238d04e3 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:56:17.642 controller-1 kubelet[88521]: info W1104 18:56:17.642276 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:56:18.102 controller-1 dockerd[12258]: info time="2019-11-04T18:56:18.102625827Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:18.102 controller-1 dockerd[12258]: info time="2019-11-04T18:56:18.102653894Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:18.147 controller-1 dockerd[12258]: info time="2019-11-04T18:56:18.147322207Z" level=error msg="Error running exec dedc3e94645b22ac9ae972862774b3f6c916252d7d8963829436c6ab95907a93 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:56:18.147 controller-1 kubelet[88521]: info W1104 18:56:18.147871 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:56:19.212 controller-1 kubelet[88521]: info E1104 18:56:19.212560 88521 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/158cc42f84f3334021b7ab12990e6ac6dc4a66a9bce3bd3b838967b1456b925b/diff" to get inode usage: stat /var/lib/docker/overlay2/158cc42f84f3334021b7ab12990e6ac6dc4a66a9bce3bd3b838967b1456b925b/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/04ffbb0b8469ff115606dbecd55963639c00d56f9b22e28e3631b033a70f01db" to get inode usage: stat /var/lib/docker/containers/04ffbb0b8469ff115606dbecd55963639c00d56f9b22e28e3631b033a70f01db: no such file or directory 2019-11-04T18:56:19.214 controller-1 kubelet[88521]: info E1104 18:56:19.214100 88521 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/b32f1db22bdbe468084e2e4130425c73b817faf6515465e08dd98515edf3c950/diff" to get inode usage: stat /var/lib/docker/overlay2/b32f1db22bdbe468084e2e4130425c73b817faf6515465e08dd98515edf3c950/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/9146134a9f94357aac34c85b725b6a0025f5e4c8b5dd27c12968626cc4b9c5ce" to get inode usage: stat /var/lib/docker/containers/9146134a9f94357aac34c85b725b6a0025f5e4c8b5dd27c12968626cc4b9c5ce: no such file or directory 2019-11-04T18:56:19.305 controller-1 kubelet[88521]: info E1104 18:56:19.305539 88521 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/5fbaef56a02eda28af8d9f4cda01cfb02ad07c87a4b3533b437588ed022e038a/diff" to get inode usage: stat 
/var/lib/docker/overlay2/5fbaef56a02eda28af8d9f4cda01cfb02ad07c87a4b3533b437588ed022e038a/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/a69f9ca15369645a5c2bc3dfea879321c92bdc066a98c1ef6ac02b4b5e776e88" to get inode usage: stat /var/lib/docker/containers/a69f9ca15369645a5c2bc3dfea879321c92bdc066a98c1ef6ac02b4b5e776e88: no such file or directory 2019-11-04T18:56:19.699 controller-1 kubelet[88521]: info E1104 18:56:19.699540 88521 remote_runtime.go:243] StopContainer "395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" from runtime service failed: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:56:19.699 controller-1 kubelet[88521]: info E1104 18:56:19.699584 88521 kuberuntime_container.go:590] Container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" termination failed with gracePeriod 120: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:56:19.699 controller-1 kubelet[88521]: info E1104 18:56:19.699540 88521 remote_runtime.go:243] StopContainer "9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" from runtime service failed: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:56:19.699 controller-1 kubelet[88521]: info E1104 18:56:19.699652 88521 kuberuntime_container.go:590] Container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" termination failed with gracePeriod 120: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:56:19.701 controller-1 kubelet[88521]: info E1104 18:56:19.701369 88521 kubelet.go:1576] error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:56:19.701 controller-1 kubelet[88521]: info E1104 18:56:19.701391 88521 pod_workers.go:191] Error syncing pod 99913751-ab01-4a00-8e4f-ff54b0232e5d ("mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d)"), skipping: error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:56:19.701 controller-1 kubelet[88521]: info E1104 18:56:19.701402 88521 kubelet.go:1576] error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:56:19.702 controller-1 kubelet[88521]: info E1104 18:56:19.702509 88521 pod_workers.go:191] Error syncing pod 5edf03ac-2483-4c65-ba4d-f40dde7dbf65 ("mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65)"), skipping: error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:56:19.716 controller-1 dockerd[12258]: info time="2019-11-04T18:56:19.716554039Z" level=info msg="Container 395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f failed to exit within 120 seconds of signal 15 - using the force" 2019-11-04T18:56:19.718 controller-1 dockerd[12258]: info time="2019-11-04T18:56:19.718037782Z" level=info msg="Container 9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6 failed to exit within 120 seconds of signal 15 - using the force" 2019-11-04T18:56:27.593 controller-1 
dockerd[12258]: info time="2019-11-04T18:56:27.593402189Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:27.593 controller-1 dockerd[12258]: info time="2019-11-04T18:56:27.593444194Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:27.639 controller-1 dockerd[12258]: info time="2019-11-04T18:56:27.639462849Z" level=error msg="Error running exec 7a934f5ad403378b23efc5d7dd2b8b465bae4d571fa046d95f4b5a1f9238b10e in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:56:27.640 controller-1 kubelet[88521]: info W1104 18:56:27.640070 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:56:28.102 controller-1 dockerd[12258]: info time="2019-11-04T18:56:28.102831565Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:28.102 controller-1 dockerd[12258]: info time="2019-11-04T18:56:28.102856794Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:28.151 controller-1 dockerd[12258]: info time="2019-11-04T18:56:28.151445018Z" level=error msg="Error running exec e41c338b135aaf73262f6888591a72ae188a772fb66e2016eb4cf641f7332efc in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:56:28.151 controller-1 kubelet[88521]: info W1104 18:56:28.151876 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:56:29.735 controller-1 dockerd[12258]: info time="2019-11-04T18:56:29.735604205Z" level=info msg="Container 395f343e30e3 failed to exit within 10 seconds of kill - trying direct SIGKILL" 2019-11-04T18:56:29.736 controller-1 dockerd[12258]: info time="2019-11-04T18:56:29.736585234Z" level=info msg="Container 9bde4ebdc7bb failed to exit within 10 seconds of kill - trying direct SIGKILL" 2019-11-04T18:56:37.596 controller-1 dockerd[12258]: info time="2019-11-04T18:56:37.596596918Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:37.596 controller-1 dockerd[12258]: info time="2019-11-04T18:56:37.596620338Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:37.645 controller-1 dockerd[12258]: info time="2019-11-04T18:56:37.645209085Z" level=error msg="Error running exec 8374e38af2388bdac5e216f35080704b03964bbf4463e91087ac143a0e4ed94a in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:56:37.645 controller-1 kubelet[88521]: info W1104 18:56:37.645826 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:56:38.106 controller-1 dockerd[12258]: info time="2019-11-04T18:56:38.106192440Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:38.106 controller-1 dockerd[12258]: info time="2019-11-04T18:56:38.106225908Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:38.151 controller-1 dockerd[12258]: info time="2019-11-04T18:56:38.151779272Z" level=error msg="Error running exec 
dae855fb147267de2b2af6c4b57c5e2afe4e5a82c7790e69ab0a376c105ed55c in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:56:38.152 controller-1 kubelet[88521]: info W1104 18:56:38.152192 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:56:47.594 controller-1 dockerd[12258]: info time="2019-11-04T18:56:47.594798689Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:47.594 controller-1 dockerd[12258]: info time="2019-11-04T18:56:47.594820198Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:47.640 controller-1 dockerd[12258]: info time="2019-11-04T18:56:47.640559391Z" level=error msg="Error running exec 7bc8219465293eb299f860401b7fe9cb7b28295cf4b3b872f904d5aa01dfa121 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:56:47.641 controller-1 kubelet[88521]: info W1104 18:56:47.640987 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:56:48.105 controller-1 dockerd[12258]: info time="2019-11-04T18:56:48.105178345Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:48.105 controller-1 dockerd[12258]: info time="2019-11-04T18:56:48.105241030Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:48.152 controller-1 dockerd[12258]: info time="2019-11-04T18:56:48.152642540Z" level=error msg="Error running exec f2d7103359df95324c842005d5e673dd7545ddcebb6465eac688f4c04bb9c783 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:56:48.153 controller-1 kubelet[88521]: info W1104 18:56:48.153087 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:56:57.597 controller-1 dockerd[12258]: info time="2019-11-04T18:56:57.597855310Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:57.598 controller-1 dockerd[12258]: info time="2019-11-04T18:56:57.597892747Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:57.647 controller-1 dockerd[12258]: info time="2019-11-04T18:56:57.647652004Z" level=error msg="Error running exec 46d7f7f07af2362af51c1834bff7347a99bbcf8ebd16ede99fba10e813c13560 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:56:57.648 controller-1 kubelet[88521]: info W1104 18:56:57.648197 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:56:58.106 controller-1 dockerd[12258]: info time="2019-11-04T18:56:58.106573548Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:58.106 controller-1 dockerd[12258]: info time="2019-11-04T18:56:58.106577828Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:56:58.153 controller-1 dockerd[12258]: info 
time="2019-11-04T18:56:58.153744420Z" level=error msg="Error running exec ff67cb1830568bfc34880c91f200bef820281e2959a1219e6389e4ec1188412d in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:56:58.154 controller-1 kubelet[88521]: info W1104 18:56:58.154250 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:57:03.638 controller-1 collectd[12249]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-1","degrade":"clear","resource":""} 2019-11-04T18:57:07.597 controller-1 dockerd[12258]: info time="2019-11-04T18:57:07.597020887Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:07.597 controller-1 dockerd[12258]: info time="2019-11-04T18:57:07.597025335Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:07.648 controller-1 dockerd[12258]: info time="2019-11-04T18:57:07.648470177Z" level=error msg="Error running exec 1353e18de51259cc73af7a1ef00484fb9b3319a5c1aa5e1a74b5922ff3eb5174 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:57:07.649 controller-1 kubelet[88521]: info W1104 18:57:07.648972 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:57:08.106 controller-1 dockerd[12258]: info time="2019-11-04T18:57:08.106059842Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:08.106 controller-1 dockerd[12258]: info time="2019-11-04T18:57:08.106109451Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:08.152 controller-1 dockerd[12258]: info time="2019-11-04T18:57:08.152001406Z" level=error msg="Error running exec 55d7f86bc5391d00ae54b1b2c6d0d83d190a941f4fcadd890d1c8ca84789cf1d in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:57:08.152 controller-1 kubelet[88521]: info W1104 18:57:08.152419 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:57:17.595 controller-1 dockerd[12258]: info time="2019-11-04T18:57:17.595712784Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:17.595 controller-1 dockerd[12258]: info time="2019-11-04T18:57:17.595736647Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:17.645 controller-1 dockerd[12258]: info time="2019-11-04T18:57:17.645328887Z" level=error msg="Error running exec fff607263ae4d3de4d7be8f0dc7855840cb1b8ee3e1d3df6cb2654dee1095583 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:57:17.645 controller-1 kubelet[88521]: info W1104 18:57:17.645831 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:57:18.106 controller-1 dockerd[12258]: info time="2019-11-04T18:57:18.106396475Z" level=error msg="stream copy error: reading 
from a closed fifo" 2019-11-04T18:57:18.106 controller-1 dockerd[12258]: info time="2019-11-04T18:57:18.106396642Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:18.157 controller-1 dockerd[12258]: info time="2019-11-04T18:57:18.156968986Z" level=error msg="Error running exec e09720f179f385f0bcdc2e105cebcbeceb1dc8845b38ae9f293df3d13f329d95 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:57:18.157 controller-1 kubelet[88521]: info W1104 18:57:18.157387 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:57:27.597 controller-1 dockerd[12258]: info time="2019-11-04T18:57:27.597438287Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:27.597 controller-1 dockerd[12258]: info time="2019-11-04T18:57:27.597457421Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:27.646 controller-1 dockerd[12258]: info time="2019-11-04T18:57:27.646126891Z" level=error msg="Error running exec 3027d9d6bf39bf8de520d1dc2df237bc004f075c2e436e6fd6df326d6c7fb01d in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:57:27.646 controller-1 kubelet[88521]: info W1104 18:57:27.646557 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:57:28.106 controller-1 dockerd[12258]: info time="2019-11-04T18:57:28.106276657Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:28.106 controller-1 dockerd[12258]: info time="2019-11-04T18:57:28.106345389Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:28.155 controller-1 dockerd[12258]: info time="2019-11-04T18:57:28.155889781Z" level=error msg="Error running exec 6001c76dfa81ea72f722830791ffa67674d9c18a2d0b77eb01187faeb2ad53ca in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:57:28.156 controller-1 kubelet[88521]: info W1104 18:57:28.156306 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:57:37.599 controller-1 dockerd[12258]: info time="2019-11-04T18:57:37.599275524Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:37.599 controller-1 dockerd[12258]: info time="2019-11-04T18:57:37.599293335Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:37.651 controller-1 dockerd[12258]: info time="2019-11-04T18:57:37.650981715Z" level=error msg="Error running exec c8d84883567b8906f0b5cec17ee0fcf5e7caa58eb15d1bf1cf5d76043a03c2fa in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:57:37.651 controller-1 kubelet[88521]: info W1104 18:57:37.651408 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:57:38.106 controller-1 dockerd[12258]: info 
time="2019-11-04T18:57:38.106564501Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:38.106 controller-1 dockerd[12258]: info time="2019-11-04T18:57:38.106639796Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:38.154 controller-1 dockerd[12258]: info time="2019-11-04T18:57:38.154043171Z" level=error msg="Error running exec 1b6de88e4e3dedf7e7b252a7864cb2b1462a30a79f4c5503d0852d726ae7545f in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:57:38.154 controller-1 kubelet[88521]: info W1104 18:57:38.154529 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:57:47.597 controller-1 dockerd[12258]: info time="2019-11-04T18:57:47.597879853Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:47.597 controller-1 dockerd[12258]: info time="2019-11-04T18:57:47.597894545Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:47.644 controller-1 dockerd[12258]: info time="2019-11-04T18:57:47.643966775Z" level=error msg="Error running exec eacc5462e53c9f9a1c1511a4b7ed36a938a41c5482ceda716ee01afd883cdc12 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:57:47.644 controller-1 kubelet[88521]: info W1104 18:57:47.644454 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:57:48.107 controller-1 dockerd[12258]: info time="2019-11-04T18:57:48.107241517Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:48.107 controller-1 dockerd[12258]: info time="2019-11-04T18:57:48.107264569Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:48.154 controller-1 dockerd[12258]: info time="2019-11-04T18:57:48.154198437Z" level=error msg="Error running exec f308a6f00816c359ebcfb41741ae9acd361cd1f74c7a5b0f120ace0ad64146a9 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:57:48.154 controller-1 kubelet[88521]: info W1104 18:57:48.154648 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:57:57.597 controller-1 dockerd[12258]: info time="2019-11-04T18:57:57.597278541Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:57.597 controller-1 dockerd[12258]: info time="2019-11-04T18:57:57.597311566Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:57.650 controller-1 dockerd[12258]: info time="2019-11-04T18:57:57.649951601Z" level=error msg="Error running exec 03ab462c707fd944e91dc90a8934513c7f73718cec534b343dd1b20481258c91 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:57:57.650 controller-1 kubelet[88521]: info W1104 18:57:57.650348 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" 
(mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:57:58.105 controller-1 dockerd[12258]: info time="2019-11-04T18:57:58.105251593Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:58.105 controller-1 dockerd[12258]: info time="2019-11-04T18:57:58.105364165Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:57:58.154 controller-1 dockerd[12258]: info time="2019-11-04T18:57:58.154762316Z" level=error msg="Error running exec c475823a0ba494eeed6bd971c5f8cfdfe57acc1e4aa4145e672db9714ea805f2 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:57:58.155 controller-1 kubelet[88521]: info W1104 18:57:58.155282 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:58:07.592 controller-1 dockerd[12258]: info time="2019-11-04T18:58:07.592700300Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:07.592 controller-1 dockerd[12258]: info time="2019-11-04T18:58:07.592729147Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:07.644 controller-1 dockerd[12258]: info time="2019-11-04T18:58:07.644244006Z" level=error msg="Error running exec 92116b7b8ab60d16577340e06340972fd58005a4c9426e3f31565cd3bdfd99e9 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:58:07.644 controller-1 kubelet[88521]: info W1104 18:58:07.644879 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:58:08.107 controller-1 dockerd[12258]: info time="2019-11-04T18:58:08.107805718Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:08.107 controller-1 dockerd[12258]: info time="2019-11-04T18:58:08.107828494Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:08.159 controller-1 dockerd[12258]: info time="2019-11-04T18:58:08.158990001Z" level=error msg="Error running exec 0fe95bf48264a9b54ab8c85be8b8fbb820c47e41c99b76251066302354ea9078 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:58:08.159 controller-1 kubelet[88521]: info W1104 18:58:08.159437 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:58:17.595 controller-1 dockerd[12258]: info time="2019-11-04T18:58:17.595391830Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:17.595 controller-1 dockerd[12258]: info time="2019-11-04T18:58:17.595468157Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:17.641 controller-1 dockerd[12258]: info time="2019-11-04T18:58:17.641756857Z" level=error msg="Error running exec cb01caa637576d69593ea2ec6339faf9added2152976be75e22f3b01e6ce94c5 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:58:17.642 controller-1 kubelet[88521]: info W1104 18:58:17.642352 88521 prober.go:108] No ref for container 
"docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:58:18.105 controller-1 dockerd[12258]: info time="2019-11-04T18:58:18.105084588Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:18.105 controller-1 dockerd[12258]: info time="2019-11-04T18:58:18.105152556Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:18.151 controller-1 dockerd[12258]: info time="2019-11-04T18:58:18.151627802Z" level=error msg="Error running exec 9b6cfcee204b127366397a3bf6498b782cc6a3d9f7ab810bb56d0b7c9a50abed in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:58:18.152 controller-1 kubelet[88521]: info W1104 18:58:18.152100 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:58:20.007 controller-1 kubelet[88521]: info E1104 18:58:20.007221 88521 remote_runtime.go:243] StopContainer "395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" from runtime service failed: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:58:20.007 controller-1 kubelet[88521]: info E1104 18:58:20.007263 88521 kuberuntime_container.go:590] Container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" termination failed with gracePeriod 120: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:58:20.007 controller-1 kubelet[88521]: info E1104 18:58:20.007811 88521 remote_runtime.go:243] StopContainer "9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" from runtime service failed: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:58:20.007 controller-1 kubelet[88521]: info E1104 18:58:20.007838 88521 kuberuntime_container.go:590] Container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" termination failed with gracePeriod 120: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T18:58:20.008 controller-1 kubelet[88521]: info E1104 18:58:20.008532 88521 kubelet.go:1576] error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:58:20.008 controller-1 kubelet[88521]: info E1104 18:58:20.008550 88521 pod_workers.go:191] Error syncing pod 99913751-ab01-4a00-8e4f-ff54b0232e5d ("mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d)"), skipping: error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:58:20.009 controller-1 kubelet[88521]: info E1104 18:58:20.008943 88521 kubelet.go:1576] error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:58:20.009 controller-1 kubelet[88521]: info E1104 18:58:20.009646 88521 pod_workers.go:191] Error syncing pod 5edf03ac-2483-4c65-ba4d-f40dde7dbf65 ("mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65)"), skipping: error killing pod: failed to 
"KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T18:58:20.023 controller-1 dockerd[12258]: info time="2019-11-04T18:58:20.023432795Z" level=info msg="Container 9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6 failed to exit within 120 seconds of signal 15 - using the force" 2019-11-04T18:58:20.024 controller-1 dockerd[12258]: info time="2019-11-04T18:58:20.024005715Z" level=info msg="Container 395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f failed to exit within 120 seconds of signal 15 - using the force" 2019-11-04T18:58:27.597 controller-1 dockerd[12258]: info time="2019-11-04T18:58:27.597670268Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:27.597 controller-1 dockerd[12258]: info time="2019-11-04T18:58:27.597701204Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:27.649 controller-1 dockerd[12258]: info time="2019-11-04T18:58:27.648972439Z" level=error msg="Error running exec 24e64d512480013c29c6667474c94ac290fd3df72fc2199ef361729cde21f5c6 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:58:27.649 controller-1 kubelet[88521]: info W1104 18:58:27.649403 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:58:28.104 controller-1 dockerd[12258]: info time="2019-11-04T18:58:28.104125769Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:28.104 controller-1 dockerd[12258]: info time="2019-11-04T18:58:28.104154318Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:28.153 controller-1 dockerd[12258]: info time="2019-11-04T18:58:28.153211125Z" level=error msg="Error running exec e33b7812e88dfac45a2a8e696e90659028c9451c1494cd89edc4d170f39060bc in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:58:28.153 controller-1 kubelet[88521]: info W1104 18:58:28.153639 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:58:30.040 controller-1 dockerd[12258]: info time="2019-11-04T18:58:30.040557305Z" level=info msg="Container 9bde4ebdc7bb failed to exit within 10 seconds of kill - trying direct SIGKILL" 2019-11-04T18:58:30.040 controller-1 dockerd[12258]: info time="2019-11-04T18:58:30.040887898Z" level=info msg="Container 395f343e30e3 failed to exit within 10 seconds of kill - trying direct SIGKILL" 2019-11-04T18:58:31.242 controller-1 systemd[1]: info Stopping Name Service Cache Daemon... 2019-11-04T18:58:31.259 controller-1 systemd[1]: info Stopped Name Service Cache Daemon. 
2019-11-04T18:58:37.591 controller-1 dockerd[12258]: info time="2019-11-04T18:58:37.591784962Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:37.591 controller-1 dockerd[12258]: info time="2019-11-04T18:58:37.591801349Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:37.637 controller-1 dockerd[12258]: info time="2019-11-04T18:58:37.637614787Z" level=error msg="Error running exec 7ad0bf55d259f93ab8c76102d897aa1ff6584895651794c7b8cc34aab6e968c2 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:58:37.638 controller-1 kubelet[88521]: info W1104 18:58:37.638145 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:58:38.104 controller-1 dockerd[12258]: info time="2019-11-04T18:58:38.104761855Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:38.104 controller-1 dockerd[12258]: info time="2019-11-04T18:58:38.104763967Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:38.153 controller-1 dockerd[12258]: info time="2019-11-04T18:58:38.153542112Z" level=error msg="Error running exec 254e3d59bad392c033e12f2d2bc271f1741eeb1bc697a933daa9a9da0dafe4f0 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:58:38.154 controller-1 kubelet[88521]: info W1104 18:58:38.153965 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:58:47.594 controller-1 dockerd[12258]: info time="2019-11-04T18:58:47.594366588Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:47.594 controller-1 dockerd[12258]: info time="2019-11-04T18:58:47.594396703Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:47.639 controller-1 dockerd[12258]: info time="2019-11-04T18:58:47.639293254Z" level=error msg="Error running exec 440f249f9f4e4c8ebd265754e464966ba0217b5144a2e1e162968d6f527491f2 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:58:47.639 controller-1 kubelet[88521]: info W1104 18:58:47.639757 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:58:48.103 controller-1 dockerd[12258]: info time="2019-11-04T18:58:48.103566596Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:48.103 controller-1 dockerd[12258]: info time="2019-11-04T18:58:48.103588858Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:48.151 controller-1 dockerd[12258]: info time="2019-11-04T18:58:48.151587140Z" level=error msg="Error running exec 8876fc614ab3da6157cb0f4407556e43496fe77568527bc90f715352407d2af1 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:58:48.152 controller-1 kubelet[88521]: info W1104 18:58:48.151974 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" 
(mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:58:57.595 controller-1 dockerd[12258]: info time="2019-11-04T18:58:57.595627714Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:57.595 controller-1 dockerd[12258]: info time="2019-11-04T18:58:57.595678902Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:57.642 controller-1 dockerd[12258]: info time="2019-11-04T18:58:57.642736695Z" level=error msg="Error running exec 568b2a319972a33d90b6c3609033098df29e892dfc87395f94cabac640583423 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:58:57.643 controller-1 kubelet[88521]: info W1104 18:58:57.643200 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:58:58.103 controller-1 dockerd[12258]: info time="2019-11-04T18:58:58.103539221Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:58.103 controller-1 dockerd[12258]: info time="2019-11-04T18:58:58.103542254Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:58:58.150 controller-1 dockerd[12258]: info time="2019-11-04T18:58:58.150856234Z" level=error msg="Error running exec 4361b99f8a0e6145a40d85725b5e79340d1d260520dceb7414f1fd082ae4a2eb in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:58:58.151 controller-1 kubelet[88521]: info W1104 18:58:58.151227 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:59:07.593 controller-1 dockerd[12258]: info time="2019-11-04T18:59:07.593530210Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:07.593 controller-1 dockerd[12258]: info time="2019-11-04T18:59:07.593555735Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:07.638 controller-1 dockerd[12258]: info time="2019-11-04T18:59:07.638353537Z" level=error msg="Error running exec 15da85bfd5b5d59505360a1ac3677ff0c4d9cc5b4c7ba7b25ed420dfcddee980 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:59:07.638 controller-1 kubelet[88521]: info W1104 18:59:07.638767 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:59:08.103 controller-1 dockerd[12258]: info time="2019-11-04T18:59:08.103856586Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:08.103 controller-1 dockerd[12258]: info time="2019-11-04T18:59:08.103864287Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:08.152 controller-1 dockerd[12258]: info time="2019-11-04T18:59:08.152500742Z" level=error msg="Error running exec 0a6f80ad8d740da374fdba55c50e778b6d7461a83a668eea4a91840a8cd2ab55 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:59:08.152 controller-1 kubelet[88521]: info W1104 18:59:08.152888 88521 prober.go:108] No ref for container 
"docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:59:17.594 controller-1 dockerd[12258]: info time="2019-11-04T18:59:17.594616961Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:17.594 controller-1 dockerd[12258]: info time="2019-11-04T18:59:17.594616202Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:17.641 controller-1 dockerd[12258]: info time="2019-11-04T18:59:17.641441989Z" level=error msg="Error running exec c60db40e95b16e7e46f85fd97b26c77d3a87f9ed746769eef30c3430f8d1e6fe in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:59:17.641 controller-1 kubelet[88521]: info W1104 18:59:17.641926 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:59:18.104 controller-1 dockerd[12258]: info time="2019-11-04T18:59:18.104289428Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:18.104 controller-1 dockerd[12258]: info time="2019-11-04T18:59:18.104295561Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:18.149 controller-1 dockerd[12258]: info time="2019-11-04T18:59:18.149837705Z" level=error msg="Error running exec 50793753e976d3f43905c4b8d53390cce06ae50b255b4a7ab46debcc29c635c6 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:59:18.150 controller-1 kubelet[88521]: info W1104 18:59:18.150323 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:59:27.595 controller-1 dockerd[12258]: info time="2019-11-04T18:59:27.595718907Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:27.595 controller-1 dockerd[12258]: info time="2019-11-04T18:59:27.595744793Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:27.644 controller-1 dockerd[12258]: info time="2019-11-04T18:59:27.644274579Z" level=error msg="Error running exec 15a89522d7bf3f88c79ba0c58aca41792ca46d87a32d54e5a6ddebd68f2b0f81 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:59:27.644 controller-1 kubelet[88521]: info W1104 18:59:27.644814 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:59:28.106 controller-1 dockerd[12258]: info time="2019-11-04T18:59:28.106152810Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:28.106 controller-1 dockerd[12258]: info time="2019-11-04T18:59:28.106148929Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:28.153 controller-1 dockerd[12258]: info time="2019-11-04T18:59:28.152953057Z" level=error msg="Error running exec f113e608f26ff9b423404f9bda5f655abf9cc19e71bcc31de45dd807d448821f in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:59:28.153 controller-1 kubelet[88521]: 
info W1104 18:59:28.153439 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:59:37.598 controller-1 dockerd[12258]: info time="2019-11-04T18:59:37.598073461Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:37.598 controller-1 dockerd[12258]: info time="2019-11-04T18:59:37.598081723Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:37.647 controller-1 dockerd[12258]: info time="2019-11-04T18:59:37.647929186Z" level=error msg="Error running exec 05fe938625ca059374a1441a3d1c66a1e5aca923ddccc724ec91c429e3783443 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:59:37.648 controller-1 kubelet[88521]: info W1104 18:59:37.648318 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:59:38.107 controller-1 dockerd[12258]: info time="2019-11-04T18:59:38.107864695Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:38.107 controller-1 dockerd[12258]: info time="2019-11-04T18:59:38.107865976Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:38.156 controller-1 dockerd[12258]: info time="2019-11-04T18:59:38.156436022Z" level=error msg="Error running exec 852e3c2fc7d2bef1dd35629be377b5d47409977ce6f1821bf5a694a54d2c2af8 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:59:38.156 controller-1 kubelet[88521]: info W1104 18:59:38.156840 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:59:47.593 controller-1 dockerd[12258]: info time="2019-11-04T18:59:47.592995654Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:47.593 controller-1 dockerd[12258]: info time="2019-11-04T18:59:47.593018982Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:47.636 controller-1 dockerd[12258]: info time="2019-11-04T18:59:47.636924514Z" level=error msg="Error running exec d4ee4b8ec143b348a67e78db135305ce4586216ab0227f1613e15a5989bd9dcb in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:59:47.637 controller-1 kubelet[88521]: info W1104 18:59:47.637404 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:59:48.103 controller-1 dockerd[12258]: info time="2019-11-04T18:59:48.103869715Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:48.103 controller-1 dockerd[12258]: info time="2019-11-04T18:59:48.103910617Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:48.151 controller-1 dockerd[12258]: info time="2019-11-04T18:59:48.151560178Z" level=error msg="Error running exec 09d5128409ebe9650924f9c47340dd2de67462d01a4162b0f0fdd176fcd761f0 in container: OCI runtime exec failed: exec failed: cannot exec a container that has 
stopped: unknown" 2019-11-04T18:59:48.152 controller-1 kubelet[88521]: info W1104 18:59:48.152025 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T18:59:57.594 controller-1 dockerd[12258]: info time="2019-11-04T18:59:57.594128066Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:57.594 controller-1 dockerd[12258]: info time="2019-11-04T18:59:57.594164032Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:57.641 controller-1 dockerd[12258]: info time="2019-11-04T18:59:57.641795151Z" level=error msg="Error running exec 5b214ebf89cc03ee22d1eb33cbd9c8e96a672da0bbf5651e9346d281e485ae01 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:59:57.642 controller-1 kubelet[88521]: info W1104 18:59:57.642308 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T18:59:58.108 controller-1 dockerd[12258]: info time="2019-11-04T18:59:58.108321515Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:58.108 controller-1 dockerd[12258]: info time="2019-11-04T18:59:58.108401806Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T18:59:58.153 controller-1 dockerd[12258]: info time="2019-11-04T18:59:58.153357421Z" level=error msg="Error running exec 20f56fa7821f35d2397f82e7125935efaddee1820e95cba7364be0f7232ce16d in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T18:59:58.153 controller-1 kubelet[88521]: info W1104 18:59:58.153891 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T19:00:01.893 controller-1 systemd[1]: info Created slice User Slice of root. 2019-11-04T19:00:01.910 controller-1 systemd[1]: info Started Session 8 of user root. 2019-11-04T19:00:01.926 controller-1 systemd[1]: info Started Session 9 of user root. 2019-11-04T19:00:01.992 controller-1 systemd[1]: info Removed slice User Slice of root. 
2019-11-04T19:00:07.594 controller-1 dockerd[12258]: info time="2019-11-04T19:00:07.594846971Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T19:00:07.594 controller-1 dockerd[12258]: info time="2019-11-04T19:00:07.594822246Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T19:00:07.643 controller-1 dockerd[12258]: info time="2019-11-04T19:00:07.643064259Z" level=error msg="Error running exec 4c59e3190284eb20a91d3e7d2cc1a118f2565caea607081e250d3ece8a588b28 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T19:00:07.643 controller-1 kubelet[88521]: info W1104 19:00:07.643803 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T19:00:08.105 controller-1 dockerd[12258]: info time="2019-11-04T19:00:08.105649683Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T19:00:08.105 controller-1 dockerd[12258]: info time="2019-11-04T19:00:08.105651672Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T19:00:08.152 controller-1 dockerd[12258]: info time="2019-11-04T19:00:08.152734890Z" level=error msg="Error running exec e6bc38ce2198e45c164148285d89e13ef8ec38aa996e9d76d3eb25cfb7d88f2d in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T19:00:08.153 controller-1 kubelet[88521]: info W1104 19:00:08.153245 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T19:00:17.599 controller-1 dockerd[12258]: info time="2019-11-04T19:00:17.599911674Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T19:00:17.600 controller-1 dockerd[12258]: info time="2019-11-04T19:00:17.599945328Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T19:00:17.649 controller-1 dockerd[12258]: info time="2019-11-04T19:00:17.649405972Z" level=error msg="Error running exec 0fe58a79536326af24cc82accf36bfc840b88e4b0443922c8f57c9b2ba1491ca in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T19:00:17.649 controller-1 kubelet[88521]: info W1104 19:00:17.649877 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T19:00:18.106 controller-1 dockerd[12258]: info time="2019-11-04T19:00:18.106639406Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T19:00:18.106 controller-1 dockerd[12258]: info time="2019-11-04T19:00:18.106647534Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T19:00:18.153 controller-1 dockerd[12258]: info time="2019-11-04T19:00:18.153634507Z" level=error msg="Error running exec 2b0732aef7116a26ea6a2fe165c99197c3161679a3cc1c0eb833f15e9bfebce6 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T19:00:18.154 controller-1 kubelet[88521]: info W1104 19:00:18.154051 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" 
(mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T19:00:21.006 controller-1 kubelet[88521]: info E1104 19:00:21.006001 88521 remote_runtime.go:243] StopContainer "395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" from runtime service failed: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T19:00:21.006 controller-1 kubelet[88521]: info E1104 19:00:21.006037 88521 kuberuntime_container.go:590] Container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" termination failed with gracePeriod 120: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T19:00:21.006 controller-1 kubelet[88521]: info E1104 19:00:21.006029 88521 remote_runtime.go:243] StopContainer "9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" from runtime service failed: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T19:00:21.006 controller-1 kubelet[88521]: info E1104 19:00:21.006102 88521 kuberuntime_container.go:590] Container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" termination failed with gracePeriod 120: rpc error: code = Unknown desc = operation timeout: context deadline exceeded 2019-11-04T19:00:21.007 controller-1 kubelet[88521]: info E1104 19:00:21.007303 88521 kubelet.go:1576] error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T19:00:21.007 controller-1 kubelet[88521]: info E1104 19:00:21.007324 88521 pod_workers.go:191] Error syncing pod 99913751-ab01-4a00-8e4f-ff54b0232e5d ("mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d)"), skipping: error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T19:00:21.007 controller-1 kubelet[88521]: info E1104 19:00:21.007472 88521 kubelet.go:1576] error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T19:00:21.008 controller-1 kubelet[88521]: info E1104 19:00:21.008409 88521 pod_workers.go:191] Error syncing pod 5edf03ac-2483-4c65-ba4d-f40dde7dbf65 ("mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65)"), skipping: error killing pod: failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = operation timeout: context deadline exceeded" 2019-11-04T19:00:21.023 controller-1 dockerd[12258]: info time="2019-11-04T19:00:21.023784738Z" level=info msg="Container 9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6 failed to exit within 120 seconds of signal 15 - using the force" 2019-11-04T19:00:21.023 controller-1 dockerd[12258]: info time="2019-11-04T19:00:21.023804929Z" level=info msg="Container 395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f failed to exit within 120 seconds of signal 15 - using the force" 2019-11-04T19:00:27.601 controller-1 dockerd[12258]: info time="2019-11-04T19:00:27.601735303Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T19:00:27.601 controller-1 dockerd[12258]: info time="2019-11-04T19:00:27.601820087Z" level=error msg="stream copy error: reading from a closed fifo" 
2019-11-04T19:00:27.652 controller-1 dockerd[12258]: info time="2019-11-04T19:00:27.652407474Z" level=error msg="Error running exec 4a259ffe9da5fc336e6368358a13590b0e32b2d3bd99def87b744587f103c055 in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T19:00:27.653 controller-1 kubelet[88521]: info W1104 19:00:27.653021 88521 prober.go:108] No ref for container "docker://395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" (mon-elasticsearch-data-1_monitor(99913751-ab01-4a00-8e4f-ff54b0232e5d):elasticsearch) 2019-11-04T19:00:28.107 controller-1 dockerd[12258]: info time="2019-11-04T19:00:28.107816843Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T19:00:28.107 controller-1 dockerd[12258]: info time="2019-11-04T19:00:28.107884689Z" level=error msg="stream copy error: reading from a closed fifo" 2019-11-04T19:00:28.153 controller-1 dockerd[12258]: info time="2019-11-04T19:00:28.153316861Z" level=error msg="Error running exec c52110a0e65785ed01865dfb925a81af4da1540240d485eff7cd75d67fec9bbe in container: OCI runtime exec failed: exec failed: cannot exec a container that has stopped: unknown" 2019-11-04T19:00:28.153 controller-1 kubelet[88521]: info W1104 19:00:28.153941 88521 prober.go:108] No ref for container "docker://9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" (mon-elasticsearch-master-1_monitor(5edf03ac-2483-4c65-ba4d-f40dde7dbf65):elasticsearch) 2019-11-04T19:00:31.041 controller-1 dockerd[12258]: info time="2019-11-04T19:00:31.041709179Z" level=info msg="Container 9bde4ebdc7bb failed to exit within 10 seconds of kill - trying direct SIGKILL" 2019-11-04T19:00:31.041 controller-1 dockerd[12258]: info time="2019-11-04T19:00:31.041736941Z" level=info msg="Container 395f343e30e3 failed to exit within 10 seconds of kill - trying direct SIGKILL" 2019-11-04T19:00:33.637 controller-1 collectd[12249]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-1","degrade":"clear","resource":""} 2019-11-04T19:00:36.517 controller-1 containerd[12218]: info time="2019-11-04T19:00:36.517610346Z" level=info msg="shim reaped" id=9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6 2019-11-04T19:00:36.527 controller-1 dockerd[12258]: info time="2019-11-04T19:00:36.527344806Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:00:36.904 controller-1 containerd[12218]: info time="2019-11-04T19:00:36.904918248Z" level=info msg="shim reaped" id=395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f 2019-11-04T19:00:36.914 controller-1 dockerd[12258]: info time="2019-11-04T19:00:36.914731105Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:00:37.288 controller-1 kubelet[88521]: info I1104 19:00:37.288441 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "mon-elasticsearch-data" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1") pod "99913751-ab01-4a00-8e4f-ff54b0232e5d" (UID: "99913751-ab01-4a00-8e4f-ff54b0232e5d") 2019-11-04T19:00:37.288 controller-1 kubelet[88521]: info I1104 19:00:37.288531 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "mon-elasticsearch-master" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1") pod 
"5edf03ac-2483-4c65-ba4d-f40dde7dbf65" (UID: "5edf03ac-2483-4c65-ba4d-f40dde7dbf65") 2019-11-04T19:00:37.288 controller-1 kubelet[88521]: info I1104 19:00:37.288595 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/5edf03ac-2483-4c65-ba4d-f40dde7dbf65-default-token-88gsr") pod "5edf03ac-2483-4c65-ba4d-f40dde7dbf65" (UID: "5edf03ac-2483-4c65-ba4d-f40dde7dbf65") 2019-11-04T19:00:37.288 controller-1 kubelet[88521]: info I1104 19:00:37.288667 88521 reconciler.go:181] operationExecutor.UnmountVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/99913751-ab01-4a00-8e4f-ff54b0232e5d-default-token-88gsr") pod "99913751-ab01-4a00-8e4f-ff54b0232e5d" (UID: "99913751-ab01-4a00-8e4f-ff54b0232e5d") 2019-11-04T19:00:37.296 controller-1 kubelet[88521]: info I1104 19:00:37.296824 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1" (OuterVolumeSpecName: "mon-elasticsearch-data") pod "99913751-ab01-4a00-8e4f-ff54b0232e5d" (UID: "99913751-ab01-4a00-8e4f-ff54b0232e5d"). InnerVolumeSpecName "pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3". PluginName "kubernetes.io/rbd", VolumeGidValue "" 2019-11-04T19:00:37.296 controller-1 kubelet[88521]: info I1104 19:00:37.296843 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1" (OuterVolumeSpecName: "mon-elasticsearch-master") pod "5edf03ac-2483-4c65-ba4d-f40dde7dbf65" (UID: "5edf03ac-2483-4c65-ba4d-f40dde7dbf65"). InnerVolumeSpecName "pvc-ad4321f4-62df-4e6d-afe8-d8b053ed0722". PluginName "kubernetes.io/rbd", VolumeGidValue "" 2019-11-04T19:00:37.302 controller-1 kubelet[88521]: info I1104 19:00:37.302837 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99913751-ab01-4a00-8e4f-ff54b0232e5d-default-token-88gsr" (OuterVolumeSpecName: "default-token-88gsr") pod "99913751-ab01-4a00-8e4f-ff54b0232e5d" (UID: "99913751-ab01-4a00-8e4f-ff54b0232e5d"). InnerVolumeSpecName "default-token-88gsr". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T19:00:37.302 controller-1 kubelet[88521]: info I1104 19:00:37.302879 88521 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5edf03ac-2483-4c65-ba4d-f40dde7dbf65-default-token-88gsr" (OuterVolumeSpecName: "default-token-88gsr") pod "5edf03ac-2483-4c65-ba4d-f40dde7dbf65" (UID: "5edf03ac-2483-4c65-ba4d-f40dde7dbf65"). InnerVolumeSpecName "default-token-88gsr". 
PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T19:00:37.388 controller-1 kubelet[88521]: info I1104 19:00:37.388931 88521 reconciler.go:301] Volume detached for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/5edf03ac-2483-4c65-ba4d-f40dde7dbf65-default-token-88gsr") on node "controller-1" DevicePath "" 2019-11-04T19:00:37.389 controller-1 kubelet[88521]: info I1104 19:00:37.388950 88521 reconciler.go:301] Volume detached for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/99913751-ab01-4a00-8e4f-ff54b0232e5d-default-token-88gsr") on node "controller-1" DevicePath "" 2019-11-04T19:00:37.389 controller-1 kubelet[88521]: info I1104 19:00:37.388987 88521 reconciler.go:294] operationExecutor.UnmountDevice started for volume "pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1") on node "controller-1" 2019-11-04T19:00:37.389 controller-1 kubelet[88521]: info I1104 19:00:37.389005 88521 reconciler.go:294] operationExecutor.UnmountDevice started for volume "pvc-ad4321f4-62df-4e6d-afe8-d8b053ed0722" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1") on node "controller-1" 2019-11-04T19:00:37.560 controller-1 kubelet[88521]: info I1104 19:00:37.559970 88521 operation_generator.go:931] UnmountDevice succeeded for volume "pvc-ad4321f4-62df-4e6d-afe8-d8b053ed0722" %!(EXTRA string=UnmountDevice succeeded for volume "pvc-ad4321f4-62df-4e6d-afe8-d8b053ed0722" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1") on node "controller-1" ) 2019-11-04T19:00:37.589 controller-1 kubelet[88521]: info I1104 19:00:37.589412 88521 reconciler.go:301] Volume detached for volume "pvc-ad4321f4-62df-4e6d-afe8-d8b053ed0722" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1") on node "controller-1" DevicePath "/dev/rbd1" 2019-11-04T19:00:37.748 controller-1 kubelet[88521]: info I1104 19:00:37.748929 88521 operation_generator.go:931] UnmountDevice succeeded for volume "pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3" %!(EXTRA string=UnmountDevice succeeded for volume "pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1") on node "controller-1" ) 2019-11-04T19:00:37.789 controller-1 kubelet[88521]: info I1104 19:00:37.789888 88521 reconciler.go:301] Volume detached for volume "pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1") on node "controller-1" DevicePath "/dev/rbd0" 2019-11-04T19:00:37.901 controller-1 kubelet[88521]: info E1104 19:00:37.901160 88521 remote_runtime.go:295] ContainerStatus "9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6 2019-11-04T19:00:37.901 controller-1 kubelet[88521]: info E1104 19:00:37.901227 88521 kubelet_pods.go:1093] Failed killing the pod "mon-elasticsearch-master-1": failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = Error: No such container: 9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" 2019-11-04T19:00:37.901 controller-1 kubelet[88521]: info E1104 19:00:37.901313 88521 
remote_runtime.go:295] ContainerStatus "395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f 2019-11-04T19:00:37.901 controller-1 kubelet[88521]: info E1104 19:00:37.901369 88521 kubelet_pods.go:1093] Failed killing the pod "mon-elasticsearch-data-1": failed to "KillContainer" for "elasticsearch" with KillContainerError: "rpc error: code = Unknown desc = Error: No such container: 395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" 2019-11-04T19:01:01.996 controller-1 systemd[1]: info Created slice User Slice of root. 2019-11-04T19:01:02.005 controller-1 systemd[1]: info Started Session 10 of user root. 2019-11-04T19:01:02.058 controller-1 systemd[1]: info Removed slice User Slice of root. 2019-11-04T19:01:15.119 controller-1 kubelet[88521]: info E1104 19:01:15.119588 88521 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/1047a3a31c0628f5866866405607a30049aae64a63a111b3bc2fbb901977816a/diff" to get inode usage: stat /var/lib/docker/overlay2/1047a3a31c0628f5866866405607a30049aae64a63a111b3bc2fbb901977816a/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f" to get inode usage: stat /var/lib/docker/containers/395f343e30e35e99749ddb2f1277cb1ac3955078c6b49a850dc97ac68d793b8f: no such file or directory 2019-11-04T19:01:17.149 controller-1 kubelet[88521]: info E1104 19:01:17.149017 88521 fsHandler.go:118] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/docker/overlay2/e0930be7bc3a35122110fb04ecad2883a4fa0b10e61cc33ebe77b60901e1f971/diff" to get inode usage: stat /var/lib/docker/overlay2/e0930be7bc3a35122110fb04ecad2883a4fa0b10e61cc33ebe77b60901e1f971/diff: no such file or directory, extraDiskErr: could not stat "/var/lib/docker/containers/9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6" to get inode usage: stat /var/lib/docker/containers/9bde4ebdc7bb814bbde0602457919a9617b5b0589bd897903747438837915fa6: no such file or directory 2019-11-04T19:01:18.861 controller-1 collectd[12249]: info 2019-11-04 19:01:18,861 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='fd00:205::2', port=6443): Read timed out. (read timeout=None)",)': /api/v1/pods?fieldSelector=spec.nodeName%3Dcontroller-1&watch=False 2019-11-04T19:01:18.862 controller-1 collectd[12249]: info 2019-11-04 19:01:18,861 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='fd00:205::2', port=6443): Read timed out. (read timeout=None)",)': /api/v1/pods?fieldSelector=spec.nodeName%3Dcontroller-1&watch=False 2019-11-04T19:01:18.862 controller-1 collectd[12249]: info 2019-11-04 19:01:18,862 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='fd00:205::2', port=6443): Read timed out. 
(read timeout=None)",)': /api/v1/pods?fieldSelector=spec.nodeName%3Dcontroller-1&watch=False 2019-11-04T19:01:18.863 controller-1 collectd[12249]: info WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='fd00:205::2', port=6443): Read timed out. (read timeout=None)",)': /api/v1/pods?fieldSelector=spec.nodeName%3Dcontroller-1&watch=False 2019-11-04T19:01:18.863 controller-1 collectd[12249]: info 2019-11-04 19:01:18,862 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='fd00:205::2', port=6443): Read timed out. (read timeout=None)",)': /api/v1/pods?fieldSelector=spec.nodeName%3Dcontroller-1&watch=False 2019-11-04T19:01:18.863 controller-1 collectd[12249]: info WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ReadTimeoutError("HTTPSConnectionPool(host='fd00:205::2', port=6443): Read timed out. (read timeout=None)",)': /api/v1/pods?fieldSelector=spec.nodeName%3Dcontroller-1&watch=False 2019-11-04T19:01:18.981 controller-1 collectd[12249]: info platform cpu usage plugin uid 8958d9a6-f190-4920-87e4-03c61bfa595b not found 2019-11-04T19:01:18.981 controller-1 collectd[12249]: info platform cpu usage plugin uid 4c64b3d0-34e3-4e7f-b65c-7d0935baeaa1 not found 2019-11-04T19:01:18.984 controller-1 collectd[12249]: info platform cpu usage plugin uid 0b61d7cb-a47f-4975-a90d-9d5745291ec3 not found 2019-11-04T19:01:18.992 controller-1 collectd[12249]: info platform cpu usage plugin uid 470fe5e7-08dd-4310-ac81-7520e37a88ea not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform memory usage: uid 8958d9a6-f190-4920-87e4-03c61bfa595b not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform cpu usage plugin uid 42aff923-3b79-48fd-b2ae-45921bfa46a3 not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform memory usage: uid 4c64b3d0-34e3-4e7f-b65c-7d0935baeaa1 not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform cpu usage plugin uid e048eafc-2ed6-4a66-8ad0-57799d976d11 not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform cpu usage plugin uid b0c7cd69-b649-4ea7-999d-1d5a3f18c8c2 not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform memory usage: uid 0b61d7cb-a47f-4975-a90d-9d5745291ec3 not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform cpu usage plugin uid fd9861e3-2af5-4433-a8ec-2f3509f19b0b not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform memory usage: uid 470fe5e7-08dd-4310-ac81-7520e37a88ea not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform cpu usage plugin uid 193f83c1-6632-4268-8a94-8ce20a067385 not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform memory usage: uid 5edf03ac-2483-4c65-ba4d-f40dde7dbf65 not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform cpu usage plugin uid 8fe130f5-1c66-429e-990e-dacffdbca8b9 not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform memory usage: uid e048eafc-2ed6-4a66-8ad0-57799d976d11 not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform cpu usage plugin uid 5edf03ac-2483-4c65-ba4d-f40dde7dbf65 not found 2019-11-04T19:01:18.998 controller-1 
collectd[12249]: info platform cpu usage plugin uid 5f76f380-b632-4adb-ae14-2f39938a10bb not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform cpu usage plugin uid 902bed75-d971-4868-83e1-1f629ca76b4c not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform memory usage: uid b0c7cd69-b649-4ea7-999d-1d5a3f18c8c2 not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform cpu usage plugin uid 99913751-ab01-4a00-8e4f-ff54b0232e5d not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform cpu usage plugin uid fbbf3d3e-ca3b-463b-9dc5-2d7dc9a750ba not found 2019-11-04T19:01:18.998 controller-1 collectd[12249]: info platform memory usage: uid fd9861e3-2af5-4433-a8ec-2f3509f19b0b not found 2019-11-04T19:01:18.999 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 3.4% (avg per cpu); cpus: 36, Platform: 2.8% (Base: 2.1, k8s-system: 0.7), k8s-addon: 0.0 2019-11-04T19:01:18.999 controller-1 collectd[12249]: info platform memory usage: uid 193f83c1-6632-4268-8a94-8ce20a067385 not found 2019-11-04T19:01:18.999 controller-1 collectd[12249]: info platform memory usage: uid 8fe130f5-1c66-429e-990e-dacffdbca8b9 not found 2019-11-04T19:01:18.999 controller-1 collectd[12249]: info platform memory usage: uid 42aff923-3b79-48fd-b2ae-45921bfa46a3 not found 2019-11-04T19:01:18.999 controller-1 collectd[12249]: info platform memory usage: uid 5f76f380-b632-4adb-ae14-2f39938a10bb not found 2019-11-04T19:01:18.999 controller-1 collectd[12249]: info platform memory usage: uid 902bed75-d971-4868-83e1-1f629ca76b4c not found 2019-11-04T19:01:18.999 controller-1 collectd[12249]: info platform memory usage: uid 99913751-ab01-4a00-8e4f-ff54b0232e5d not found 2019-11-04T19:01:18.999 controller-1 collectd[12249]: info plugin_read_thread: read-function of the `python.cpu' plugin took 935.361 seconds, which is above its read interval (10.000 seconds). You might want to adjust the `Interval' or `ReadThreads' settings. 2019-11-04T19:01:18.999 controller-1 collectd[12249]: info platform memory usage: uid fbbf3d3e-ca3b-463b-9dc5-2d7dc9a750ba not found 2019-11-04T19:01:19.000 controller-1 collectd[12249]: info platform memory usage: Usage: 2.7%; Reserved: 126053.8 MiB, Platform: 3451.5 MiB (Base: 2694.1, k8s-system: 757.4), k8s-addon: 100.0 2019-11-04T19:01:19.000 controller-1 collectd[12249]: info 4K memory usage: Anon: 8.7%, Anon: 10972.9 MiB, cgroup-rss: 10963.0 MiB, Avail: 115080.9 MiB, Total: 126053.8 MiB 2019-11-04T19:01:19.000 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 11.50%, Anon: 7294.1 MiB, Avail: 56139.8 MiB, Total: 63433.9 MiB 2019-11-04T19:01:19.000 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 5.80%, Anon: 3679.3 MiB, Avail: 59729.7 MiB, Total: 63409.0 MiB 2019-11-04T19:01:19.000 controller-1 collectd[12249]: info plugin_read_thread: read-function of the `python.memory' plugin took 935.360 seconds, which is above its read interval (10.000 seconds). You might want to adjust the `Interval' or `ReadThreads' settings. 
2019-11-04T19:01:19.003 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.8% (avg per cpu); cpus: 36, Platform: 2.3% (Base: 1.8, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:01:19.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126284.0 MiB, Platform: 1680.8 MiB (Base: 690.6, k8s-system: 990.2), k8s-addon: 99.1 2019-11-04T19:01:19.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1795.0 MiB, cgroup-rss: 1784.0 MiB, Avail: 124489.0 MiB, Total: 126284.0 MiB 2019-11-04T19:01:19.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.25%, Anon: 792.7 MiB, Avail: 62699.1 MiB, Total: 63491.8 MiB 2019-11-04T19:01:19.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.58%, Anon: 1002.3 MiB, Avail: 62546.6 MiB, Total: 63548.9 MiB 2019-11-04T19:01:19.007 controller-1 collectd[12249]: info alarm notifier reading: 1.42 % usage - Platform Memory total 2019-11-04T19:01:29.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 1.9% (avg per cpu); cpus: 36, Platform: 1.8% (Base: 1.4, k8s-system: 0.4), k8s-addon: 0.1 2019-11-04T19:01:29.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126283.1 MiB, Platform: 1679.6 MiB (Base: 689.3, k8s-system: 990.2), k8s-addon: 99.1 2019-11-04T19:01:29.008 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1794.2 MiB, cgroup-rss: 1782.8 MiB, Avail: 124488.9 MiB, Total: 126283.1 MiB 2019-11-04T19:01:29.008 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.25%, Anon: 792.2 MiB, Avail: 62698.2 MiB, Total: 63490.5 MiB 2019-11-04T19:01:29.008 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.58%, Anon: 1002.0 MiB, Avail: 62547.3 MiB, Total: 63549.3 MiB 2019-11-04T19:01:39.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.3% (avg per cpu); cpus: 36, Platform: 2.2% (Base: 1.7, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:01:39.005 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126280.5 MiB, Platform: 1691.8 MiB (Base: 701.6, k8s-system: 990.3), k8s-addon: 98.2 2019-11-04T19:01:39.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1803.2 MiB, cgroup-rss: 1793.9 MiB, Avail: 124477.3 MiB, Total: 126280.5 MiB 2019-11-04T19:01:39.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.25%, Anon: 792.5 MiB, Avail: 62698.9 MiB, Total: 63491.4 MiB 2019-11-04T19:01:39.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.59%, Anon: 1010.8 MiB, Avail: 62535.1 MiB, Total: 63545.8 MiB 2019-11-04T19:01:49.001 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.3% (avg per cpu); cpus: 36, Platform: 2.1% (Base: 1.4, k8s-system: 0.8), k8s-addon: 0.1 2019-11-04T19:01:49.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126280.3 MiB, Platform: 1682.0 MiB (Base: 689.9, k8s-system: 992.2), k8s-addon: 98.2 2019-11-04T19:01:49.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1791.9 MiB, cgroup-rss: 1784.4 MiB, Avail: 124488.5 MiB, Total: 126280.3 MiB 2019-11-04T19:01:49.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.25%, Anon: 790.5 MiB, Avail: 62698.8 MiB, Total: 63489.3 MiB 2019-11-04T19:01:49.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.58%, Anon: 1001.4 MiB, Avail: 62546.3 MiB, Total: 63547.7 MiB 
2019-11-04T19:01:53.640 controller-1 collectd[12249]: info NTP query plugin server list: ['0.pool.ntp.org', '1.pool.ntp.org', '3.pool.ntp.org'] 2019-11-04T19:01:53.659 controller-1 collectd[12249]: info NTPQ: +fd00:204::3 2019-11-04T19:01:53.659 controller-1 collectd[12249]: info NTPQ: 153.24.162.44 2 u 12 128 377 0.053 -1.116 0.194 2019-11-04T19:01:53.659 controller-1 collectd[12249]: info NTPQ: *64:ff9b::607e:7a27 2019-11-04T19:01:53.701 controller-1 collectd[12249]: info NTP query plugin 100.114:host=controller-1.ntp alarm cleared 2019-11-04T19:01:53.701 controller-1 collectd[12249]: info NTPQ: 67.201.132.53 2 u 92 128 377 40.588 -0.451 0.548 2019-11-04T19:01:53.701 controller-1 collectd[12249]: info NTPQ: +64:ff9b::d073:7e46 2019-11-04T19:01:53.781 controller-1 collectd[12249]: info NTP query plugin 100.114:host=controller-1.ntp=64:ff9b::d073:7e46 alarm cleared 2019-11-04T19:01:53.781 controller-1 collectd[12249]: info NTPQ: 140.142.1.8 3 u 101 128 373 72.855 0.367 2.489 2019-11-04T19:01:53.781 controller-1 collectd[12249]: info NTPQ: +64:ff9b::6c3d:3823 2019-11-04T19:01:53.862 controller-1 collectd[12249]: info NTP query plugin 100.114:host=controller-1.ntp=64:ff9b::6c3d:3823 alarm cleared 2019-11-04T19:01:53.862 controller-1 collectd[12249]: info NTPQ: 198.30.92.2 2 u 87 128 377 14.953 -2.639 0.355 2019-11-04T19:01:59.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.1% (avg per cpu); cpus: 36, Platform: 2.0% (Base: 1.5, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:01:59.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126282.7 MiB, Platform: 1666.2 MiB (Base: 686.3, k8s-system: 979.9), k8s-addon: 98.2 2019-11-04T19:01:59.008 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1779.6 MiB, cgroup-rss: 1768.6 MiB, Avail: 124503.1 MiB, Total: 126282.7 MiB 2019-11-04T19:01:59.008 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 780.8 MiB, Avail: 62709.1 MiB, Total: 63489.8 MiB 2019-11-04T19:01:59.008 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 998.8 MiB, Avail: 62550.7 MiB, Total: 63549.5 MiB 2019-11-04T19:02:09.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.8% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:02:09.005 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126281.8 MiB, Platform: 1667.2 MiB (Base: 687.3, k8s-system: 979.9), k8s-addon: 98.2 2019-11-04T19:02:09.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1780.6 MiB, cgroup-rss: 1769.5 MiB, Avail: 124501.3 MiB, Total: 126281.8 MiB 2019-11-04T19:02:09.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 781.8 MiB, Avail: 62708.4 MiB, Total: 63490.2 MiB 2019-11-04T19:02:09.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 998.7 MiB, Avail: 62549.5 MiB, Total: 63548.2 MiB 2019-11-04T19:02:19.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.1% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:02:19.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126285.3 MiB, Platform: 1667.7 MiB (Base: 687.5, k8s-system: 980.3), k8s-addon: 98.2 2019-11-04T19:02:19.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1779.8 MiB, cgroup-rss: 1770.1 MiB, Avail: 
124505.5 MiB, Total: 126285.3 MiB 2019-11-04T19:02:19.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 781.1 MiB, Avail: 62711.6 MiB, Total: 63492.7 MiB 2019-11-04T19:02:19.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 998.7 MiB, Avail: 62550.6 MiB, Total: 63549.3 MiB 2019-11-04T19:02:29.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 1.9% (avg per cpu); cpus: 36, Platform: 1.8% (Base: 1.3, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:02:29.008 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126285.4 MiB, Platform: 1666.6 MiB (Base: 686.3, k8s-system: 980.3), k8s-addon: 98.2 2019-11-04T19:02:29.009 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1779.9 MiB, cgroup-rss: 1768.9 MiB, Avail: 124505.4 MiB, Total: 126285.4 MiB 2019-11-04T19:02:29.009 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 780.7 MiB, Avail: 62712.3 MiB, Total: 63493.0 MiB 2019-11-04T19:02:29.009 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 999.2 MiB, Avail: 62549.8 MiB, Total: 63549.0 MiB 2019-11-04T19:02:39.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.3% (avg per cpu); cpus: 36, Platform: 2.2% (Base: 1.7, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:02:39.005 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126279.3 MiB, Platform: 1664.7 MiB (Base: 687.5, k8s-system: 977.2), k8s-addon: 98.2 2019-11-04T19:02:39.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1778.1 MiB, cgroup-rss: 1767.1 MiB, Avail: 124501.2 MiB, Total: 126279.3 MiB 2019-11-04T19:02:39.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 780.8 MiB, Avail: 62708.8 MiB, Total: 63489.5 MiB 2019-11-04T19:02:39.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 997.3 MiB, Avail: 62549.1 MiB, Total: 63546.4 MiB 2019-11-04T19:02:49.003 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 1.9% (avg per cpu); cpus: 36, Platform: 1.8% (Base: 1.4, k8s-system: 0.4), k8s-addon: 0.1 2019-11-04T19:02:49.008 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126285.2 MiB, Platform: 1664.8 MiB (Base: 687.5, k8s-system: 977.2), k8s-addon: 98.2 2019-11-04T19:02:49.008 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1778.2 MiB, cgroup-rss: 1767.1 MiB, Avail: 124507.0 MiB, Total: 126285.2 MiB 2019-11-04T19:02:49.008 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 780.7 MiB, Avail: 62712.0 MiB, Total: 63492.8 MiB 2019-11-04T19:02:49.008 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 997.5 MiB, Avail: 62551.6 MiB, Total: 63549.1 MiB 2019-11-04T19:02:59.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.2% (avg per cpu); cpus: 36, Platform: 2.0% (Base: 1.5, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:02:59.005 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126284.2 MiB, Platform: 1663.6 MiB (Base: 686.4, k8s-system: 977.2), k8s-addon: 98.2 2019-11-04T19:02:59.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1774.8 MiB, cgroup-rss: 1766.0 MiB, Avail: 124509.4 MiB, Total: 126284.2 MiB 2019-11-04T19:02:59.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, 
Anon: 1.23%, Anon: 779.6 MiB, Avail: 62712.5 MiB, Total: 63492.1 MiB 2019-11-04T19:02:59.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 995.2 MiB, Avail: 62553.5 MiB, Total: 63548.8 MiB 2019-11-04T19:03:09.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 1.9% (avg per cpu); cpus: 36, Platform: 1.8% (Base: 1.3, k8s-system: 0.4), k8s-addon: 0.1 2019-11-04T19:03:09.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126283.5 MiB, Platform: 1664.9 MiB (Base: 687.5, k8s-system: 977.5), k8s-addon: 98.2 2019-11-04T19:03:09.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1778.6 MiB, cgroup-rss: 1767.3 MiB, Avail: 124504.9 MiB, Total: 126283.5 MiB 2019-11-04T19:03:09.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 781.5 MiB, Avail: 62709.1 MiB, Total: 63490.7 MiB 2019-11-04T19:03:09.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 997.0 MiB, Avail: 62552.4 MiB, Total: 63549.4 MiB 2019-11-04T19:03:19.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:03:19.005 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126288.4 MiB, Platform: 1665.2 MiB (Base: 687.6, k8s-system: 977.6), k8s-addon: 98.2 2019-11-04T19:03:19.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1778.6 MiB, cgroup-rss: 1767.6 MiB, Avail: 124509.8 MiB, Total: 126288.4 MiB 2019-11-04T19:03:19.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 781.6 MiB, Avail: 62712.7 MiB, Total: 63494.3 MiB 2019-11-04T19:03:19.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 997.0 MiB, Avail: 62553.8 MiB, Total: 63550.7 MiB 2019-11-04T19:03:29.003 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 1.9% (avg per cpu); cpus: 36, Platform: 1.8% (Base: 1.4, k8s-system: 0.4), k8s-addon: 0.1 2019-11-04T19:03:29.008 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126286.4 MiB, Platform: 1663.9 MiB (Base: 686.3, k8s-system: 977.6), k8s-addon: 98.2 2019-11-04T19:03:29.008 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1777.5 MiB, cgroup-rss: 1766.3 MiB, Avail: 124508.8 MiB, Total: 126286.4 MiB 2019-11-04T19:03:29.008 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 780.6 MiB, Avail: 62713.2 MiB, Total: 63493.8 MiB 2019-11-04T19:03:29.008 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 997.0 MiB, Avail: 62552.3 MiB, Total: 63549.2 MiB 2019-11-04T19:03:39.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.4% (avg per cpu); cpus: 36, Platform: 2.2% (Base: 1.8, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:03:39.005 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126288.8 MiB, Platform: 1665.2 MiB (Base: 687.6, k8s-system: 977.6), k8s-addon: 98.2 2019-11-04T19:03:39.005 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1779.7 MiB, cgroup-rss: 1767.6 MiB, Avail: 124509.1 MiB, Total: 126288.8 MiB 2019-11-04T19:03:39.005 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 782.0 MiB, Avail: 62711.8 MiB, Total: 63493.8 MiB 2019-11-04T19:03:39.005 controller-1 collectd[12249]: 
info 4K numa memory usage: node1, Anon: 1.57%, Anon: 997.7 MiB, Avail: 62553.9 MiB, Total: 63551.6 MiB 2019-11-04T19:03:49.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.1% (avg per cpu); cpus: 36, Platform: 2.0% (Base: 1.3, k8s-system: 0.7), k8s-addon: 0.1 2019-11-04T19:03:49.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126288.3 MiB, Platform: 1675.5 MiB (Base: 697.8, k8s-system: 977.6), k8s-addon: 98.2 2019-11-04T19:03:49.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1786.7 MiB, cgroup-rss: 1777.6 MiB, Avail: 124501.7 MiB, Total: 126288.3 MiB 2019-11-04T19:03:49.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 781.6 MiB, Avail: 62713.6 MiB, Total: 63495.2 MiB 2019-11-04T19:03:49.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.58%, Anon: 1005.0 MiB, Avail: 62544.7 MiB, Total: 63549.7 MiB 2019-11-04T19:03:59.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:03:59.008 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126288.8 MiB, Platform: 1664.1 MiB (Base: 686.5, k8s-system: 977.7), k8s-addon: 98.2 2019-11-04T19:03:59.008 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1777.5 MiB, cgroup-rss: 1766.5 MiB, Avail: 124511.3 MiB, Total: 126288.8 MiB 2019-11-04T19:03:59.008 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 781.7 MiB, Avail: 62713.9 MiB, Total: 63495.6 MiB 2019-11-04T19:03:59.008 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 995.8 MiB, Avail: 62554.0 MiB, Total: 63549.8 MiB 2019-11-04T19:04:03.637 controller-1 collectd[12249]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-1","degrade":"clear","resource":""} 2019-11-04T19:04:09.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.8% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:04:09.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126282.9 MiB, Platform: 1664.7 MiB (Base: 687.1, k8s-system: 977.7), k8s-addon: 96.8 2019-11-04T19:04:09.008 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1776.7 MiB, cgroup-rss: 1765.7 MiB, Avail: 124506.3 MiB, Total: 126282.9 MiB 2019-11-04T19:04:09.008 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 781.1 MiB, Avail: 62710.4 MiB, Total: 63491.5 MiB 2019-11-04T19:04:09.008 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 995.6 MiB, Avail: 62552.4 MiB, Total: 63548.0 MiB 2019-11-04T19:04:19.001 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:04:19.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126285.5 MiB, Platform: 1664.8 MiB (Base: 687.1, k8s-system: 977.7), k8s-addon: 96.8 2019-11-04T19:04:19.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1777.1 MiB, cgroup-rss: 1765.7 MiB, Avail: 124508.4 MiB, Total: 126285.5 MiB 2019-11-04T19:04:19.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 781.3 MiB, Avail: 62712.5 MiB, Total: 63493.8 MiB 
2019-11-04T19:04:19.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 995.9 MiB, Avail: 62552.4 MiB, Total: 63548.3 MiB 2019-11-04T19:04:29.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.5, k8s-system: 0.4), k8s-addon: 0.1 2019-11-04T19:04:29.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126286.0 MiB, Platform: 1663.2 MiB (Base: 686.0, k8s-system: 977.2), k8s-addon: 96.8 2019-11-04T19:04:29.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1789.5 MiB, cgroup-rss: 1777.9 MiB, Avail: 124496.5 MiB, Total: 126286.0 MiB 2019-11-04T19:04:29.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.24%, Anon: 788.0 MiB, Avail: 62705.7 MiB, Total: 63493.7 MiB 2019-11-04T19:04:29.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.58%, Anon: 1001.5 MiB, Avail: 62547.5 MiB, Total: 63548.9 MiB 2019-11-04T19:04:39.003 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.2% (avg per cpu); cpus: 36, Platform: 2.1% (Base: 1.6, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:04:39.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126283.1 MiB, Platform: 1664.1 MiB (Base: 686.8, k8s-system: 977.2), k8s-addon: 96.8 2019-11-04T19:04:39.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1776.0 MiB, cgroup-rss: 1765.0 MiB, Avail: 124507.1 MiB, Total: 126283.1 MiB 2019-11-04T19:04:39.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 780.7 MiB, Avail: 62710.9 MiB, Total: 63491.5 MiB 2019-11-04T19:04:39.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 995.4 MiB, Avail: 62552.8 MiB, Total: 63548.2 MiB 2019-11-04T19:04:49.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 1.9% (avg per cpu); cpus: 36, Platform: 1.8% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:04:49.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126285.8 MiB, Platform: 1664.2 MiB (Base: 686.9, k8s-system: 977.2), k8s-addon: 96.8 2019-11-04T19:04:49.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1775.9 MiB, cgroup-rss: 1765.1 MiB, Avail: 124509.9 MiB, Total: 126285.8 MiB 2019-11-04T19:04:49.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 780.2 MiB, Avail: 62712.3 MiB, Total: 63492.5 MiB 2019-11-04T19:04:49.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 995.8 MiB, Avail: 62554.1 MiB, Total: 63549.9 MiB 2019-11-04T19:04:59.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.1% (avg per cpu); cpus: 36, Platform: 2.0% (Base: 1.5, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:04:59.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126287.4 MiB, Platform: 1662.9 MiB (Base: 685.7, k8s-system: 977.3), k8s-addon: 96.8 2019-11-04T19:04:59.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1775.9 MiB, cgroup-rss: 1763.9 MiB, Avail: 124511.5 MiB, Total: 126287.4 MiB 2019-11-04T19:04:59.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 779.6 MiB, Avail: 62713.3 MiB, Total: 63492.9 MiB 2019-11-04T19:04:59.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 996.3 MiB, Avail: 
62554.8 MiB, Total: 63551.1 MiB 2019-11-04T19:05:09.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.1% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.5, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:05:09.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126286.7 MiB, Platform: 1663.0 MiB (Base: 686.7, k8s-system: 976.2), k8s-addon: 96.8 2019-11-04T19:05:09.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1771.1 MiB, cgroup-rss: 1763.9 MiB, Avail: 124515.6 MiB, Total: 126286.7 MiB 2019-11-04T19:05:09.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.22%, Anon: 776.6 MiB, Avail: 62717.1 MiB, Total: 63493.7 MiB 2019-11-04T19:05:09.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 994.6 MiB, Avail: 62555.1 MiB, Total: 63549.7 MiB 2019-11-04T19:05:19.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 1.9% (avg per cpu); cpus: 36, Platform: 1.8% (Base: 1.3, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:05:19.009 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126287.2 MiB, Platform: 1663.2 MiB (Base: 686.9, k8s-system: 976.3), k8s-addon: 96.8 2019-11-04T19:05:19.009 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1775.1 MiB, cgroup-rss: 1764.2 MiB, Avail: 124512.1 MiB, Total: 126287.2 MiB 2019-11-04T19:05:19.009 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 779.7 MiB, Avail: 62713.5 MiB, Total: 63493.2 MiB 2019-11-04T19:05:19.009 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 995.5 MiB, Avail: 62555.1 MiB, Total: 63550.6 MiB 2019-11-04T19:05:29.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:05:29.008 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126284.2 MiB, Platform: 1662.1 MiB (Base: 685.7, k8s-system: 976.3), k8s-addon: 96.8 2019-11-04T19:05:29.008 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1774.0 MiB, cgroup-rss: 1763.0 MiB, Avail: 124510.2 MiB, Total: 126284.2 MiB 2019-11-04T19:05:29.008 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 778.8 MiB, Avail: 62712.1 MiB, Total: 63490.9 MiB 2019-11-04T19:05:29.008 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 995.2 MiB, Avail: 62554.6 MiB, Total: 63549.9 MiB 2019-11-04T19:05:39.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.4% (avg per cpu); cpus: 36, Platform: 2.3% (Base: 1.8, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:05:39.005 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126282.5 MiB, Platform: 1663.2 MiB (Base: 686.9, k8s-system: 976.4), k8s-addon: 96.8 2019-11-04T19:05:39.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1775.2 MiB, cgroup-rss: 1764.2 MiB, Avail: 124507.3 MiB, Total: 126282.5 MiB 2019-11-04T19:05:39.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 779.6 MiB, Avail: 62710.5 MiB, Total: 63490.1 MiB 2019-11-04T19:05:39.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 995.6 MiB, Avail: 62553.4 MiB, Total: 63549.0 MiB 2019-11-04T19:05:49.001 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 
2.3% (avg per cpu); cpus: 36, Platform: 2.1% (Base: 1.4, k8s-system: 0.8), k8s-addon: 0.1 2019-11-04T19:05:49.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126282.1 MiB, Platform: 1663.3 MiB (Base: 686.9, k8s-system: 976.4), k8s-addon: 96.8 2019-11-04T19:05:49.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1771.2 MiB, cgroup-rss: 1764.2 MiB, Avail: 124510.9 MiB, Total: 126282.1 MiB 2019-11-04T19:05:49.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.22%, Anon: 776.8 MiB, Avail: 62712.8 MiB, Total: 63489.5 MiB 2019-11-04T19:05:49.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.56%, Anon: 994.4 MiB, Avail: 62554.7 MiB, Total: 63549.1 MiB 2019-11-04T19:05:59.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 1.8% (avg per cpu); cpus: 36, Platform: 1.7% (Base: 1.3, k8s-system: 0.4), k8s-addon: 0.1 2019-11-04T19:05:59.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126283.4 MiB, Platform: 1662.3 MiB (Base: 685.9, k8s-system: 976.4), k8s-addon: 96.8 2019-11-04T19:05:59.008 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1774.0 MiB, cgroup-rss: 1763.1 MiB, Avail: 124509.4 MiB, Total: 126283.4 MiB 2019-11-04T19:05:59.008 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 778.9 MiB, Avail: 62712.5 MiB, Total: 63491.4 MiB 2019-11-04T19:05:59.008 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 995.2 MiB, Avail: 62553.5 MiB, Total: 63548.7 MiB 2019-11-04T19:06:09.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.1% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:06:09.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126284.2 MiB, Platform: 1663.1 MiB (Base: 686.7, k8s-system: 976.4), k8s-addon: 96.8 2019-11-04T19:06:09.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1775.0 MiB, cgroup-rss: 1764.0 MiB, Avail: 124509.3 MiB, Total: 126284.2 MiB 2019-11-04T19:06:09.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 779.7 MiB, Avail: 62711.3 MiB, Total: 63491.0 MiB 2019-11-04T19:06:09.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 995.3 MiB, Avail: 62554.5 MiB, Total: 63549.8 MiB 2019-11-04T19:06:19.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:06:19.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126287.9 MiB, Platform: 1663.4 MiB (Base: 686.9, k8s-system: 976.5), k8s-addon: 96.8 2019-11-04T19:06:19.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1776.5 MiB, cgroup-rss: 1764.4 MiB, Avail: 124511.5 MiB, Total: 126287.9 MiB 2019-11-04T19:06:19.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 780.3 MiB, Avail: 62713.0 MiB, Total: 63493.4 MiB 2019-11-04T19:06:19.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 996.1 MiB, Avail: 62555.0 MiB, Total: 63551.2 MiB 2019-11-04T19:06:29.001 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:06:29.006 controller-1 
collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126288.9 MiB, Platform: 1662.2 MiB (Base: 685.7, k8s-system: 976.5), k8s-addon: 96.8 2019-11-04T19:06:29.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1775.0 MiB, cgroup-rss: 1763.2 MiB, Avail: 124513.9 MiB, Total: 126288.9 MiB 2019-11-04T19:06:29.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 779.2 MiB, Avail: 62714.6 MiB, Total: 63493.8 MiB 2019-11-04T19:06:29.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.57%, Anon: 995.9 MiB, Avail: 62555.8 MiB, Total: 63551.7 MiB 2019-11-04T19:06:39.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.2% (avg per cpu); cpus: 36, Platform: 2.1% (Base: 1.6, k8s-system: 0.4), k8s-addon: 0.1 2019-11-04T19:06:39.005 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126284.5 MiB, Platform: 1661.5 MiB (Base: 685.0, k8s-system: 976.5), k8s-addon: 90.6 2019-11-04T19:06:39.005 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1763.3 MiB, cgroup-rss: 1756.3 MiB, Avail: 124521.2 MiB, Total: 126284.5 MiB 2019-11-04T19:06:39.005 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.22%, Anon: 774.6 MiB, Avail: 62715.8 MiB, Total: 63490.4 MiB 2019-11-04T19:06:39.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.56%, Anon: 988.7 MiB, Avail: 62562.0 MiB, Total: 63550.7 MiB 2019-11-04T19:06:49.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 1.8% (avg per cpu); cpus: 36, Platform: 1.7% (Base: 1.3, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:06:49.008 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126286.1 MiB, Platform: 1661.7 MiB (Base: 685.2, k8s-system: 976.5), k8s-addon: 90.6 2019-11-04T19:06:49.008 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1768.0 MiB, cgroup-rss: 1756.5 MiB, Avail: 124518.1 MiB, Total: 126286.1 MiB 2019-11-04T19:06:49.008 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.22%, Anon: 776.7 MiB, Avail: 62715.0 MiB, Total: 63491.8 MiB 2019-11-04T19:06:49.008 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.56%, Anon: 991.3 MiB, Avail: 62559.7 MiB, Total: 63551.0 MiB 2019-11-04T19:06:59.003 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.1% (avg per cpu); cpus: 36, Platform: 2.0% (Base: 1.5, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:06:59.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126287.5 MiB, Platform: 1629.0 MiB (Base: 682.4, k8s-system: 946.6), k8s-addon: 90.6 2019-11-04T19:06:59.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1735.6 MiB, cgroup-rss: 1723.8 MiB, Avail: 124551.8 MiB, Total: 126287.5 MiB 2019-11-04T19:06:59.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 770.6 MiB, Avail: 62722.9 MiB, Total: 63493.4 MiB 2019-11-04T19:06:59.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 965.1 MiB, Avail: 62585.6 MiB, Total: 63550.6 MiB 2019-11-04T19:07:09.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.1% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.5, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:07:09.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126288.8 MiB, Platform: 1630.0 MiB (Base: 683.4, 
k8s-system: 946.7), k8s-addon: 90.6 2019-11-04T19:07:09.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1735.8 MiB, cgroup-rss: 1724.8 MiB, Avail: 124553.0 MiB, Total: 126288.8 MiB 2019-11-04T19:07:09.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 770.3 MiB, Avail: 62724.4 MiB, Total: 63494.7 MiB 2019-11-04T19:07:09.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 965.5 MiB, Avail: 62585.2 MiB, Total: 63550.7 MiB 2019-11-04T19:07:19.001 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:07:19.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126287.3 MiB, Platform: 1630.2 MiB (Base: 683.4, k8s-system: 946.8), k8s-addon: 90.6 2019-11-04T19:07:19.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1733.2 MiB, cgroup-rss: 1724.9 MiB, Avail: 124554.1 MiB, Total: 126287.3 MiB 2019-11-04T19:07:19.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 768.8 MiB, Avail: 62724.3 MiB, Total: 63493.1 MiB 2019-11-04T19:07:19.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 964.4 MiB, Avail: 62586.3 MiB, Total: 63550.8 MiB 2019-11-04T19:07:29.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 1.8% (avg per cpu); cpus: 36, Platform: 1.7% (Base: 1.3, k8s-system: 0.4), k8s-addon: 0.1 2019-11-04T19:07:29.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126288.2 MiB, Platform: 1629.3 MiB (Base: 682.5, k8s-system: 946.8), k8s-addon: 90.6 2019-11-04T19:07:29.008 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1735.1 MiB, cgroup-rss: 1724.1 MiB, Avail: 124553.1 MiB, Total: 126288.2 MiB 2019-11-04T19:07:29.008 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 770.2 MiB, Avail: 62723.1 MiB, Total: 63493.3 MiB 2019-11-04T19:07:29.008 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 964.9 MiB, Avail: 62586.6 MiB, Total: 63551.5 MiB 2019-11-04T19:07:33.638 controller-1 collectd[12249]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-1","degrade":"clear","resource":""} 2019-11-04T19:07:39.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.4% (avg per cpu); cpus: 36, Platform: 2.3% (Base: 1.8, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:07:39.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126284.4 MiB, Platform: 1629.9 MiB (Base: 683.5, k8s-system: 946.4), k8s-addon: 90.6 2019-11-04T19:07:39.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1736.4 MiB, cgroup-rss: 1724.7 MiB, Avail: 124548.1 MiB, Total: 126284.4 MiB 2019-11-04T19:07:39.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 771.0 MiB, Avail: 62722.5 MiB, Total: 63493.5 MiB 2019-11-04T19:07:39.008 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 965.4 MiB, Avail: 62582.1 MiB, Total: 63547.5 MiB 2019-11-04T19:07:49.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.3% (avg per cpu); cpus: 36, Platform: 2.2% (Base: 1.4, k8s-system: 0.8), k8s-addon: 0.1 2019-11-04T19:07:49.008 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126287.6 
MiB, Platform: 1630.3 MiB (Base: 683.9, k8s-system: 946.4), k8s-addon: 90.6 2019-11-04T19:07:49.009 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1736.0 MiB, cgroup-rss: 1725.0 MiB, Avail: 124551.6 MiB, Total: 126287.6 MiB 2019-11-04T19:07:49.009 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 771.2 MiB, Avail: 62723.8 MiB, Total: 63495.0 MiB 2019-11-04T19:07:49.009 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 964.8 MiB, Avail: 62584.4 MiB, Total: 63549.2 MiB 2019-11-04T19:07:59.001 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:07:59.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126288.0 MiB, Platform: 1629.1 MiB (Base: 682.6, k8s-system: 946.4), k8s-addon: 88.9 2019-11-04T19:07:59.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1731.5 MiB, cgroup-rss: 1722.1 MiB, Avail: 124556.5 MiB, Total: 126288.0 MiB 2019-11-04T19:07:59.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 768.9 MiB, Avail: 62726.4 MiB, Total: 63495.3 MiB 2019-11-04T19:07:59.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.51%, Anon: 962.6 MiB, Avail: 62586.6 MiB, Total: 63549.3 MiB 2019-11-04T19:08:09.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.8% (Base: 1.3, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:08:09.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126284.6 MiB, Platform: 1634.8 MiB (Base: 688.4, k8s-system: 946.4), k8s-addon: 88.9 2019-11-04T19:08:09.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1737.5 MiB, cgroup-rss: 1727.7 MiB, Avail: 124547.0 MiB, Total: 126284.6 MiB 2019-11-04T19:08:09.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.22%, Anon: 774.6 MiB, Avail: 62718.9 MiB, Total: 63493.5 MiB 2019-11-04T19:08:09.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 963.4 MiB, Avail: 62584.7 MiB, Total: 63548.1 MiB 2019-11-04T19:08:19.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.1% (avg per cpu); cpus: 36, Platform: 2.0% (Base: 1.5, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:08:19.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126283.8 MiB, Platform: 1630.1 MiB (Base: 683.7, k8s-system: 946.4), k8s-addon: 88.9 2019-11-04T19:08:19.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1734.6 MiB, cgroup-rss: 1723.1 MiB, Avail: 124549.2 MiB, Total: 126283.8 MiB 2019-11-04T19:08:19.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 770.8 MiB, Avail: 62720.5 MiB, Total: 63491.3 MiB 2019-11-04T19:08:19.008 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 963.8 MiB, Avail: 62585.3 MiB, Total: 63549.1 MiB 2019-11-04T19:08:29.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.5, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:08:29.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126285.5 MiB, Platform: 1628.7 MiB (Base: 682.3, k8s-system: 946.4), k8s-addon: 88.9 2019-11-04T19:08:29.007 controller-1 
collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1732.7 MiB, cgroup-rss: 1721.7 MiB, Avail: 124552.8 MiB, Total: 126285.5 MiB 2019-11-04T19:08:29.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 769.4 MiB, Avail: 62722.2 MiB, Total: 63491.6 MiB 2019-11-04T19:08:29.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 963.4 MiB, Avail: 62587.2 MiB, Total: 63550.5 MiB 2019-11-04T19:08:39.001 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.3% (avg per cpu); cpus: 36, Platform: 2.2% (Base: 1.7, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:08:39.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126285.3 MiB, Platform: 1629.8 MiB (Base: 683.4, k8s-system: 946.4), k8s-addon: 88.9 2019-11-04T19:08:39.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1733.9 MiB, cgroup-rss: 1722.9 MiB, Avail: 124551.5 MiB, Total: 126285.3 MiB 2019-11-04T19:08:39.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 771.1 MiB, Avail: 62720.3 MiB, Total: 63491.4 MiB 2019-11-04T19:08:39.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 962.8 MiB, Avail: 62587.7 MiB, Total: 63550.5 MiB 2019-11-04T19:08:49.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 1.8% (avg per cpu); cpus: 36, Platform: 1.7% (Base: 1.3, k8s-system: 0.4), k8s-addon: 0.1 2019-11-04T19:08:49.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126285.8 MiB, Platform: 1642.3 MiB (Base: 695.4, k8s-system: 946.9), k8s-addon: 88.9 2019-11-04T19:08:49.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1743.7 MiB, cgroup-rss: 1735.3 MiB, Avail: 124542.1 MiB, Total: 126285.8 MiB 2019-11-04T19:08:49.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.23%, Anon: 780.3 MiB, Avail: 62712.8 MiB, Total: 63493.0 MiB 2019-11-04T19:08:49.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 963.4 MiB, Avail: 62585.9 MiB, Total: 63549.3 MiB 2019-11-04T19:08:59.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.1% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.5, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:08:59.005 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126286.9 MiB, Platform: 1629.4 MiB (Base: 682.5, k8s-system: 946.9), k8s-addon: 88.9 2019-11-04T19:08:59.005 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1734.3 MiB, cgroup-rss: 1722.4 MiB, Avail: 124552.7 MiB, Total: 126286.9 MiB 2019-11-04T19:08:59.005 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 770.1 MiB, Avail: 62724.6 MiB, Total: 63494.7 MiB 2019-11-04T19:08:59.005 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 964.2 MiB, Avail: 62584.6 MiB, Total: 63548.9 MiB 2019-11-04T19:09:09.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.1% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:09:09.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126285.6 MiB, Platform: 1630.1 MiB (Base: 683.1, k8s-system: 946.9), k8s-addon: 88.9 2019-11-04T19:09:09.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1734.1 MiB, cgroup-rss: 1723.1 MiB, Avail: 124551.6 MiB, Total: 
126285.6 MiB 2019-11-04T19:09:09.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 770.5 MiB, Avail: 62721.9 MiB, Total: 63492.4 MiB 2019-11-04T19:09:09.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 963.5 MiB, Avail: 62586.3 MiB, Total: 63549.8 MiB 2019-11-04T19:09:19.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:09:19.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126288.3 MiB, Platform: 1630.3 MiB (Base: 683.4, k8s-system: 947.0), k8s-addon: 88.9 2019-11-04T19:09:19.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1734.4 MiB, cgroup-rss: 1723.4 MiB, Avail: 124553.9 MiB, Total: 126288.3 MiB 2019-11-04T19:09:19.008 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 770.7 MiB, Avail: 62723.7 MiB, Total: 63494.4 MiB 2019-11-04T19:09:19.008 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 963.7 MiB, Avail: 62586.8 MiB, Total: 63550.4 MiB 2019-11-04T19:09:29.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 1.9% (avg per cpu); cpus: 36, Platform: 1.8% (Base: 1.4, k8s-system: 0.4), k8s-addon: 0.1 2019-11-04T19:09:29.005 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126286.4 MiB, Platform: 1624.9 MiB (Base: 681.7, k8s-system: 943.2), k8s-addon: 88.9 2019-11-04T19:09:29.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1725.7 MiB, cgroup-rss: 1718.0 MiB, Avail: 124560.7 MiB, Total: 126286.4 MiB 2019-11-04T19:09:29.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 767.3 MiB, Avail: 62727.0 MiB, Total: 63494.2 MiB 2019-11-04T19:09:29.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.51%, Anon: 958.4 MiB, Avail: 62590.3 MiB, Total: 63548.8 MiB 2019-11-04T19:09:39.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.3% (avg per cpu); cpus: 36, Platform: 2.2% (Base: 1.7, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:09:39.007 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126285.0 MiB, Platform: 1626.2 MiB (Base: 683.0, k8s-system: 943.2), k8s-addon: 89.1 2019-11-04T19:09:39.007 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1731.2 MiB, cgroup-rss: 1719.4 MiB, Avail: 124553.8 MiB, Total: 126285.0 MiB 2019-11-04T19:09:39.007 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 769.3 MiB, Avail: 62724.2 MiB, Total: 63493.5 MiB 2019-11-04T19:09:39.007 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.51%, Anon: 961.9 MiB, Avail: 62586.2 MiB, Total: 63548.1 MiB 2019-11-04T19:09:49.003 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.3% (avg per cpu); cpus: 36, Platform: 2.2% (Base: 1.4, k8s-system: 0.8), k8s-addon: 0.1 2019-11-04T19:09:49.008 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126288.3 MiB, Platform: 1626.2 MiB (Base: 683.0, k8s-system: 943.2), k8s-addon: 89.1 2019-11-04T19:09:49.008 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1730.4 MiB, cgroup-rss: 1719.4 MiB, Avail: 124557.9 MiB, Total: 126288.3 MiB 2019-11-04T19:09:49.008 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 768.9 
MiB, Avail: 62725.8 MiB, Total: 63494.8 MiB 2019-11-04T19:09:49.008 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.51%, Anon: 961.5 MiB, Avail: 62588.6 MiB, Total: 63550.1 MiB 2019-11-04T19:09:59.001 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:09:59.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126287.7 MiB, Platform: 1625.1 MiB (Base: 681.9, k8s-system: 943.2), k8s-addon: 89.1 2019-11-04T19:09:59.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1729.0 MiB, cgroup-rss: 1718.3 MiB, Avail: 124558.7 MiB, Total: 126287.7 MiB 2019-11-04T19:09:59.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 767.5 MiB, Avail: 62726.2 MiB, Total: 63493.7 MiB 2019-11-04T19:09:59.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.51%, Anon: 961.5 MiB, Avail: 62589.1 MiB, Total: 63550.6 MiB 2019-11-04T19:10:01.051 controller-1 systemd[1]: info Created slice User Slice of root. 2019-11-04T19:10:01.060 controller-1 systemd[1]: info Started Session 11 of user root. 2019-11-04T19:10:01.115 controller-1 systemd[1]: info Removed slice User Slice of root. 2019-11-04T19:10:09.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.8% (Base: 1.4, k8s-system: 0.4), k8s-addon: 0.1 2019-11-04T19:10:09.005 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126284.5 MiB, Platform: 1626.2 MiB (Base: 682.8, k8s-system: 943.3), k8s-addon: 90.4 2019-11-04T19:10:09.005 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1727.7 MiB, cgroup-rss: 1720.8 MiB, Avail: 124556.7 MiB, Total: 126284.5 MiB 2019-11-04T19:10:09.005 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 766.6 MiB, Avail: 62723.9 MiB, Total: 63490.5 MiB 2019-11-04T19:10:09.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.51%, Anon: 961.2 MiB, Avail: 62589.4 MiB, Total: 63550.5 MiB 2019-11-04T19:10:19.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.5), k8s-addon: 0.1 2019-11-04T19:10:19.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126287.0 MiB, Platform: 1626.9 MiB (Base: 683.5, k8s-system: 943.4), k8s-addon: 90.5 2019-11-04T19:10:19.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1733.0 MiB, cgroup-rss: 1721.7 MiB, Avail: 124554.0 MiB, Total: 126287.0 MiB 2019-11-04T19:10:19.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 769.5 MiB, Avail: 62722.8 MiB, Total: 63492.3 MiB 2019-11-04T19:10:19.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 963.5 MiB, Avail: 62587.2 MiB, Total: 63550.7 MiB 2019-11-04T19:10:29.002 controller-1 collectd[12249]: info platform cpu usage plugin Usage: 2.0% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.4, k8s-system: 0.4), k8s-addon: 0.1 2019-11-04T19:10:29.006 controller-1 collectd[12249]: info platform memory usage: Usage: 1.3%; Reserved: 126287.3 MiB, Platform: 1625.1 MiB (Base: 681.7, k8s-system: 943.4), k8s-addon: 90.5 2019-11-04T19:10:29.006 controller-1 collectd[12249]: info 4K memory usage: Anon: 1.4%, Anon: 1731.2 MiB, cgroup-rss: 1719.8 MiB, Avail: 124556.0 
MiB, Total: 126287.3 MiB 2019-11-04T19:10:29.006 controller-1 collectd[12249]: info 4K numa memory usage: node0, Anon: 1.21%, Anon: 767.7 MiB, Avail: 62725.4 MiB, Total: 63493.1 MiB 2019-11-04T19:10:29.006 controller-1 collectd[12249]: info 4K numa memory usage: node1, Anon: 1.52%, Anon: 963.5 MiB, Avail: 62587.2 MiB, Total: 63550.7 MiB 2019-11-04T19:10:34.439 controller-1 systemd[1]: info Removed slice system-systemd\x2dfsck.slice. 2019-11-04T19:10:34.445 controller-1 systemd[1]: info Stopped Dump dmesg to /var/log/dmesg. 2019-11-04T19:10:34.445 controller-1 systemd[1]: info Stopping Naming services LDAP client daemon.... 2019-11-04T19:10:34.000 controller-1 nslcd[84484]: info caught signal SIGTERM (15), shutting down 2019-11-04T19:10:34.000 controller-1 nslcd[84484]: info version 0.8.13 bailing out 2019-11-04T19:10:34.453 controller-1 systemd[1]: info Removed slice system-selinux\x2dpolicy\x2dmigrate\x2dlocal\x2dchanges.slice. 2019-11-04T19:10:34.453 controller-1 systemd[1]: info Stopped Stop Read-Ahead Data Collection 10s After Completed Startup. 2019-11-04T19:10:34.453 controller-1 systemd[1]: info Stopped target Multi-User System. 2019-11-04T19:10:34.453 controller-1 systemd[1]: info Stopping Command Scheduler... 2019-11-04T19:10:34.454 controller-1 systemd[1]: info Stopping Titanium Cloud Filesystem Initialization... 2019-11-04T19:10:34.454 controller-1 systemd[1]: info Stopping Self Monitoring and Reporting Technology (SMART) Daemon... 2019-11-04T19:10:34.000 controller-1 smartd[10816]: info smartd received signal 15: Terminated 2019-11-04T19:10:34.000 controller-1 smartd[10816]: info smartd is exiting (exit status 0) 2019-11-04T19:10:34.454 controller-1 systemd[1]: info Stopping memcached daemon... 2019-11-04T19:10:34.454 controller-1 memcached[12131]: info Signal handled: Terminated. 2019-11-04T19:10:34.455 controller-1 systemd[1]: info Stopping Titanium Cloud libvirt QEMU cleanup... 2019-11-04T19:10:34.455 controller-1 systemd[1]: info Stopping Dynamic System Tuning Daemon... 2019-11-04T19:10:34.455 controller-1 systemd[1]: info Stopped target Login Prompts. 2019-11-04T19:10:34.455 controller-1 systemd[1]: info Stopping Getty on tty1... 2019-11-04T19:10:34.470 controller-1 systemd[1]: info Stopped Resets System Activity Logs. 2019-11-04T19:10:34.470 controller-1 systemd[1]: info Stopping fast remote file copy program daemon... 2019-11-04T19:10:34.471 controller-1 systemd[1]: info Stopping Authorization Manager... 2019-11-04T19:10:34.471 controller-1 systemd[1]: info Stopping Fault Management REST API Service... 2019-11-04T19:10:34.471 controller-1 systemd[1]: info Stopping Titanium Cloud Maintenance Host Watchdog... 2019-11-04T19:10:34.472 controller-1 systemd[1]: info Stopping Kubernetes Kubelet Server... 2019-11-04T19:10:34.472 controller-1 systemd[1]: info Stopping D-Bus System Message Bus... 2019-11-04T19:10:34.472 controller-1 systemd[1]: info Stopping Service Management Shutdown Unit... 2019-11-04T19:10:34.486 controller-1 systemd[1]: info Stopping LVM2 PV scan on device 8:4... 2019-11-04T19:10:34.486 controller-1 systemd[1]: info Stopped target Timers. 2019-11-04T19:10:34.486 controller-1 systemd[1]: info Stopped Daily Cleanup of Temporary Directories. 2019-11-04T19:10:34.486 controller-1 systemd[1]: info Stopped daily update of the root trust anchor for DNSSEC. 2019-11-04T19:10:34.486 controller-1 systemd[1]: info Stopping Serial Getty on ttyS0... 2019-11-04T19:10:34.486 controller-1 systemd[1]: info Stopping LLDP daemon... 
2019-11-04T19:10:34.491 controller-1 systemd[1]: info Stopped target RPC Port Mapper. 2019-11-04T19:10:34.497 controller-1 systemd[1]: info Stopped target rpc_pipefs.target. 2019-11-04T19:10:34.504 controller-1 systemd[1]: info Unmounting RPC Pipe File System... 2019-11-04T19:10:34.506 controller-1 umount[492815]: info umount: /var/lib/nfs/rpc_pipefs: target is busy. 2019-11-04T19:10:34.506 controller-1 umount[492815]: info (In some cases useful info about processes that use 2019-11-04T19:10:34.506 controller-1 umount[492815]: info the device is found by lsof(8) or fuser(1)) 2019-11-04T19:10:34.510 controller-1 systemd[1]: info Stopping Login Service... 2019-11-04T19:10:34.536 controller-1 systemd[1]: info Stopped Authorization Manager. 2019-11-04T19:10:34.550 controller-1 systemd[1]: info Stopped Login Service. 2019-11-04T19:10:34.564 controller-1 systemd[1]: info Stopped D-Bus System Message Bus. 2019-11-04T19:10:34.582 controller-1 systemd[1]: info Stopped Self Monitoring and Reporting Technology (SMART) Daemon. 2019-11-04T19:10:34.603 controller-1 systemd[1]: info Stopped memcached daemon. 2019-11-04T19:10:34.608 controller-1 systemd[1]: notice tuned.service: main process exited, code=exited, status=1/FAILURE 2019-11-04T19:10:34.620 controller-1 systemd[1]: info Stopped Dynamic System Tuning Daemon. 2019-11-04T19:10:34.626 controller-1 systemd[1]: notice Unit tuned.service entered failed state. 2019-11-04T19:10:34.626 controller-1 systemd[1]: warning tuned.service failed. 2019-11-04T19:10:34.626 controller-1 systemd[1]: notice lldpd.service: main process exited, code=exited, status=1/FAILURE 2019-11-04T19:10:34.634 controller-1 systemd[1]: info Stopped LLDP daemon. 2019-11-04T19:10:34.638 controller-1 systemd[1]: notice Unit lldpd.service entered failed state. 2019-11-04T19:10:34.638 controller-1 systemd[1]: warning lldpd.service failed. 2019-11-04T19:10:34.648 controller-1 systemd[1]: info Stopped Command Scheduler. 2019-11-04T19:10:34.660 controller-1 systemd[1]: info Stopped Naming services LDAP client daemon.. 2019-11-04T19:10:34.678 controller-1 systemd[1]: info Stopped Serial Getty on ttyS0. 2019-11-04T19:10:34.682 controller-1 fm-api[492791]: info Stopping fm-api: Stopped 2019-11-04T19:10:34.697 controller-1 systemd[1]: info Stopped Getty on tty1. 2019-11-04T19:10:34.709 controller-1 systemd[1]: info Stopped Titanium Cloud Filesystem Initialization. 2019-11-04T19:10:34.730 controller-1 systemd[1]: info Stopped Titanium Cloud libvirt QEMU cleanup. 2019-11-04T19:10:34.749 controller-1 systemd[1]: info Stopped Fault Management REST API Service. 2019-11-04T19:10:34.768 controller-1 systemd[1]: info Stopped LVM2 PV scan on device 8:4. 2019-11-04T19:10:34.774 controller-1 systemd[1]: notice var-lib-nfs-rpc_pipefs.mount mount process exited, code=exited status=32 2019-11-04T19:10:34.774 controller-1 systemd[1]: err Failed unmounting RPC Pipe File System. 2019-11-04T19:10:34.800 controller-1 systemd[1]: info Stopped Kubernetes Kubelet Server. 
2019-11-04T19:10:34.808 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.808 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.809 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.809 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.810 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.810 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.811 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.812 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.812 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.813 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.813 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.813 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.813 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.813 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.814 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.815 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.815 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.815 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.815 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.816 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.817 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.817 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.818 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.818 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.818 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 2019-11-04T19:10:34.818 controller-1 systemd[1]: warning Failed to propagate agent release message: Transport endpoint is not connected 
2019-11-04T19:10:34.820 controller-1 systemd[1]: info Stopping Docker Application Container Engine... 2019-11-04T19:10:34.821 controller-1 dockerd[12258]: info time="2019-11-04T19:10:34.821049741Z" level=info msg="Processing signal 'terminated'" 2019-11-04T19:10:34.839 controller-1 systemd[1]: info Removed slice system-lvm2\x2dpvscan.slice. 2019-11-04T19:10:34.847 controller-1 systemd[1]: info Stopping StarlingX Cloud Filesystem Auto-mounter... 2019-11-04T19:10:34.862 controller-1 systemd[1]: info Removed slice system-getty.slice. 2019-11-04T19:10:34.000 controller-1 rsyncd[10648]: info sent 0 bytes received 0 bytes total size 0 2019-11-04T19:10:34.883 controller-1 systemd[1]: info Removed slice system-serial\x2dgetty.slice. 2019-11-04T19:10:34.891 controller-1 systemd[1]: info Stopping Permit User Sessions... 2019-11-04T19:10:34.895 controller-1 systemd[1]: info Stopped target System Time Synchronized. 2019-11-04T19:10:34.917 controller-1 systemd[1]: info Stopped fast remote file copy program daemon. 2019-11-04T19:10:35.016 controller-1 systemd[1]: info Stopped StarlingX Cloud Filesystem Auto-mounter. 2019-11-04T19:10:35.020 controller-1 containerd[12218]: info time="2019-11-04T19:10:35.020357869Z" level=info msg="shim reaped" id=dd3ec6de4fc48f4088e52d7e7f300adbf08db41c5fafbefcce527a1e9c38d94c 2019-11-04T19:10:35.026 controller-1 containerd[12218]: info time="2019-11-04T19:10:35.026180654Z" level=info msg="shim reaped" id=7a008a2f396814ecf5e1f8e63f9cc46fb62fa0deac5166c9ab721d210db7b5db 2019-11-04T19:10:35.026 controller-1 containerd[12218]: info time="2019-11-04T19:10:35.026717323Z" level=info msg="shim reaped" id=acbc4ddc6fb8489789c3a08ac833dc119395db5b82c7bdb0a2a40aedc6f01892 2019-11-04T19:10:35.030 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.030291370Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:10:35.035 controller-1 containerd[12218]: info time="2019-11-04T19:10:35.035329100Z" level=info msg="shim reaped" id=7c5dd5b9f49fe00cc80931492a699aaa5df4583ba13af0a5358df9f765ff027f 2019-11-04T19:10:35.036 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.036193230Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:10:35.036 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.036636350Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:10:35.045 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.045374604Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:10:35.054 controller-1 containerd[12218]: info time="2019-11-04T19:10:35.054600306Z" level=info msg="shim reaped" id=99407b02f730293fda5d88f55474c98a52876fb2e44b55fc5ae4bd85e749e245 2019-11-04T19:10:35.064 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.064255665Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:10:35.083 controller-1 containerd[12218]: info time="2019-11-04T19:10:35.083286667Z" level=info msg="shim reaped" id=ee44ef250a59a517a55a204d81cfbf50b6629e22a1046e3fdb75abb0d3492910 2019-11-04T19:10:35.093 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.093089632Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete 
type="*events.TaskDelete" 2019-11-04T19:10:35.097 controller-1 containerd[12218]: info time="2019-11-04T19:10:35.097621598Z" level=info msg="shim reaped" id=8a83b43c787e05157907f9c85fd813c42cfd0ebacd013c79188e39ff78409a38 2019-11-04T19:10:35.107 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.107430762Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:10:35.115 controller-1 containerd[12218]: info time="2019-11-04T19:10:35.115619059Z" level=info msg="shim reaped" id=80b320026035c0e539d530f585948dea33400153ae8017f0bee6f55fbff2d50f 2019-11-04T19:10:35.115 controller-1 containerd[12218]: info time="2019-11-04T19:10:35.115790038Z" level=info msg="shim reaped" id=9c568fdd885eb49cdb876cdb089c84c272533378a6dc96904501700416d558bd 2019-11-04T19:10:35.115 controller-1 containerd[12218]: info time="2019-11-04T19:10:35.115794320Z" level=info msg="shim reaped" id=ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41 2019-11-04T19:10:35.116 controller-1 containerd[12218]: info time="2019-11-04T19:10:35.116248507Z" level=info msg="shim reaped" id=92dc553c15ea21e4ff78c1437005ab191d5e554b8c6f0fc9567d7c92b67d4e75 2019-11-04T19:10:35.116 controller-1 containerd[12218]: info time="2019-11-04T19:10:35.116576100Z" level=info msg="shim reaped" id=62f1a7b299abd95d2dbc00e3aea65f2f932bac44bbc2e7bc00117b9685c32808 2019-11-04T19:10:35.116 controller-1 containerd[12218]: info time="2019-11-04T19:10:35.116578161Z" level=info msg="shim reaped" id=8a48cde1943c4f7f2160103b715c84769145784819dc41c1ee2fd2f9176361a0 2019-11-04T19:10:35.125 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.125545924Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:10:35.125 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.125566196Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:10:35.125 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.125605664Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:10:35.126 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.126150153Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:10:35.126 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.126421720Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:10:35.126 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.126442868Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:10:35.128 controller-1 systemd[1]: info Stopped Permit User Sessions. 2019-11-04T19:10:35.130 controller-1 containerd[12218]: info time="2019-11-04T19:10:35.130131246Z" level=info msg="shim reaped" id=6be52826841dfadedcbe4b597d8335dd8c3a552c4a294bd8c3dd3d646f4b3251 2019-11-04T19:10:35.140 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.140004570Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:10:35.148 controller-1 systemd[1]: info Unmounted /var/lib/docker/overlay2/b4891368f3e5d033385206ca649b2cd5e1fa75f79452057d3c511721772171aa/merged. 
2019-11-04T19:10:35.177 controller-1 systemd[1]: info Unmounted /var/lib/docker/overlay2/fa7567cdc5a601f96fdcec4705a73d0ab750a2a007a89e402ed4b59ff9a6663a/merged. 2019-11-04T19:10:35.179 controller-1 containerd[12218]: info time="2019-11-04T19:10:35.179279434Z" level=info msg="shim reaped" id=5e492954af5b8adb96a1f47d5121f7cece06db7ecec2ca58b65b0aadbe8896d1 2019-11-04T19:10:35.189 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.189144566Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:10:35.198 controller-1 systemd[1]: info Unmounted /var/lib/docker/overlay2/0196d24787eeb88f8cfe015af28e964832517749ae1462bf420ee26eaea7de69/merged. 2019-11-04T19:10:35.207 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.207355993Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby 2019-11-04T19:10:35.207 controller-1 dockerd[12258]: info time="2019-11-04T19:10:35.207735205Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby 2019-11-04T19:10:35.222 controller-1 systemd[1]: info Unmounted /var/lib/docker/overlay2/ad1fdb3b1e7728cae1fda21b0eeb06c0aa75d254e3459e5bbd73fed2ac325270/merged. 2019-11-04T19:10:35.242 controller-1 systemd[1]: info Unmounted /run/docker/netns/0b29635cf8e0. 2019-11-04T19:10:35.255 controller-1 systemd[1]: info Unmounted /var/lib/docker/overlay2/4e12483caa6a91a8a0d3e95631f4796953bb30eec7d015e76b3cccd6767b5ab6/merged. 2019-11-04T19:10:35.273 controller-1 systemd[1]: info Unmounted /var/lib/docker/containers/8a48cde1943c4f7f2160103b715c84769145784819dc41c1ee2fd2f9176361a0/mounts/shm. 2019-11-04T19:10:35.288 controller-1 systemd[1]: info Unmounted /var/lib/docker/overlay2/aa4813794fafa2a0728f00fbe79c2e5b266395d5a4c21499fc232361e86a8d30/merged. 2019-11-04T19:10:35.306 controller-1 systemd[1]: info Unmounted /var/lib/docker/overlay2/032cea2890f10439d6a3456fc99f4abc27c5f84ff54f4f3734796ccb81623a36/merged. 2019-11-04T19:10:35.324 controller-1 systemd[1]: info Unmounted /var/lib/docker/containers/7a008a2f396814ecf5e1f8e63f9cc46fb62fa0deac5166c9ab721d210db7b5db/mounts/shm. 2019-11-04T19:10:35.343 controller-1 systemd[1]: info Unmounted /var/lib/docker/overlay2/8eca3ae73dce0048575575a05e591ed3ee4e35fe147c0a1243c806c40a591a7a/merged. 2019-11-04T19:10:35.358 controller-1 systemd[1]: info Unmounted /var/lib/docker/containers/dd3ec6de4fc48f4088e52d7e7f300adbf08db41c5fafbefcce527a1e9c38d94c/mounts/shm. 2019-11-04T19:10:35.376 controller-1 systemd[1]: info Unmounted /var/lib/docker/overlay2/7e6f8efc908cea9eae9f8fd725e1a94abb25335e4e5da731c2b9d40573b62588/merged. 2019-11-04T19:10:35.394 controller-1 systemd[1]: info Unmounted /var/lib/docker/containers/7c5dd5b9f49fe00cc80931492a699aaa5df4583ba13af0a5358df9f765ff027f/mounts/shm. 2019-11-04T19:10:35.410 controller-1 systemd[1]: info Unmounted /var/lib/docker/overlay2/b5a0dfb127f68eb914a5f467c6578bbf84a7c0289427066b6c1a94e6f79ba946/merged. 2019-11-04T19:10:35.428 controller-1 systemd[1]: info Unmounted /var/lib/docker/containers/acbc4ddc6fb8489789c3a08ac833dc119395db5b82c7bdb0a2a40aedc6f01892/mounts/shm. 2019-11-04T19:10:35.445 controller-1 systemd[1]: info Unmounted /var/lib/docker/overlay2/0f59ca33dbb2bbf7125f9b02f6b10cc4bd8a429da64c62234d6dd10e40778931/merged. 
2019-11-04T19:10:35.463 controller-1 systemd[1]: info Unmounted /var/lib/docker/overlay2/7f4506d4d642493672bb3caa775d9c59b683c95903492a149606440420090140/merged. 2019-11-04T19:10:35.484 controller-1 systemd[1]: info Unmounted /var/lib/docker/containers/ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41/mounts/shm. 2019-11-04T19:10:35.502 controller-1 systemd[1]: info Unmounted /var/lib/docker/overlay2/0d8baaa1734eba215379841d8e219952daac5f5008dac84b807177787fa810e9/merged. 2019-11-04T19:10:35.521 controller-1 systemd[1]: info Unmounted /var/lib/docker/containers/6be52826841dfadedcbe4b597d8335dd8c3a552c4a294bd8c3dd3d646f4b3251/mounts/shm. 2019-11-04T19:10:35.536 controller-1 systemd[1]: info Unmounted /var/lib/docker/overlay2/9ef0c17cbb72f12ae70a0ab8ba16b99c64e40bb28caac1f542c9191791f27772/merged. 2019-11-04T19:10:35.554 controller-1 systemd[1]: info Unmounted /var/lib/docker/overlay2/2cf6eb9ec835779043591f3a1a55c9d7445eb5bdff02319a2ad7072814c37530/merged. 2019-11-04T19:10:35.570 controller-1 systemd[1]: info Stopping OpenLDAP Server Daemon... 2019-11-04T19:10:35.601 controller-1 openldap[493287]: info Stopping NSCD: Stopping SLAPD: . 2019-11-04T19:10:35.601 controller-1 systemd[1]: info Stopped Docker Application Container Engine. 2019-11-04T19:10:35.613 controller-1 hostw[492792]: info Stopping hostwd: [ OK ] OK 2019-11-04T19:10:35.614 controller-1 systemd[1]: info Stopped OpenLDAP Server Daemon. 2019-11-04T19:10:35.630 controller-1 systemd[1]: info Stopped Titanium Cloud Maintenance Host Watchdog. 2019-11-04T19:10:35.641 controller-1 systemd[1]: info Stopping Titanium Cloud Maintenance Process Monitor... 2019-11-04T19:10:35.649 controller-1 systemd[1]: info Stopping containerd container runtime... 2019-11-04T19:10:35.670 controller-1 systemd[1]: info Stopped containerd container runtime. 2019-11-04T19:10:35.766 controller-1 pmon[493311]: info Stopping pmond: [ OK ] OK 2019-11-04T19:10:35.836 controller-1 systemd[1]: info Stopped Titanium Cloud Maintenance Process Monitor. 2019-11-04T19:10:35.845 controller-1 systemd[1]: info Stopping Titanium Cloud Maintenance Command Handler Client... 2019-11-04T19:10:35.854 controller-1 systemd[1]: info Stopping Service Management Event Recorder Unit... 2019-11-04T19:10:35.861 controller-1 systemd[1]: info Stopping Starling-X Maintenance Link Monitor... 2019-11-04T19:10:35.868 controller-1 systemd[1]: info Stopping ACPI Event Daemon... 2019-11-04T19:10:35.875 controller-1 systemd[1]: info Stopping Titanium Cloud Maintenance Heartbeat Agent... 2019-11-04T19:10:35.000 controller-1 ntpd[87544]: notice ntpd exiting on signal 15 2019-11-04T19:10:35.883 controller-1 systemd[1]: info Stopping Network Time Service... 2019-11-04T19:10:35.000 controller-1 acpid: notice exiting 2019-11-04T19:10:35.888 controller-1 systemd[1]: info Stopping Titanium Cloud Maintenance Logger... 2019-11-04T19:10:35.896 controller-1 systemd[1]: info Stopping OpenSSH server daemon... 2019-11-04T19:10:35.900 controller-1 systemd[1]: info Stopping TIS Patching Controller Daemon... 2019-11-04T19:10:35.904 controller-1 sw-patch-controller-daemon[493369]: info Stopping sw-patch-controller-daemon...done. 2019-11-04T19:10:35.906 controller-1 hbsAgent[493348]: info Starting hbsAgent: is already running OK 2019-11-04T19:10:35.908 controller-1 systemd[1]: info Stopping Titanium Cloud Maintenance Goenable Ready... 
2019-11-04T19:10:35.911 controller-1 sshd[493367]: info Stopping sshd: Stopping sshd: [FAILED] 2019-11-04T19:10:35.912 controller-1 goenabled[493382]: info Stopping goenabled: [ OK ] 2019-11-04T19:10:35.915 controller-1 systemd[1]: info Stopping Titanium Cloud Maintenance Alarm Handler Client... 2019-11-04T19:10:35.923 controller-1 collectd[12249]: info Exiting normally. 2019-11-04T19:10:35.923 controller-1 systemd[1]: info Stopping Collectd statistics daemon and extension services... 2019-11-04T19:10:35.923 controller-1 collectd[12249]: info collectd: Stopping 5 read threads. 2019-11-04T19:10:35.924 controller-1 collectd[12249]: info collectd: Stopping 5 write threads. 2019-11-04T19:10:35.931 controller-1 systemd[1]: info Stopping Titanium Cloud Maintenance Filesystem Monitor... 2019-11-04T19:10:35.939 controller-1 systemd[1]: info Stopping TIS Patching Agent... 2019-11-04T19:10:35.943 controller-1 sw-patch-agent[493419]: info Stopping sw-patch-agent...done. 2019-11-04T19:10:35.958 controller-1 systemd[1]: info Stopped ACPI Event Daemon. 2019-11-04T19:10:35.970 controller-1 systemd[1]: info Stopped Network Time Service. 2019-11-04T19:10:35.978 controller-1 systemd[1]: notice sshd.service: control process exited, code=exited status=1 2019-11-04T19:10:35.979 controller-1 systemd[1]: info Stopped OpenSSH server daemon. 2019-11-04T19:10:35.986 controller-1 systemd[1]: notice Unit sshd.service entered failed state. 2019-11-04T19:10:35.986 controller-1 systemd[1]: warning sshd.service failed. 2019-11-04T19:10:35.987 controller-1 lmon[493334]: info Stopping lmond: [ OK ] OK 2019-11-04T19:10:35.993 controller-1 systemd[1]: info Stopped TIS Patching Controller Daemon. 2019-11-04T19:10:36.014 controller-1 systemd[1]: info Stopped Titanium Cloud Maintenance Heartbeat Agent. 2019-11-04T19:10:36.026 controller-1 mtclog[493361]: info Stopping mtclogd: [ OK ] OK 2019-11-04T19:10:36.034 controller-1 systemd[1]: info Stopped Starling-X Maintenance Link Monitor. 2019-11-04T19:10:36.049 controller-1 systemd[1]: info Stopped Titanium Cloud Maintenance Logger. 2019-11-04T19:10:36.051 controller-1 mtcalarm[493403]: info Stopping mtcalarmd: [ OK ] OK 2019-11-04T19:10:36.066 controller-1 fsmon[493408]: info Stopping fsmond: [ OK ] OK 2019-11-04T19:10:36.076 controller-1 systemd[1]: info Stopped Titanium Cloud Maintenance Goenable Ready. 2019-11-04T19:10:36.093 controller-1 systemd[1]: info Stopped Titanium Cloud Maintenance Alarm Handler Client. 2019-11-04T19:10:36.110 controller-1 systemd[1]: info Stopped Titanium Cloud Maintenance Filesystem Monitor. 2019-11-04T19:10:36.125 controller-1 systemd[1]: info Stopped TIS Patching Agent. 2019-11-04T19:10:36.148 controller-1 systemd[1]: info Stopped TIS Patching Controller. 2019-11-04T19:10:36.158 controller-1 collectd[12249]: info ===== 2019-11-04T19:10:36.162 controller-1 systemd[1]: info Stopped Set time via NTP. 2019-11-04T19:10:36.168 controller-1 systemd[1]: info Stopping StarlingX Filesystem Server... 2019-11-04T19:10:36.179 controller-1 nfsserver[493466]: info stopping mountd: done 2019-11-04T19:10:36.000 controller-1 rpc.mountd[12389]: notice Caught signal 15, un-registering and exiting. 2019-11-04T19:10:36.189 controller-1 systemd[1]: info Stopped Collectd statistics daemon and extension services. 2019-11-04T19:10:36.198 controller-1 systemd[1]: info Stopping InfluxDB open-source, distributed, time series database... 2019-11-04T19:10:36.222 controller-1 systemd[1]: info Stopped InfluxDB open-source, distributed, time series database. 
2019-11-04T19:10:36.993 controller-1 sm-eru[493330]: info Stopping sm-eru: [ OK ] OK 2019-11-04T19:10:37.013 controller-1 systemd[1]: info Stopped Service Management Event Recorder Unit. 2019-11-04T19:10:37.022 controller-1 systemd[1]: info Stopping Service Management API Unit... 2019-11-04T19:10:37.033 controller-1 sm-api[493621]: info Stopping sm-api: OK 2019-11-04T19:10:37.043 controller-1 systemd[1]: info Stopped Service Management API Unit. 2019-11-04T19:10:37.212 controller-1 nfsserver[493466]: info stopping nfsd: .done 2019-11-04T19:10:37.220 controller-1 systemd[1]: info Stopped StarlingX Filesystem Server. 2019-11-04T19:10:38.979 controller-1 systemd[1]: notice mtcClient.service: main process exited, code=killed, status=9/KILL 2019-11-04T19:10:39.094 controller-1 mtcClient[493326]: info Stopping mtcClient: [ OK ] FAIL 2019-11-04T19:10:39.096 controller-1 systemd[1]: notice mtcClient.service: control process exited, code=exited status=7 2019-11-04T19:10:39.096 controller-1 systemd[1]: info Stopped Titanium Cloud Maintenance Command Handler Client. 2019-11-04T19:10:39.104 controller-1 systemd[1]: notice Unit mtcClient.service entered failed state. 2019-11-04T19:10:39.104 controller-1 systemd[1]: warning mtcClient.service failed. 2019-11-04T19:10:39.105 controller-1 systemd[1]: info Stopping Titanium Cloud Maintenance Heartbeat Client... 2019-11-04T19:10:39.246 controller-1 hbsClient[494179]: info Stopping hbsClient: [ OK ] OK 2019-11-04T19:10:39.255 controller-1 systemd[1]: info Stopped Titanium Cloud Maintenance Heartbeat Client. 2019-11-04T19:10:39.500 controller-1 systemd[1]: info Stopped Service Management Shutdown Unit. 2019-11-04T19:10:39.508 controller-1 systemd[1]: info Stopping Service Management Unit... 2019-11-04T19:10:42.642 controller-1 systemd[1]: notice sm.service: main process exited, code=killed, status=9/KILL 2019-11-04T19:10:43.769 controller-1 sm[494198]: info Stopping sm: [ OK ] OK 2019-11-04T19:10:43.780 controller-1 systemd[1]: info Stopped Service Management Unit. 2019-11-04T19:10:43.787 controller-1 systemd[1]: notice Unit sm.service entered failed state. 2019-11-04T19:10:43.787 controller-1 systemd[1]: warning sm.service failed. 2019-11-04T19:10:43.788 controller-1 systemd[1]: info Stopping Service Management Watchdog... 2019-11-04T19:10:44.929 controller-1 sm-watchdog[494406]: info Stopping sm-watchdog: [ OK ] OK 2019-11-04T19:10:44.948 controller-1 systemd[1]: info Stopped Service Management Watchdog. 2019-11-04T19:10:44.964 controller-1 systemd[1]: info Stopped General TIS config gate. 2019-11-04T19:10:44.993 controller-1 systemd[1]: info Stopped controllerconfig service. 2019-11-04T19:10:44.999 controller-1 systemd[1]: info Stopped target Remote File Systems. 2019-11-04T19:10:45.006 controller-1 systemd[1]: info Stopped target Remote File Systems (Pre). 2019-11-04T19:10:45.014 controller-1 systemd[1]: info Stopping Logout off all iSCSI sessions on shutdown... 2019-11-04T19:10:45.020 controller-1 iscsiadm[494469]: info iscsiadm: No matching sessions found 2019-11-04T19:10:45.022 controller-1 systemd[1]: info Stopped target NFS client services. 2019-11-04T19:10:45.029 controller-1 systemd[1]: info Stopping GSSAPI Proxy Daemon... 2019-11-04T19:10:45.035 controller-1 systemd[1]: info Stopping Titanium Cloud System Inventory Agent... 2019-11-04T19:10:45.042 controller-1 systemd[1]: info Stopping Titanium Cloud Log Management... 2019-11-04T19:10:45.046 controller-1 logmgmt[494475]: info Stopping logmgmt...done. 
2019-11-04T19:10:45.049 controller-1 sysinv-agent[494472]: info Stopping sysinv-agent: OK 2019-11-04T19:10:45.060 controller-1 systemd[1]: info Stopped GSSAPI Proxy Daemon. 2019-11-04T19:10:45.067 controller-1 systemd[1]: notice logmgmt.service: main process exited, code=exited, status=1/FAILURE 2019-11-04T19:10:45.078 controller-1 systemd[1]: info Stopped Logout off all iSCSI sessions on shutdown. 2019-11-04T19:10:45.093 controller-1 systemd[1]: info Stopped Titanium Cloud Log Management. 2019-11-04T19:10:45.099 controller-1 systemd[1]: notice Unit logmgmt.service entered failed state. 2019-11-04T19:10:45.099 controller-1 systemd[1]: warning logmgmt.service failed. 2019-11-04T19:10:45.110 controller-1 systemd[1]: info Stopped Titanium Cloud System Inventory Agent. 2019-11-04T19:10:45.121 controller-1 systemd[1]: info Stopping StarlingX Filesystem Common... 2019-11-04T19:10:45.135 controller-1 systemd[1]: info Stopped TIS Patching. 2019-11-04T19:10:45.140 controller-1 systemd[1]: info Stopped target Network is Online. 2019-11-04T19:10:45.147 controller-1 systemd[1]: info Stopping System Logger Daemon... 2019-11-04T19:10:45.150 controller-1 nfscommon[494489]: info stopping idmapd: done 2019-11-04T19:10:45.151 controller-1 nfscommon[494489]: info stopping statd: done 2019-11-04T19:10:45.153 controller-1 systemd[1]: info Stopping Open-iSCSI... 2019-11-04T19:10:45.000 controller-1 iscsid: warning iscsid shutting down. 2019-11-04T19:10:45.168 controller-1 systemd[1]: notice Unit var-lib-nfs-rpc_pipefs.mount entered failed state. 2019-11-04T19:10:45.178 controller-1 systemd[1]: info Stopped StarlingX Filesystem Common. 2019-11-04T19:10:45.192 controller-1 systemd[1]: info Stopped Open-iSCSI. 2019-11-04T19:10:45.198 controller-1 systemd[1]: info Stopped target Network. 2019-11-04T19:10:45.203 controller-1 systemd[1]: info Stopping LSB: Bring up/down networking... 2019-11-04T19:10:45.210 controller-1 systemd[1]: info Stopping RPC bind service... 2019-11-04T19:10:45.225 controller-1 systemd[1]: info Stopped RPC bind service. 2019-11-04T19:12:25.573 controller-1 systemd[1]: info systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN) 2019-11-04T19:12:25.573 controller-1 systemd[1]: info Detected architecture x86-64. 2019-11-04T19:12:25.580 controller-1 systemd[1]: info Set hostname to . 2019-11-04T19:12:25.660 controller-1 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked. 2019-11-04T19:12:26.015 controller-1 systemd[1]: info Started Create list of required static device nodes for the current kernel. 2019-11-04T19:12:26.025 controller-1 systemd[1]: notice systemd-readahead-collect.service: main process exited, code=exited, status=1/FAILURE 2019-11-04T19:12:26.033 controller-1 systemd[1]: info Started Collect Read-Ahead Data. 2019-11-04T19:12:26.048 controller-1 systemd[1]: info Started Read and set NIS domainname from /etc/sysconfig/network. 2019-11-04T19:12:26.067 controller-1 systemd[1]: info Starting Remount Root and Kernel File Systems... 2019-11-04T19:12:26.083 controller-1 systemd[1]: info Starting Setup Virtual Console... 2019-11-04T19:12:26.096 controller-1 systemd[1]: info Starting Load Kernel Modules... 
2019-11-04T19:12:26.102 controller-1 systemd-modules-load[6131]: err Failed to insert 'integrity': Operation not permitted 2019-11-04T19:12:26.102 controller-1 systemd-modules-load[6131]: err Failed to insert 'ima': Operation not permitted 2019-11-04T19:12:26.109 controller-1 systemd[1]: info Starting Create Static Device Nodes in /dev... 2019-11-04T19:12:26.147 controller-1 systemd[1]: info Started Remount Root and Kernel File Systems. 2019-11-04T19:12:26.160 controller-1 systemd[1]: info Started Setup Virtual Console. 2019-11-04T19:12:26.167 controller-1 systemd[1]: notice systemd-modules-load.service: main process exited, code=exited, status=1/FAILURE 2019-11-04T19:12:26.173 controller-1 systemd[1]: err Failed to start Load Kernel Modules. 2019-11-04T19:12:26.188 controller-1 systemd[1]: notice Unit systemd-modules-load.service entered failed state. 2019-11-04T19:12:26.188 controller-1 systemd[1]: warning systemd-modules-load.service failed. 2019-11-04T19:12:26.194 controller-1 systemd[1]: info Started Create Static Device Nodes in /dev. 2019-11-04T19:12:26.210 controller-1 systemd[1]: info Starting udev Kernel Device Manager... 2019-11-04T19:12:26.214 controller-1 systemd-udevd[6138]: info starting version 219 2019-11-04T19:12:26.219 controller-1 systemd-udevd[6138]: err specified group 'video' unknown 2019-11-04T19:12:26.219 controller-1 systemd-udevd[6138]: err specified group 'audio' unknown 2019-11-04T19:12:26.219 controller-1 systemd-udevd[6138]: err specified group 'lp' unknown 2019-11-04T19:12:26.222 controller-1 systemd[1]: info Starting Apply Kernel Variables... 2019-11-04T19:12:26.239 controller-1 systemd[1]: info Starting udev Coldplug all Devices... 2019-11-04T19:12:26.257 controller-1 systemd[1]: info Starting Configure read-only root support... 2019-11-04T19:12:26.298 controller-1 systemd[1]: info Started udev Kernel Device Manager. 2019-11-04T19:12:26.315 controller-1 systemd[1]: info Started Apply Kernel Variables. 2019-11-04T19:12:26.329 controller-1 systemd[1]: info Started Configure read-only root support. 2019-11-04T19:12:26.350 controller-1 systemd[1]: info Starting Load/Save Random Seed... 2019-11-04T19:12:26.371 controller-1 systemd-udevd[6317]: err could not read from '/sys/module/pcc_cpufreq/initstate': No such device 2019-11-04T19:12:26.393 controller-1 systemd[1]: info Started udev Coldplug all Devices. 2019-11-04T19:12:26.411 controller-1 systemd[1]: info Started Load/Save Random Seed. 2019-11-04T19:12:26.438 controller-1 systemd[1]: info Starting udev Wait for Complete Device Initialization... 2019-11-04T19:12:26.581 controller-1 systemd[1]: info Found device /dev/ttyS0. 2019-11-04T19:12:26.635 controller-1 systemd[1]: info Found device INTEL_SSDSC2BB480G6 2. 2019-11-04T19:12:26.676 controller-1 systemd[1]: info Created slice system-lvm2\x2dpvscan.slice. 2019-11-04T19:12:26.697 controller-1 systemd[1]: info Starting LVM2 PV scan on device 8:4... 2019-11-04T19:12:26.723 controller-1 systemd[1]: info Started LVM2 PV scan on device 8:4. 2019-11-04T19:12:26.870 controller-1 systemd[1]: info Started udev Wait for Complete Device Initialization. 2019-11-04T19:12:26.884 controller-1 systemd[1]: info Starting Activation of LVM2 logical volumes... 2019-11-04T19:12:26.955 controller-1 systemd[1]: info Found device /dev/mapper/cgts--vg-log--lv. 2019-11-04T19:12:26.962 controller-1 systemd[1]: info Found device /dev/cgts-vg/docker-lv. 
2019-11-04T19:12:26.967 controller-1 lvm[10393]: info 12 logical volume(s) in volume group "cgts-vg" now active 2019-11-04T19:12:26.980 controller-1 systemd[1]: info Started Activation of LVM2 logical volumes. 2019-11-04T19:12:26.988 controller-1 systemd[1]: info Found device /dev/cgts-vg/kubelet-lv. 2019-11-04T19:12:26.994 controller-1 systemd[1]: info Found device /dev/cgts-vg/scratch-lv. 2019-11-04T19:12:27.001 controller-1 systemd[1]: info Found device /dev/cgts-vg/backup-lv. 2019-11-04T19:12:27.008 controller-1 systemd[1]: info Found device /dev/cgts-vg/ceph-mon-lv. 2019-11-04T19:12:27.015 controller-1 systemd[1]: info Reached target Local Encrypted Volumes. 2019-11-04T19:12:27.029 controller-1 systemd[1]: info Starting Activation of LVM2 logical volumes... 2019-11-04T19:12:27.039 controller-1 lvm[10537]: info 12 logical volume(s) in volume group "cgts-vg" now active 2019-11-04T19:12:27.048 controller-1 systemd[1]: info Started Activation of LVM2 logical volumes. 2019-11-04T19:12:27.067 controller-1 systemd[1]: info Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling... 2019-11-04T19:12:27.075 controller-1 lvm[10539]: info 12 logical volume(s) in volume group "cgts-vg" monitored 2019-11-04T19:12:27.083 controller-1 systemd[1]: info Started Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling. 2019-11-04T19:12:27.093 controller-1 systemd[1]: info Reached target Local File Systems (Pre). 2019-11-04T19:12:27.107 controller-1 systemd[1]: info Starting File System Check on /dev/disk/by-uuid/7b8cfef2-c0cb-4cc9-94f8-a831edf06629... 2019-11-04T19:12:27.123 controller-1 systemd-fsck[10541]: info /dev/sda2: clean, 351/128016 files, 144298/512000 blocks 2019-11-04T19:12:27.129 controller-1 systemd[1]: info Mounting /opt/backups... 2019-11-04T19:12:27.149 controller-1 systemd[1]: info Starting File System Check on /dev/mapper/cgts--vg-log--lv... 2019-11-04T19:12:27.153 controller-1 systemd-fsck[10550]: info /dev/mapper/cgts--vg-log--lv: clean, 337/512064 files, 152537/2048000 blocks 2019-11-04T19:12:27.164 controller-1 systemd[1]: info Mounting /var/lib/ceph/mon... 2019-11-04T19:12:27.183 controller-1 systemd[1]: info Mounting /scratch... 2019-11-04T19:12:27.187 controller-1 systemd[1]: notice var-lib-docker.mount: Directory /var/lib/docker to mount over is not empty, mounting anyway. 2019-11-04T19:12:27.196 controller-1 systemd[1]: info Mounting /var/lib/docker... 2019-11-04T19:12:27.213 controller-1 systemd[1]: info Mounting /var/lib/kubelet... 2019-11-04T19:12:27.226 controller-1 systemd[1]: info Mounted /scratch. 2019-11-04T19:12:27.231 controller-1 systemd[1]: info Mounted /var/lib/ceph/mon. 2019-11-04T19:12:27.236 controller-1 systemd[1]: info Mounted /var/lib/kubelet. 2019-11-04T19:12:27.241 controller-1 systemd[1]: info Mounted /opt/backups. 2019-11-04T19:12:27.253 controller-1 systemd[1]: info Started File System Check on /dev/disk/by-uuid/7b8cfef2-c0cb-4cc9-94f8-a831edf06629. 2019-11-04T19:12:27.271 controller-1 systemd[1]: info Started File System Check on /dev/mapper/cgts--vg-log--lv. 2019-11-04T19:12:27.279 controller-1 systemd[1]: info Mounted /var/lib/docker. 2019-11-04T19:12:27.292 controller-1 systemd[1]: info Mounting /var/log... 2019-11-04T19:12:27.303 controller-1 systemd[1]: info Mounting /boot... 2019-11-04T19:12:27.308 controller-1 systemd[1]: info Mounted /boot. 2019-11-04T19:12:27.313 controller-1 systemd[1]: info Mounted /var/log. 
2019-11-04T19:12:27.327 controller-1 systemd[1]: info Starting Flush Journal to Persistent Storage... 2019-11-04T19:12:27.333 controller-1 systemd[1]: info Reached target Local File Systems. 2019-11-04T19:12:27.349 controller-1 systemd[1]: info Starting Import network configuration from initramfs... 2019-11-04T19:12:27.365 controller-1 systemd[1]: info Starting Preprocess NFS configuration... 2019-11-04T19:12:27.382 controller-1 systemd[1]: info Started Import network configuration from initramfs. 2019-11-04T19:12:27.397 controller-1 systemd[1]: info Started Preprocess NFS configuration. 2019-11-04T19:12:27.412 controller-1 systemd[1]: info Started Flush Journal to Persistent Storage. 2019-11-04T19:12:27.428 controller-1 systemd[1]: info Starting Create Volatile Files and Directories... 2019-11-04T19:12:27.454 controller-1 systemd[1]: info Started Create Volatile Files and Directories. 2019-11-04T19:12:27.467 controller-1 systemd[1]: info Starting Update UTMP about System Boot/Shutdown... 2019-11-04T19:12:27.482 controller-1 systemd[1]: info Mounting RPC Pipe File System... 2019-11-04T19:12:27.509 controller-1 systemd[1]: info Started Update UTMP about System Boot/Shutdown. 2019-11-04T19:12:27.517 controller-1 systemd[1]: info Mounted RPC Pipe File System. 2019-11-04T19:12:27.525 controller-1 systemd[1]: info Reached target rpc_pipefs.target. 2019-11-04T19:12:27.532 controller-1 systemd[1]: info Reached target System Initialization. 2019-11-04T19:12:27.539 controller-1 systemd[1]: info Started Daily Cleanup of Temporary Directories. 2019-11-04T19:12:27.547 controller-1 systemd[1]: info Listening on Open-iSCSI iscsid Socket. 2019-11-04T19:12:27.554 controller-1 systemd[1]: info Listening on Open-iSCSI iscsiuio Socket. 2019-11-04T19:12:27.561 controller-1 systemd[1]: info Listening on D-Bus System Message Bus Socket. 2019-11-04T19:12:27.569 controller-1 systemd[1]: info Starting Docker Socket for the API. 2019-11-04T19:12:27.574 controller-1 systemd[1]: info Listening on RPCbind Server Activation Socket. 2019-11-04T19:12:27.588 controller-1 systemd[1]: info Starting RPC bind service... 2019-11-04T19:12:27.599 controller-1 systemd[1]: info Listening on Docker Socket for the API. 2019-11-04T19:12:27.606 controller-1 systemd[1]: info Reached target Sockets. 2019-11-04T19:12:27.611 controller-1 systemd[1]: info Reached target Basic System. 2019-11-04T19:12:27.625 controller-1 systemd[1]: info Starting Load CPU microcode update... 2019-11-04T19:12:27.639 controller-1 systemd[1]: info Starting ACPI Event Daemon... 2019-11-04T19:12:27.651 controller-1 systemd[1]: info Starting Login Service... 2019-11-04T19:12:27.667 controller-1 systemd[1]: info Started fast remote file copy program daemon. 2019-11-04T19:12:27.000 controller-1 rsyncd[10667]: info rsyncd version 3.1.2 starting, listening on port 873 2019-11-04T19:12:27.684 controller-1 systemd[1]: info Starting Wind River Mellanox port-type configuration scripts... 2019-11-04T19:12:27.699 controller-1 systemd[1]: info Starting Resets System Activity Logs... 2019-11-04T19:12:27.716 controller-1 systemd[1]: info Started Self Monitoring and Reporting Technology (SMART) Daemon. 
2019-11-04T19:12:27.000 controller-1 acpid: info starting up with netlink and the input layer 2019-11-04T19:12:27.000 controller-1 acpid: info skipping incomplete file /etc/acpi/events/videoconf 2019-11-04T19:12:27.000 controller-1 acpid: info 1 rule loaded 2019-11-04T19:12:27.000 controller-1 acpid: info waiting for events: event logging is off 2019-11-04T19:12:27.734 controller-1 systemd[1]: info Starting GSSAPI Proxy Daemon... 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info smartd 6.5 2016-05-07 r4318 [x86_64-linux-3.10.0-957.21.3.el7.2.tis.x86_64] (local build) 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Opened configuration file /etc/smartmontools/smartd.conf 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Configuration file /etc/smartmontools/smartd.conf was parsed, found DEVICESCAN, scanning devices 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sda, type changed from 'scsi' to 'sat' 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sda [SAT], opened 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sda [SAT], INTEL SSDSC2BB480G6, S/N:PHWA613500H7480FGN, WWN:5-5cd2e4-04c72cd75, FW:G2010140, 480 GB 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sda [SAT], found in smartd database: Intel 730 and DC S35x0/3610/3700 Series SSDs 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sda [SAT], can't monitor Offline_Uncorrectable count - no Attribute 198 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sda [SAT], is SMART capable. Adding to "monitor" list. 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sdb, type changed from 'scsi' to 'sat' 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sdb [SAT], opened 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sdb [SAT], INTEL SSDSC2BB480G6, S/N:PHWA613500DJ480FGN, WWN:5-5cd2e4-04c72cd57, FW:G2010140, 480 GB 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sdb [SAT], found in smartd database: Intel 730 and DC S35x0/3610/3700 Series SSDs 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sdb [SAT], can't monitor Offline_Uncorrectable count - no Attribute 198 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sdb [SAT], is SMART capable. Adding to "monitor" list. 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sdc, type changed from 'scsi' to 'sat' 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sdc [SAT], opened 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sdc [SAT], INTEL SSDSC2KB480G7, S/N:PHYS820300A5480BGN, WWN:5-5cd2e4-14f826f37, FW:SCV10142, 480 GB 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sdc [SAT], not found in smartd database. 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sdc [SAT], can't monitor Offline_Uncorrectable count - no Attribute 198 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Device: /dev/sdc [SAT], is SMART capable. Adding to "monitor" list. 2019-11-04T19:12:27.000 controller-1 smartd[10698]: info Monitoring 3 ATA/SATA, 0 SCSI/SAS and 0 NVMe devices 2019-11-04T19:12:27.755 controller-1 systemd[1]: info Starting Authorization Manager... 
2019-11-04T19:12:27.767 controller-1 systemd[1]: info Started D-Bus System Message Bus. 2019-11-04T19:12:27.000 controller-1 polkitd[10748]: info Started polkitd version 0.112 2019-11-04T19:12:27.000 controller-1 dbus[10760]: notice [system] Successfully activated service 'org.freedesktop.systemd1' 2019-11-04T19:12:27.795 controller-1 systemd[1]: info Starting System Logger Daemon... 2019-11-04T19:12:27.809 controller-1 systemd[1]: info Starting Dump dmesg to /var/log/dmesg... 2019-11-04T19:12:27.824 controller-1 systemd[1]: info Starting LSB: Bring up/down networking... 2019-11-04T19:12:27.832 controller-1 systemd[1]: info Started RPC bind service. 2019-11-04T19:12:27.849 controller-1 systemd[1]: info Started Load CPU microcode update. 2019-11-04T19:12:27.856 controller-1 systemd[1]: info Started ACPI Event Daemon. 2019-11-04T19:12:27.867 controller-1 systemd[1]: info Started Resets System Activity Logs. 2019-11-04T19:12:27.874 controller-1 systemd[1]: info Started GSSAPI Proxy Daemon. 2019-11-04T19:12:27.889 controller-1 systemd[1]: info Started Dump dmesg to /var/log/dmesg. 2019-11-04T19:12:27.905 controller-1 systemd[1]: info Started Authorization Manager. 2019-11-04T19:12:27.911 controller-1 systemd[1]: info Started Login Service. 2019-11-04T19:12:27.916 controller-1 systemd[1]: info Reached target NFS client services. 2019-11-04T19:12:27.923 controller-1 systemd[1]: info Started System Logger Daemon. 2019-11-04T19:12:27.940 controller-1 kernel: info 4.666632] systemd[1]: systemd 219 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 -SECCOMP +BLKID +ELFUTILS +KMOD +IDN) 2019-11-04T19:12:27.940 controller-1 kernel: info 4.687536] systemd[1]: Detected architecture x86-64. 2019-11-04T19:12:27.940 controller-1 kernel: info 4.693180] systemd[1]: Running in initial RAM disk. 2019-11-04T19:12:27.940 controller-1 kernel: info 4.708665] systemd[1]: Set hostname to . 2019-11-04T19:12:27.940 controller-1 kernel: info 4.753877] systemd[1]: Reached target Swap. 2019-11-04T19:12:27.940 controller-1 kernel: info 4.763643] systemd[1]: Reached target Local File Systems. 2019-11-04T19:12:27.940 controller-1 kernel: info 4.776711] systemd[1]: Started Dispatch Password Requests to Console Directory Watch. 2019-11-04T19:12:27.940 controller-1 kernel: info 4.795643] systemd[1]: Reached target Paths. 2019-11-04T19:12:27.940 controller-1 kernel: info 4.805765] systemd[1]: Created slice Root Slice. 2019-11-04T19:12:27.940 controller-1 kernel: info 4.817658] systemd[1]: Listening on udev Kernel Socket. 2019-11-04T19:12:27.940 controller-1 kernel: info 4.830626] systemd[1]: Listening on udev Control Socket. 2019-11-04T19:12:27.940 controller-1 kernel: info 4.843634] systemd[1]: Reached target Timers. 2019-11-04T19:12:27.940 controller-1 kernel: info 4.853673] systemd[1]: Created slice System Slice. 2019-11-04T19:12:27.940 controller-1 kernel: info 4.865640] systemd[1]: Reached target Slices. 2019-11-04T19:12:27.940 controller-1 kernel: info 4.875719] systemd[1]: Listening on Journal Socket. 2019-11-04T19:12:27.940 controller-1 kernel: info 4.886655] systemd[1]: Reached target Sockets. 2019-11-04T19:12:27.940 controller-1 kernel: info 4.897456] systemd[1]: Starting Load Kernel Modules... 2019-11-04T19:12:27.941 controller-1 kernel: info 4.980165] systemd[1]: Starting Create list of required static device nodes for the current kernel... 2019-11-04T19:12:27.941 controller-1 kernel: info 5.026249] systemd[1]: Starting Journal Service... 
2019-11-04T19:12:27.941 controller-1 kernel: info 5.037326] systemd[1]: Starting dracut cmdline hook... 2019-11-04T19:12:27.941 controller-1 kernel: info 5.048014] systemd[1]: Started Create list of required static device nodes for the current kernel. 2019-11-04T19:12:27.941 controller-1 kernel: info 5.069339] systemd[1]: Starting Create Static Device Nodes in /dev... 2019-11-04T19:12:27.941 controller-1 kernel: info 5.083860] systemd[1]: Started Journal Service. 2019-11-04T19:12:27.945 controller-1 kernel: info 8.622781] systemd[1]: Inserted module 'ip_tables' 2019-11-04T19:12:27.945 controller-1 kernel: err 8.773388] systemd-readahead[6113]: Failed to create fanotify object: Function not implemented 2019-11-04T19:12:27.969 controller-1 network[10835]: info Bringing up loopback interface: ERROR : [/etc/sysconfig/network-scripts/ifup-ipv6] Global IPv6 forwarding is disabled in configuration, but not currently disabled in kernel 2019-11-04T19:12:27.000 controller-1 /etc/sysconfig/network-scripts/ifup-ipv6: err Global IPv6 forwarding is disabled in configuration, but not currently disabled in kernel 2019-11-04T19:12:27.970 controller-1 network[10835]: info ERROR : [/etc/sysconfig/network-scripts/ifup-ipv6] Please restart network with '/sbin/service network restart' 2019-11-04T19:12:27.000 controller-1 /etc/sysconfig/network-scripts/ifup-ipv6: err Please restart network with '/sbin/service network restart' 2019-11-04T19:12:28.014 controller-1 network[10835]: info [ OK ] 2019-11-04T19:12:28.101 controller-1 systemd[1]: info Started Wind River Mellanox port-type configuration scripts. 2019-11-04T19:12:32.746 controller-1 network[10835]: info Bringing up interface eno1: ERROR : [/etc/sysconfig/network-scripts/ifup-ipv6] Global IPv6 forwarding is disabled in configuration, but not currently disabled in kernel 2019-11-04T19:12:32.000 controller-1 /etc/sysconfig/network-scripts/ifup-ipv6: err Global IPv6 forwarding is disabled in configuration, but not currently disabled in kernel 2019-11-04T19:12:32.747 controller-1 network[10835]: info ERROR : [/etc/sysconfig/network-scripts/ifup-ipv6] Please restart network with '/sbin/service network restart' 2019-11-04T19:12:32.000 controller-1 /etc/sysconfig/network-scripts/ifup-ipv6: err Please restart network with '/sbin/service network restart' 2019-11-04T19:12:32.769 controller-1 network[10835]: info INFO : [ipv6_wait_tentative] Waiting for interface eno1 IPv6 address(es) to leave the 'tentative' state 2019-11-04T19:12:32.000 controller-1 ipv6_wait_tentative: info Waiting for interface eno1 IPv6 address(es) to leave the 'tentative' state 2019-11-04T19:12:33.775 controller-1 network[10835]: info INFO : [ipv6_wait_tentative] Waiting for interface eno1 IPv6 address(es) to leave the 'tentative' state 2019-11-04T19:12:33.000 controller-1 ipv6_wait_tentative: info Waiting for interface eno1 IPv6 address(es) to leave the 'tentative' state 2019-11-04T19:12:34.821 controller-1 network[10835]: info [ OK ] 2019-11-04T19:12:39.166 controller-1 network[10835]: info Bringing up interface pxeboot0: ERROR : [/etc/sysconfig/network-scripts/ifup-ipv6] Global IPv6 forwarding is disabled in configuration, but not currently disabled in kernel 2019-11-04T19:12:39.000 controller-1 /etc/sysconfig/network-scripts/ifup-ipv6: err Global IPv6 forwarding is disabled in configuration, but not currently disabled in kernel 2019-11-04T19:12:39.167 controller-1 network[10835]: info ERROR : [/etc/sysconfig/network-scripts/ifup-ipv6] Please restart network with '/sbin/service network 
restart' 2019-11-04T19:12:39.000 controller-1 /etc/sysconfig/network-scripts/ifup-ipv6: err Please restart network with '/sbin/service network restart' 2019-11-04T19:12:49.222 controller-1 network[10835]: info [ OK ] 2019-11-04T19:12:49.326 controller-1 network[10835]: info Bringing up interface vlan108: ERROR : [/etc/sysconfig/network-scripts/ifup-ipv6] Global IPv6 forwarding is disabled in configuration, but not currently disabled in kernel 2019-11-04T19:12:49.000 controller-1 /etc/sysconfig/network-scripts/ifup-ipv6: err Global IPv6 forwarding is disabled in configuration, but not currently disabled in kernel 2019-11-04T19:12:49.328 controller-1 network[10835]: info ERROR : [/etc/sysconfig/network-scripts/ifup-ipv6] Please restart network with '/sbin/service network restart' 2019-11-04T19:12:49.000 controller-1 /etc/sysconfig/network-scripts/ifup-ipv6: err Please restart network with '/sbin/service network restart' 2019-11-04T19:12:49.340 controller-1 network[10835]: info INFO : [ipv6_wait_tentative] Waiting for interface vlan108 IPv6 address(es) to leave the 'tentative' state 2019-11-04T19:12:49.000 controller-1 ipv6_wait_tentative: info Waiting for interface vlan108 IPv6 address(es) to leave the 'tentative' state 2019-11-04T19:12:50.346 controller-1 network[10835]: info INFO : [ipv6_wait_tentative] Waiting for interface vlan108 IPv6 address(es) to leave the 'tentative' state 2019-11-04T19:12:50.000 controller-1 ipv6_wait_tentative: info Waiting for interface vlan108 IPv6 address(es) to leave the 'tentative' state 2019-11-04T19:12:51.396 controller-1 network[10835]: info [ OK ] 2019-11-04T19:12:51.475 controller-1 network[10835]: info Bringing up interface vlan109: ERROR : [/etc/sysconfig/network-scripts/ifup-ipv6] Global IPv6 forwarding is disabled in configuration, but not currently disabled in kernel 2019-11-04T19:12:51.000 controller-1 /etc/sysconfig/network-scripts/ifup-ipv6: err Global IPv6 forwarding is disabled in configuration, but not currently disabled in kernel 2019-11-04T19:12:51.476 controller-1 network[10835]: info ERROR : [/etc/sysconfig/network-scripts/ifup-ipv6] Please restart network with '/sbin/service network restart' 2019-11-04T19:12:51.000 controller-1 /etc/sysconfig/network-scripts/ifup-ipv6: err Please restart network with '/sbin/service network restart' 2019-11-04T19:12:51.488 controller-1 network[10835]: info INFO : [ipv6_wait_tentative] Waiting for interface vlan109 IPv6 address(es) to leave the 'tentative' state 2019-11-04T19:12:51.000 controller-1 ipv6_wait_tentative: info Waiting for interface vlan109 IPv6 address(es) to leave the 'tentative' state 2019-11-04T19:12:52.494 controller-1 network[10835]: info INFO : [ipv6_wait_tentative] Waiting for interface vlan109 IPv6 address(es) to leave the 'tentative' state 2019-11-04T19:12:52.000 controller-1 ipv6_wait_tentative: info Waiting for interface vlan109 IPv6 address(es) to leave the 'tentative' state 2019-11-04T19:12:53.538 controller-1 network[10835]: info [ OK ] 2019-11-04T19:12:53.571 controller-1 systemd[1]: info Started LSB: Bring up/down networking. 2019-11-04T19:12:53.578 controller-1 systemd[1]: info Reached target Network. 2019-11-04T19:12:53.589 controller-1 systemd[1]: info Starting containerd container runtime... 2019-11-04T19:12:53.607 controller-1 systemd[1]: info Starting Dynamic System Tuning Daemon... 2019-11-04T19:12:53.623 controller-1 systemd[1]: info Starting StarlingX Filesystem Common... 
2019-11-04T19:12:53.640 controller-1 nfscommon[12149]: info creating NFS state directory: done 2019-11-04T19:12:53.640 controller-1 systemd[1]: info Starting Open-iSCSI... 2019-11-04T19:12:53.000 controller-1 iscsid: warning iSCSI logger with pid=12162 started! 2019-11-04T19:12:53.000 controller-1 rpc.statd[12161]: notice Version 1.3.0 starting 2019-11-04T19:12:53.000 controller-1 sm-notify[12163]: notice Version 1.3.0 starting 2019-11-04T19:12:53.659 controller-1 systemd[1]: info Started OpenSSH server daemon. 2019-11-04T19:12:53.664 controller-1 nfscommon[12149]: info starting statd: done 2019-11-04T19:12:53.666 controller-1 nfscommon[12149]: info mount: rpc_pipefs is already mounted or /var/lib/nfs/rpc_pipefs busy 2019-11-04T19:12:53.673 controller-1 nfscommon[12149]: info starting idmapd: done 2019-11-04T19:12:53.674 controller-1 systemd[1]: info Starting LLDP daemon... 2019-11-04T19:12:53.679 controller-1 systemd[1]: info Reached target Network is Online. 2019-11-04T19:12:53.692 controller-1 systemd[1]: info Starting InfluxDB open-source, distributed, time series database... 2019-11-04T19:12:53.713 controller-1 systemd[1]: info Starting Notify NFS peers of a restart... 2019-11-04T19:12:53.000 controller-1 sm-notify[12189]: notice Version 1.3.0 starting 2019-11-04T19:12:53.000 controller-1 sm-notify[12189]: notice Already notifying clients; Exiting! 2019-11-04T19:12:53.718 controller-1 sshd[12166]: info Starting sshd: [ OK ] 2019-11-04T19:12:53.728 controller-1 systemd[1]: info Starting Set time via NTP... 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: notice ntpd 4.2.6p5@1.2349-o Mon Oct 21 00:21:18 UTC 2019 (1) 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: notice proto: precision = 0.029 usec 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: info 0.0.0.0 c01d 0d kern kernel time sync enabled 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: debug ntp_io: estimated max descriptors: 1024, initial socket boundary: 16 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: info Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: info Listen and drop on 1 v6wildcard :: UDP 123 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: info Listen normally on 2 lo 127.0.0.1 UDP 123 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: info Listen normally on 3 pxeboot0 192.168.202.4 UDP 123 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: info Listen normally on 4 lo ::1 UDP 123 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: info Listen normally on 5 eno1 2620:10a:a001:a103::233 UDP 123 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: info Listen normally on 6 eno1 fe80::21e:67ff:fefe:f7bb UDP 123 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: info Listen normally on 7 vlan109 fe80::3efd:feff:fea0:1888 UDP 123 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: info Listen normally on 8 vlan108 fe80::3efd:feff:fea0:1888 UDP 123 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: info Listen normally on 9 pxeboot0 fe80::3efd:feff:fea0:1888 UDP 123 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: info Listen normally on 10 vlan109 fd00:205::4 UDP 123 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: info Listen normally on 11 vlan108 fd00:204::4 UDP 123 2019-11-04T19:12:53.000 controller-1 ntpd[12194]: info Listening on routing socket on fd #28 for interface updates 2019-11-04T19:12:53.740 controller-1 systemd[1]: info Starting TIS Patching... 2019-11-04T19:12:53.757 controller-1 systemd[1]: info Started memcached daemon. 
2019-11-04T19:12:53.772 controller-1 systemd[1]: info Started containerd container runtime. 2019-11-04T19:12:53.779 controller-1 systemd[1]: info Started StarlingX Filesystem Common. 2019-11-04T19:12:53.786 controller-1 systemd[1]: info Started Open-iSCSI. 2019-11-04T19:12:53.799 controller-1 systemd[1]: info Started Notify NFS peers of a restart. 2019-11-04T19:12:53.808 controller-1 systemd[1]: info Started InfluxDB open-source, distributed, time series database. 2019-11-04T19:12:53.819 controller-1 systemd[1]: info Started Dynamic System Tuning Daemon. 2019-11-04T19:12:53.835 controller-1 systemd[1]: info Starting Collectd statistics daemon and extension services... 2019-11-04T19:12:53.850 controller-1 systemd[1]: info Starting Logout off all iSCSI sessions on shutdown... 2019-11-04T19:12:53.000 controller-1 lldpd[12281]: info /etc/localtime copied to chroot 2019-11-04T19:12:53.000 controller-1 lldpd[12281]: info protocol LLDP enabled 2019-11-04T19:12:53.000 controller-1 lldpd[12281]: info protocol CDPv1 disabled 2019-11-04T19:12:53.000 controller-1 lldpd[12281]: info protocol CDPv2 disabled 2019-11-04T19:12:53.000 controller-1 lldpd[12281]: info protocol SONMP disabled 2019-11-04T19:12:53.000 controller-1 lldpd[12281]: info protocol EDP disabled 2019-11-04T19:12:53.000 controller-1 lldpd[12281]: info protocol FDP disabled 2019-11-04T19:12:53.000 controller-1 lldpd[12281]: info libevent 2.0.21-stable initialized with epoll method 2019-11-04T19:12:53.863 controller-1 collectd[12276]: info plugin_load: plugin "network" successfully loaded. 2019-11-04T19:12:53.864 controller-1 collectd[12276]: info plugin_load: plugin "python" successfully loaded. 2019-11-04T19:12:53.865 controller-1 systemd[1]: info Starting Activation of LVM2 logical volumes... 2019-11-04T19:12:53.879 controller-1 systemd[1]: info Starting StarlingX Filesystem Server... 2019-11-04T19:12:53.891 controller-1 nfsserver[12292]: info exportfs: internal: no supported addresses in nfs_client 2019-11-04T19:12:53.891 controller-1 nfsserver[12292]: info exportfs: fd00:204::3:/etc/platform: No such file or directory 2019-11-04T19:12:53.898 controller-1 systemd[1]: info Starting StarlingX Cloud Filesystem Auto-mounter... 2019-11-04T19:12:53.914 controller-1 systemd[1]: info Starting Docker Application Container Engine... 2019-11-04T19:12:53.921 controller-1 systemd[1]: info Started LLDP daemon. 2019-11-04T19:12:53.935 controller-1 systemd[1]: info Started Logout off all iSCSI sessions on shutdown. 
2019-11-04T19:12:53.000 controller-1 lldpd[12281]: warning error while receiving frame on eno2: Network is down 2019-11-04T19:12:53.000 controller-1 lldpd[12281]: warning error while receiving frame on ens801f2: Network is down 2019-11-04T19:12:53.000 controller-1 lldpd[12281]: warning error while receiving frame on ens785f0: Network is down 2019-11-04T19:12:53.000 controller-1 lldpd[12281]: warning error while receiving frame on ens801f3: Network is down 2019-11-04T19:12:53.000 controller-1 lldpd[12281]: warning error while receiving frame on ens785f1: Network is down 2019-11-04T19:12:53.000 controller-1 lldpcli[12280]: info system name set to new value controller-1:yow-cgcs-wildcat-35-60 2019-11-04T19:12:53.000 controller-1 lldpcli[12280]: info transmit delay set to new value 2019-11-04T19:12:53.000 controller-1 lldpcli[12280]: info transmit hold set to new value 4 2019-11-04T19:12:53.000 controller-1 lldpcli[12280]: info iface-pattern set to new value *,!br*,!ovs*,!tap*,!cali*,!tunl*,!docker* 2019-11-04T19:12:53.000 controller-1 lldpcli[12280]: info lldpd should resume operations 2019-11-04T19:12:53.000 controller-1 nfsdcltrack[12367]: err Failed to init database: -13 2019-11-04T19:12:53.955 controller-1 nfsserver[12292]: info starting 8 nfsd kernel threads: done 2019-11-04T19:12:53.956 controller-1 systemd[1]: info Started StarlingX Cloud Filesystem Auto-mounter. 2019-11-04T19:12:53.980 controller-1 lvm[12369]: info 12 logical volume(s) in volume group "cgts-vg" now active 2019-11-04T19:12:53.985 controller-1 systemd[1]: info Starting Titanium Cloud Filesystem Initialization... 2019-11-04T19:12:53.992 controller-1 nfsserver[12292]: info starting mountd: done 2019-11-04T19:12:53.000 controller-1 rpc.mountd[12408]: notice Version 1.3.0 starting 2019-11-04T19:12:54.015 controller-1 systemd[1]: info Started StarlingX Filesystem Server. 2019-11-04T19:12:54.030 controller-1 systemd[1]: info Started Activation of LVM2 logical volumes. 2019-11-04T19:12:54.048 controller-1 systemd[1]: info Started Titanium Cloud Filesystem Initialization. 2019-11-04T19:12:54.056 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.056073732Z" level=info msg="starting containerd" revision=bb71b10fd8f58240ca47fbb579b9d1028eea7c84 version=1.2.5 2019-11-04T19:12:54.056 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.056418785Z" level=info msg="loading plugin "io.containerd.content.v1.content"..." type=io.containerd.content.v1 2019-11-04T19:12:54.056 controller-1 systemd[1]: info Reached target Remote File Systems (Pre). 2019-11-04T19:12:54.057 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.057105096Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.btrfs"..." type=io.containerd.snapshotter.v1 2019-11-04T19:12:54.057 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.057251881Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.btrfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" 2019-11-04T19:12:54.057 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.057269412Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.aufs"..." 
type=io.containerd.snapshotter.v1 2019-11-04T19:12:54.058 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.058378538Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.aufs" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found.\n": exit status 1" 2019-11-04T19:12:54.058 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.058393993Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.native"..." type=io.containerd.snapshotter.v1 2019-11-04T19:12:54.058 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.058791789Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.overlayfs"..." type=io.containerd.snapshotter.v1 2019-11-04T19:12:54.059 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.059237690Z" level=info msg="loading plugin "io.containerd.snapshotter.v1.zfs"..." type=io.containerd.snapshotter.v1 2019-11-04T19:12:54.059 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.059408673Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter" 2019-11-04T19:12:54.059 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.059420958Z" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1 2019-11-04T19:12:54.059 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.059436514Z" level=warning msg="could not use snapshotter btrfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs must be a btrfs filesystem to be used with the btrfs snapshotter" 2019-11-04T19:12:54.059 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.059442497Z" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found.\n": exit status 1" 2019-11-04T19:12:54.059 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.059449895Z" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter" 2019-11-04T19:12:54.061 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.061446109Z" level=info msg="loading plugin "io.containerd.differ.v1.walking"..." type=io.containerd.differ.v1 2019-11-04T19:12:54.061 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.061466112Z" level=info msg="loading plugin "io.containerd.gc.v1.scheduler"..." type=io.containerd.gc.v1 2019-11-04T19:12:54.061 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.061501067Z" level=info msg="loading plugin "io.containerd.service.v1.containers-service"..." type=io.containerd.service.v1 2019-11-04T19:12:54.061 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.061519412Z" level=info msg="loading plugin "io.containerd.service.v1.content-service"..." type=io.containerd.service.v1 2019-11-04T19:12:54.061 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.061529099Z" level=info msg="loading plugin "io.containerd.service.v1.diff-service"..." type=io.containerd.service.v1 2019-11-04T19:12:54.061 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.061542951Z" level=info msg="loading plugin "io.containerd.service.v1.images-service"..." 
type=io.containerd.service.v1 2019-11-04T19:12:54.061 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.061556415Z" level=info msg="loading plugin "io.containerd.service.v1.leases-service"..." type=io.containerd.service.v1 2019-11-04T19:12:54.061 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.061571926Z" level=info msg="loading plugin "io.containerd.service.v1.namespaces-service"..." type=io.containerd.service.v1 2019-11-04T19:12:54.061 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.061585614Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1 2019-11-04T19:12:54.061 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.061596459Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1 2019-11-04T19:12:54.061 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.061664913Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2 2019-11-04T19:12:54.061 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.061727283Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062120714Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062163838Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062202214Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062218362Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062231109Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062239207Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062246390Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062254502Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062262240Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062270230Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062279790Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." 
type=io.containerd.internal.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062473263Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062491828Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062501410Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062510962Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062664902Z" level=info msg=serving... address="/run/containerd/containerd.sock" 2019-11-04T19:12:54.062 controller-1 containerd[12214]: info time="2019-11-04T19:12:54.062677858Z" level=info msg="containerd successfully booted in 0.011903s" 2019-11-04T19:12:54.063 controller-1 systemd[1]: info Reached target Remote File Systems. 2019-11-04T19:12:54.076 controller-1 systemd[1]: info Starting Permit User Sessions... 2019-11-04T19:12:54.092 controller-1 systemd[1]: info Starting Crash recovery kernel arming... 2019-11-04T19:12:54.120 controller-1 systemd[1]: info Started Permit User Sessions. 2019-11-04T19:12:54.228 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.228038961Z" level=info msg="parsed scheme: \"unix\"" module=grpc 2019-11-04T19:12:54.228 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.228134988Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc 2019-11-04T19:12:54.228 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.228212216Z" level=info msg="parsed scheme: \"unix\"" module=grpc 2019-11-04T19:12:54.228 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.228221926Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc 2019-11-04T19:12:54.228 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.228714671Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 }]" module=grpc 2019-11-04T19:12:54.228 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.228762082Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc 2019-11-04T19:12:54.228 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.228715958Z" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 }]" module=grpc 2019-11-04T19:12:54.228 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.228827254Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420b6c190, CONNECTING" module=grpc 2019-11-04T19:12:54.228 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.228830814Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc 2019-11-04T19:12:54.228 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.228866847Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4201be190, CONNECTING" module=grpc 2019-11-04T19:12:54.229 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.228989538Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420b6c190, READY" module=grpc 2019-11-04T19:12:54.229 
controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.228994588Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc4201be190, READY" module=grpc 2019-11-04T19:12:54.300 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.300375946Z" level=info msg="[graphdriver] using prior storage driver: overlay2" 2019-11-04T19:12:54.000 controller-1 ntpd[12194]: info 0.0.0.0 c016 06 restart 2019-11-04T19:12:54.000 controller-1 ntpd[12194]: info 0.0.0.0 c012 02 freq_set kernel 0.000 PPM 2019-11-04T19:12:54.000 controller-1 ntpd[12194]: info 0.0.0.0 c011 01 freq_not_set 2019-11-04T19:12:54.427 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.427682533Z" level=info msg="Graph migration to content-addressability took 0.00 seconds" 2019-11-04T19:12:54.427 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.427877093Z" level=warning msg="Your kernel does not support kernel memory limit" 2019-11-04T19:12:54.427 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.427897559Z" level=warning msg="Your kernel does not support cgroup rt period" 2019-11-04T19:12:54.427 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.427904059Z" level=warning msg="Your kernel does not support cgroup rt runtime" 2019-11-04T19:12:54.428 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.428234623Z" level=info msg="Loading containers: start." 2019-11-04T19:12:54.645 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.645132627Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" 2019-11-04T19:12:54.000 controller-1 iscsid: err iSCSI daemon with pid=12164 started! 2019-11-04T19:12:54.691 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.691021439Z" level=info msg="Loading containers: done." 2019-11-04T19:12:54.787 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.787256384Z" level=info msg="Docker daemon" commit=481bc77 graphdriver(s)=overlay2 version=18.09.6 2019-11-04T19:12:54.787 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.787400172Z" level=info msg="Daemon has completed initialization" 2019-11-04T19:12:54.796 controller-1 dockerd[12332]: info time="2019-11-04T19:12:54.796579490Z" level=info msg="API listen on /var/run/docker.sock" 2019-11-04T19:12:54.808 controller-1 systemd[1]: info Started Docker Application Container Engine. 2019-11-04T19:12:55.276 controller-1 collectd[12276]: info plugin_load: plugin "threshold" successfully loaded. 2019-11-04T19:12:55.276 controller-1 collectd[12276]: info plugin_load: plugin "df" successfully loaded. 2019-11-04T19:12:55.278 controller-1 collectd[12276]: info platform cpu usage plugin debug=False, verbose=True 2019-11-04T19:12:55.280 controller-1 collectd[12276]: info platform memory usage: debug=False, verbose=True 2019-11-04T19:12:55.282 controller-1 collectd[12276]: info interface plugin configured by config file [http://localhost:2122/mtce/lmon] 2019-11-04T19:12:55.282 controller-1 collectd[12276]: info Systemd detected, trying to signal readyness. 2019-11-04T19:12:55.282 controller-1 collectd[12276]: info remote logging server initialization complete 2019-11-04T19:12:55.282 controller-1 collectd[12276]: info interface plugin initialization complete 2019-11-04T19:12:55.285 controller-1 systemd[1]: info Started Collectd statistics daemon and extension services. 
2019-11-04T19:12:55.291 controller-1 collectd[12276]: info ptp plugin Timestamping Mode: hardware 2019-11-04T19:12:55.291 controller-1 collectd[12276]: info ptp plugin interface eno1 supports timestamping modes: ['legacy', 'software', 'hardware'] 2019-11-04T19:12:55.291 controller-1 collectd[12276]: info ptp plugin interface ens801f1 supports timestamping modes: ['legacy', 'software', 'hardware'] 2019-11-04T19:12:55.291 controller-1 collectd[12276]: info ptp plugin interface ens801f0 supports timestamping modes: ['legacy', 'software', 'hardware'] 2019-11-04T19:12:55.392 controller-1 kdumpctl[12462]: info kexec: loaded kdump kernel 2019-11-04T19:12:55.392 controller-1 kdumpctl[12462]: info Starting kdump: [OK] 2019-11-04T19:12:55.405 controller-1 systemd[1]: info Started Crash recovery kernel arming. 2019-11-04T19:12:55.580 controller-1 collectd[12276]: info ptp plugin initialization complete 2019-11-04T19:12:55.580 controller-1 collectd[12276]: info platform memory usage: init function for controller-1 2019-11-04T19:12:55.580 controller-1 collectd[12276]: info platform memory usage: strict_memory_accounting: False 2019-11-04T19:12:55.580 controller-1 collectd[12276]: info platform memory usage: WORKER_BASE_RESERVED not found in file: /etc/platform/worker_reserved.conf 2019-11-04T19:12:55.581 controller-1 collectd[12276]: info platform memory usage: reserve_all: True, reserved_MiB: 0 2019-11-04T19:12:55.581 controller-1 collectd[12276]: info platform memory usage: initialization complete 2019-11-04T19:12:55.581 controller-1 collectd[12276]: info platform cpu usage plugin init function for controller-1 2019-11-04T19:12:55.582 controller-1 collectd[12276]: info platform cpu usage plugin found 36 cpus total; monitoring 36 cpus, cpu list: 0-35 2019-11-04T19:12:55.583 controller-1 collectd[12276]: info platform cpu usage plugin initialization complete 2019-11-04T19:12:55.583 controller-1 collectd[12276]: info alarm notifier mtce port 2101 2019-11-04T19:12:55.583 controller-1 collectd[12276]: info alarm notifier controller:controller-1 init function 2019-11-04T19:12:55.583 controller-1 collectd[12276]: info alarm notifier monitoring Platform CPU usage 2019-11-04T19:12:55.583 controller-1 collectd[12276]: info alarm notifier monitoring Platform Memory usage 2019-11-04T19:12:55.583 controller-1 collectd[12276]: info alarm notifier monitoring / usage 2019-11-04T19:12:55.583 controller-1 collectd[12276]: info alarm notifier monitoring /tmp usage 2019-11-04T19:12:55.583 controller-1 collectd[12276]: info alarm notifier monitoring /dev usage 2019-11-04T19:12:55.583 controller-1 collectd[12276]: info alarm notifier monitoring /dev/shm usage 2019-11-04T19:12:55.583 controller-1 collectd[12276]: info alarm notifier monitoring /var/run usage 2019-11-04T19:12:55.583 controller-1 collectd[12276]: info alarm notifier monitoring /var/log usage 2019-11-04T19:12:55.583 controller-1 collectd[12276]: info alarm notifier monitoring /var/lock usage 2019-11-04T19:12:55.583 controller-1 collectd[12276]: info alarm notifier monitoring /boot usage 2019-11-04T19:12:55.583 controller-1 collectd[12276]: info alarm notifier monitoring /scratch usage 2019-11-04T19:12:55.583 controller-1 collectd[12276]: info alarm notifier monitoring /opt/etcd usage 2019-11-04T19:12:55.584 controller-1 collectd[12276]: info alarm notifier monitoring /opt/platform usage 2019-11-04T19:12:55.584 controller-1 collectd[12276]: info alarm notifier monitoring /opt/extension usage 2019-11-04T19:12:55.584 controller-1 collectd[12276]: info alarm 
notifier monitoring /var/lib/rabbitmq usage 2019-11-04T19:12:55.584 controller-1 collectd[12276]: info alarm notifier monitoring /var/lib/postgresql usage 2019-11-04T19:12:55.584 controller-1 collectd[12276]: info alarm notifier monitoring /var/lib/ceph/mon usage 2019-11-04T19:12:55.584 controller-1 collectd[12276]: info alarm notifier monitoring /var/lib/docker usage 2019-11-04T19:12:55.584 controller-1 collectd[12276]: info alarm notifier monitoring /var/lib/docker-distribution usage 2019-11-04T19:12:55.584 controller-1 collectd[12276]: info alarm notifier monitoring /var/lib/kubelet usage 2019-11-04T19:12:55.584 controller-1 collectd[12276]: info alarm notifier monitoring /var/lib/nova/instances usage 2019-11-04T19:12:55.584 controller-1 collectd[12276]: info alarm notifier monitoring /opt/backups usage 2019-11-04T19:12:55.584 controller-1 collectd[12276]: info alarm notifier monitoring Example usage 2019-11-04T19:12:55.584 controller-1 collectd[12276]: info alarm notifier setting up influxdb:collectd database 2019-11-04T19:12:55.590 controller-1 collectd[12276]: info Initialization complete, entering read-loop. 2019-11-04T19:12:55.627 controller-1 collectd[12276]: info ptp plugin no startup alarms found 2019-11-04T19:12:55.000 controller-1 ntpd[12194]: info Listen normally on 12 docker0 172.17.0.1 UDP 123 2019-11-04T19:12:55.000 controller-1 ntpd[12194]: debug new interface(s) found: waking up resolver 2019-11-04T19:12:56.470 controller-1 collectd[12276]: info interface plugin found no startup alarms 2019-11-04T19:12:56.473 controller-1 collectd[12276]: info interface plugin http request exception ; [Errno 111] Connection refused 2019-11-04T19:12:57.072 controller-1 collectd[12276]: info alarm notifier initialization complete 2019-11-04T19:12:57.073 controller-1 collectd[12276]: info alarm notifier setting up influxdb:collectd database 2019-11-04T19:12:57.114 controller-1 collectd[12276]: info remote logging server 100.118:host=controller-1 alarm clear 2019-11-04T19:12:57.114 controller-1 collectd[12276]: info remote logging server is disabled 2019-11-04T19:12:57.119 controller-1 collectd[12276]: info ptp plugin PTP Service Disabled 2019-11-04T19:12:57.119 controller-1 collectd[12276]: info alarm notifier reading: 0.00 % usage - /dev 2019-11-04T19:12:57.119 controller-1 collectd[12276]: info alarm notifier reading: 0.35 % usage - /var/lib/ceph/mon 2019-11-04T19:12:57.119 controller-1 collectd[12276]: info alarm notifier reading: 4.42 % usage - /var/log 2019-11-04T19:12:57.120 controller-1 collectd[12276]: info alarm notifier reading: 24.59 % usage - /boot 2019-11-04T19:12:57.120 controller-1 collectd[12276]: info alarm notifier reading: 1.86 % usage - /scratch 2019-11-04T19:12:57.121 controller-1 collectd[12276]: info alarm notifier reading: 18.91 % usage - /var/lib/docker 2019-11-04T19:12:57.123 controller-1 collectd[12276]: info alarm notifier monitoring Platform Memory platform % usage 2019-11-04T19:12:57.123 controller-1 collectd[12276]: info platform memory usage: Usage: 0.2%; Reserved: 126778.5 MiB, Platform: 223.9 MiB (Base: 223.9, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:12:57.123 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.2%, Anon: 235.5 MiB, cgroup-rss: 228.0 MiB, Avail: 126543.0 MiB, Total: 126778.5 MiB 2019-11-04T19:12:57.123 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 0.19%, Anon: 118.2 MiB, Avail: 63438.0 MiB, Total: 63556.2 MiB 2019-11-04T19:12:57.123 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 
0.19%, Anon: 118.7 MiB, Avail: 63755.3 MiB, Total: 63874.0 MiB 2019-11-04T19:12:57.123 controller-1 collectd[12276]: info alarm notifier reading: 0.18 % usage - Platform Memory 2019-11-04T19:12:57.124 controller-1 collectd[12276]: info alarm notifier monitoring Platform Memory total % usage 2019-11-04T19:12:57.124 controller-1 collectd[12276]: info alarm notifier monitoring Platform Memory node0 % usage 2019-11-04T19:12:57.124 controller-1 collectd[12276]: info alarm notifier monitoring Platform Memory node1 % usage 2019-11-04T19:12:57.124 controller-1 collectd[12276]: info alarm notifier reading: 0.19 % usage - Platform Memory node1 2019-11-04T19:12:57.124 controller-1 collectd[12276]: info alarm notifier reading: 0.19 % usage - Platform Memory node0 2019-11-04T19:12:57.124 controller-1 collectd[12276]: info alarm notifier reading: 0.19 % usage - Platform Memory total 2019-11-04T19:12:57.126 controller-1 collectd[12276]: info alarm notifier influxdb:collectd database already exists 2019-11-04T19:12:57.130 controller-1 collectd[12276]: info alarm notifier influxdb:collectd retention policy already exists 2019-11-04T19:12:57.134 controller-1 collectd[12276]: info alarm notifier influxdb:collectd samples retention policy: {u'duration': u'168h0m0s', u'default': True, u'replicaN': 1, u'name': u'collectd samples'} 2019-11-04T19:12:57.134 controller-1 collectd[12276]: info alarm notifier influxdb:collectd is setup 2019-11-04T19:12:57.134 controller-1 collectd[12276]: info alarm notifier reading: 0.37 % usage - /var/lib/kubelet 2019-11-04T19:13:01.000 controller-1 ntpd[12194]: notice ntpd: time set +0.468985 s 2019-11-04T19:13:01.257 controller-1 systemd[1]: info Time has been changed 2019-11-04T19:13:01.257 controller-1 ntpd[12194]: info ntpd: time set +0.468985s 2019-11-04T19:13:01.270 controller-1 systemd[1]: info Started Set time via NTP. 2019-11-04T19:13:01.285 controller-1 systemd[1]: info Starting Network Time Service... 2019-11-04T19:13:01.000 controller-1 ntpd[13138]: notice ntpd 4.2.6p5@1.2349-o Mon Oct 21 00:21:18 UTC 2019 (1) 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: notice proto: precision = 0.030 usec 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info 0.0.0.0 c01d 0d kern kernel time sync enabled 2019-11-04T19:13:01.290 controller-1 systemd[1]: info Reached target System Time Synchronized. 
2019-11-04T19:13:01.000 controller-1 ntpd[13139]: debug ntp_io: estimated max descriptors: 1024, initial socket boundary: 16 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info Listen and drop on 1 v6wildcard :: UDP 123 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info Listen normally on 2 lo 127.0.0.1 UDP 123 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info Listen normally on 3 pxeboot0 192.168.202.4 UDP 123 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info Listen normally on 4 docker0 172.17.0.1 UDP 123 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info Listen normally on 5 lo ::1 UDP 123 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info Listen normally on 6 eno1 2620:10a:a001:a103::233 UDP 123 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info Listen normally on 7 eno1 fe80::21e:67ff:fefe:f7bb UDP 123 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info Listen normally on 8 vlan109 fe80::3efd:feff:fea0:1888 UDP 123 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info Listen normally on 9 vlan108 fe80::3efd:feff:fea0:1888 UDP 123 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info Listen normally on 10 pxeboot0 fe80::3efd:feff:fea0:1888 UDP 123 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info Listen normally on 11 vlan109 fd00:205::4 UDP 123 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info Listen normally on 12 vlan108 fd00:204::4 UDP 123 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info Listening on routing socket on fd #29 for interface updates 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info 0.0.0.0 c016 06 restart 2019-11-04T19:13:01.000 controller-1 ntpd[13139]: info 0.0.0.0 c012 02 freq_set kernel -7.511 PPM 2019-11-04T19:13:01.305 controller-1 systemd[1]: info Started Command Scheduler. 2019-11-04T19:13:01.310 controller-1 systemd[1]: info Started daily update of the root trust anchor for DNSSEC. 2019-11-04T19:13:01.320 controller-1 systemd[1]: info Reached target Timers. 2019-11-04T19:13:01.338 controller-1 systemd[1]: info Started Network Time Service. 
2019-11-04T19:13:05.277 controller-1 collectd[12276]: info alarm notifier reading: 0.00 % usage - /tmp 2019-11-04T19:13:05.277 controller-1 collectd[12276]: info alarm notifier reading: 0.00 % usage - /dev/shm 2019-11-04T19:13:05.278 controller-1 collectd[12276]: info alarm notifier reading: 0.10 % usage - /opt/backups 2019-11-04T19:13:05.278 controller-1 collectd[12276]: info alarm notifier reading: 37.34 % usage - / 2019-11-04T19:13:05.279 controller-1 collectd[12276]: info degrade notifier controller ip: fd00:204::2 2019-11-04T19:13:05.279 controller-1 collectd[12276]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-1","degrade":"clear","resource":""} 2019-11-04T19:13:05.279 controller-1 collectd[12276]: info degrade notifier ipv6 addressing (fd00:204::2) 2019-11-04T19:13:05.280 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 2.3% (avg per cpu); cpus: 36, Platform: 2.5% (Base: 2.5, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:13:05.281 controller-1 collectd[12276]: info alarm notifier reading: 2.29 % usage - Platform CPU 2019-11-04T19:13:05.283 controller-1 collectd[12276]: info platform memory usage: Usage: 0.1%; Reserved: 126780.8 MiB, Platform: 180.6 MiB (Base: 180.6, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:13:05.283 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.2%, Anon: 192.6 MiB, cgroup-rss: 184.7 MiB, Avail: 126588.2 MiB, Total: 126780.8 MiB 2019-11-04T19:13:05.283 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 0.19%, Anon: 119.1 MiB, Avail: 63442.2 MiB, Total: 63561.3 MiB 2019-11-04T19:13:05.283 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 0.12%, Anon: 73.5 MiB, Avail: 63794.8 MiB, Total: 63868.2 MiB 2019-11-04T19:13:14.090 controller-1 sw-patch[12195]: info Checking for software updates... 2019-11-04T19:13:14.099 controller-1 sw-patch[12195]: info Nothing to install. 2019-11-04T19:13:14.105 controller-1 sw-patch[12195]: info Nothing to remove. 2019-11-04T19:13:14.112 controller-1 systemd[1]: info Started TIS Patching. 2019-11-04T19:13:14.127 controller-1 systemd[1]: info Started Fault Management REST API Service. 2019-11-04T19:13:14.141 controller-1 systemd[1]: info Starting TIS Patching Controller... 2019-11-04T19:13:14.157 controller-1 fm-api[13204]: info OK 2019-11-04T19:13:14.158 controller-1 systemd[1]: info Starting Titanium Cloud Log Management... 2019-11-04T19:13:14.175 controller-1 systemd[1]: info Starting Titanium Cloud System Inventory Agent... 2019-11-04T19:13:14.190 controller-1 sysinv-agent[13228]: info Setting up config for sysinv-agent: Installing virtio_net driver: OK 2019-11-04T19:13:14.191 controller-1 sysinv-agent[13228]: info Starting sysinv-agent: OK 2019-11-04T19:13:14.192 controller-1 systemd[1]: info Started Titanium Cloud System Inventory Agent. 2019-11-04T19:13:14.204 controller-1 logmgmt[13225]: info Starting logmgmt...done. 2019-11-04T19:13:14.212 controller-1 systemd[1]: info Started controllerconfig service. 2019-11-04T19:13:14.219 controller-1 systemd[1]: info Started Titanium Cloud Log Management. 2019-11-04T19:13:14.221 controller-1 controller_config[13249]: info Configuring controller node... 2019-11-04T19:13:14.231 controller-1 systemd[1]: info Starting Titanium Cloud libvirt QEMU cleanup... 
2019-11-04T19:13:14.234 controller-1 controller_config[13249]: info Checking connectivity to controller-platform-nfs for up to 70 seconds over interface fd00:204::4 2019-11-04T19:13:14.241 controller-1 systemd[1]: info Starting General TIS config gate... 2019-11-04T19:13:14.268 controller-1 systemd[1]: info Started Titanium Cloud libvirt QEMU cleanup. 2019-11-04T19:13:14.428 controller-1 controller_config[13249]: info /etc/init.d/controller_config: Running puppet manifest apply 2019-11-04T19:13:14.448 controller-1 controller_config[13249]: info Applying puppet controller manifest... 2019-11-04T19:13:14.462 controller-1 systemd[1]: info Started TIS Patching Controller. 2019-11-04T19:13:14.480 controller-1 systemd[1]: info Starting TIS Patching Controller Daemon... 2019-11-04T19:13:14.484 controller-1 sw-patch-controller-daemon[13424]: info Starting sw-patch-controller-daemon...done. 2019-11-04T19:13:14.497 controller-1 systemd[1]: info Starting TIS Patching Agent... 2019-11-04T19:13:14.501 controller-1 sw-patch-agent[13428]: info Starting sw-patch-agent...done. 2019-11-04T19:13:14.522 controller-1 systemd[1]: info Started TIS Patching Controller Daemon. 2019-11-04T19:13:14.529 controller-1 systemd[1]: info Started TIS Patching Agent. 2019-11-04T19:13:15.280 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 3.8% (avg per cpu); cpus: 36, Platform: 3.6% (Base: 3.6, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:13:15.283 controller-1 collectd[12276]: info platform memory usage: Usage: 0.4%; Reserved: 126764.9 MiB, Platform: 522.3 MiB (Base: 522.3, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:13:15.283 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.4%, Anon: 534.3 MiB, cgroup-rss: 526.2 MiB, Avail: 126230.6 MiB, Total: 126764.9 MiB 2019-11-04T19:13:15.283 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 0.41%, Anon: 260.4 MiB, Avail: 63312.9 MiB, Total: 63573.3 MiB 2019-11-04T19:13:15.283 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 0.43%, Anon: 273.9 MiB, Avail: 63573.3 MiB, Total: 63847.2 MiB 2019-11-04T19:13:15.000 controller-1 lldpd[12281]: warning error while receiving frame on eno2: Network is down 2019-11-04T19:13:16.000 controller-1 lldpd[12281]: warning error while receiving frame on ens785f0: Network is down 2019-11-04T19:13:16.000 controller-1 lldpd[12281]: warning error while receiving frame on ens785f1: Network is down 2019-11-04T19:13:16.000 controller-1 lldpd[12281]: warning removal request for address of fe80::3efd:feff:fea0:188a%6, but no knowledge of it 2019-11-04T19:13:16.000 controller-1 lldpd[12281]: warning error while receiving frame on ens801f2: Network is down 2019-11-04T19:13:16.000 controller-1 lldpd[12281]: warning error while receiving frame on ens801f3: Network is down 2019-11-04T19:13:25.280 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.5% (avg per cpu); cpus: 36, Platform: 5.4% (Base: 5.4, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:13:25.282 controller-1 collectd[12276]: info platform memory usage: Usage: 0.4%; Reserved: 126756.8 MiB, Platform: 490.5 MiB (Base: 490.5, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:13:25.283 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.4%, Anon: 507.3 MiB, cgroup-rss: 494.4 MiB, Avail: 126249.6 MiB, Total: 126756.8 MiB 2019-11-04T19:13:25.283 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 0.41%, Anon: 262.1 MiB, Avail: 63310.8 MiB, Total: 63572.9 MiB 2019-11-04T19:13:25.283 controller-1 
collectd[12276]: info 4K numa memory usage: node1, Anon: 0.38%, Anon: 245.1 MiB, Avail: 63594.5 MiB, Total: 63839.6 MiB 2019-11-04T19:13:35.280 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 4.9% (avg per cpu); cpus: 36, Platform: 4.8% (Base: 4.8, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:13:35.283 controller-1 collectd[12276]: info platform memory usage: Usage: 0.4%; Reserved: 126741.9 MiB, Platform: 498.9 MiB (Base: 498.9, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:13:35.283 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.4%, Anon: 514.3 MiB, cgroup-rss: 502.9 MiB, Avail: 126227.6 MiB, Total: 126741.9 MiB 2019-11-04T19:13:35.283 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 0.41%, Anon: 262.0 MiB, Avail: 63307.1 MiB, Total: 63569.1 MiB 2019-11-04T19:13:35.283 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 0.40%, Anon: 252.4 MiB, Avail: 63577.3 MiB, Total: 63829.6 MiB 2019-11-04T19:13:45.280 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.5% (avg per cpu); cpus: 36, Platform: 5.5% (Base: 5.5, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:13:45.283 controller-1 collectd[12276]: info platform memory usage: Usage: 0.4%; Reserved: 126737.9 MiB, Platform: 513.2 MiB (Base: 513.2, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:13:45.283 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.4%, Anon: 526.5 MiB, cgroup-rss: 517.5 MiB, Avail: 126211.4 MiB, Total: 126737.9 MiB 2019-11-04T19:13:45.283 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 0.41%, Anon: 261.1 MiB, Avail: 63308.5 MiB, Total: 63569.6 MiB 2019-11-04T19:13:45.283 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 0.42%, Anon: 265.4 MiB, Avail: 63561.9 MiB, Total: 63827.3 MiB 2019-11-04T19:13:55.280 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.8% (avg per cpu); cpus: 36, Platform: 5.7% (Base: 5.7, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:13:55.283 controller-1 collectd[12276]: info platform memory usage: Usage: 0.4%; Reserved: 126736.8 MiB, Platform: 517.1 MiB (Base: 517.1, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:13:55.283 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.4%, Anon: 534.1 MiB, cgroup-rss: 521.0 MiB, Avail: 126202.6 MiB, Total: 126736.8 MiB 2019-11-04T19:13:55.283 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 0.42%, Anon: 264.8 MiB, Avail: 63305.7 MiB, Total: 63570.5 MiB 2019-11-04T19:13:55.283 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 0.42%, Anon: 269.4 MiB, Avail: 63556.1 MiB, Total: 63825.5 MiB 2019-11-04T19:14:05.280 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 4.5% (avg per cpu); cpus: 36, Platform: 4.4% (Base: 4.4, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:14:05.282 controller-1 collectd[12276]: info interface plugin http request exception ; [Errno 111] Connection refused 2019-11-04T19:14:05.284 controller-1 collectd[12276]: info platform memory usage: Usage: 0.4%; Reserved: 126735.5 MiB, Platform: 564.0 MiB (Base: 564.0, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:14:05.284 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.5%, Anon: 577.7 MiB, cgroup-rss: 567.7 MiB, Avail: 126157.8 MiB, Total: 126735.5 MiB 2019-11-04T19:14:05.284 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 0.43%, Anon: 274.9 MiB, Avail: 63296.1 MiB, Total: 63571.0 MiB 2019-11-04T19:14:05.284 controller-1 collectd[12276]: 
info 4K numa memory usage: node1, Anon: 0.47%, Anon: 302.8 MiB, Avail: 63521.1 MiB, Total: 63823.9 MiB 2019-11-04T19:14:10.654 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:10.720 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:15.280 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 3.1% (avg per cpu); cpus: 36, Platform: 3.0% (Base: 3.0, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:14:15.283 controller-1 collectd[12276]: info platform memory usage: Usage: 0.5%; Reserved: 126734.9 MiB, Platform: 574.5 MiB (Base: 574.5, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:14:15.283 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.5%, Anon: 587.7 MiB, cgroup-rss: 577.4 MiB, Avail: 126147.2 MiB, Total: 126734.9 MiB 2019-11-04T19:14:15.283 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 0.43%, Anon: 271.2 MiB, Avail: 63298.7 MiB, Total: 63569.9 MiB 2019-11-04T19:14:15.283 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 0.50%, Anon: 316.4 MiB, Avail: 63508.1 MiB, Total: 63824.5 MiB 2019-11-04T19:14:17.478 controller-1 systemd[1]: info Reloading System Logger Daemon. 2019-11-04T19:14:17.503 controller-1 systemd[1]: info Reloaded System Logger Daemon. 2019-11-04T19:14:24.947 controller-1 systemd[1]: info Got automount request for /proc/sys/fs/binfmt_misc, triggered by 82824 (sysctl) 2019-11-04T19:14:24.957 controller-1 systemd[1]: info Mounting Arbitrary Executable File Formats File System... 2019-11-04T19:14:24.967 controller-1 systemd[1]: info Mounted Arbitrary Executable File Formats File System. 2019-11-04T19:14:25.280 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 3.3% (avg per cpu); cpus: 36, Platform: 3.3% (Base: 3.3, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:14:25.283 controller-1 collectd[12276]: info platform memory usage: Usage: 0.5%; Reserved: 126697.9 MiB, Platform: 575.0 MiB (Base: 575.0, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:14:25.283 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.5%, Anon: 589.2 MiB, cgroup-rss: 579.1 MiB, Avail: 126108.7 MiB, Total: 126697.9 MiB 2019-11-04T19:14:25.283 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 0.43%, Anon: 271.8 MiB, Avail: 63263.2 MiB, Total: 63535.0 MiB 2019-11-04T19:14:25.283 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 0.50%, Anon: 317.4 MiB, Avail: 63506.5 MiB, Total: 63823.9 MiB 2019-11-04T19:14:27.512 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:27.559 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:27.605 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:27.653 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:27.709 controller-1 systemd[1]: info Stopped Set time via NTP. 2019-11-04T19:14:27.000 controller-1 ntpd[13139]: notice ntpd exiting on signal 15 2019-11-04T19:14:27.840 controller-1 systemd[1]: info Stopping Network Time Service... 2019-11-04T19:14:27.852 controller-1 systemd[1]: info Stopped Network Time Service. 2019-11-04T19:14:27.904 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:32.224 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:32.273 controller-1 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked. 2019-11-04T19:14:32.289 controller-1 systemd[1]: info Starting Name Service Cache Daemon... 
2019-11-04T19:14:32.000 controller-1 nscd: notice 84472 monitoring file `/etc/passwd` (1) 2019-11-04T19:14:32.000 controller-1 nscd: notice 84472 monitoring directory `/etc` (2) 2019-11-04T19:14:32.000 controller-1 nscd: notice 84472 monitoring file `/etc/group` (3) 2019-11-04T19:14:32.000 controller-1 nscd: notice 84472 monitoring directory `/etc` (2) 2019-11-04T19:14:32.000 controller-1 nscd: notice 84472 monitoring file `/etc/hosts` (4) 2019-11-04T19:14:32.000 controller-1 nscd: notice 84472 monitoring directory `/etc` (2) 2019-11-04T19:14:32.000 controller-1 nscd: notice 84472 monitoring file `/etc/resolv.conf` (5) 2019-11-04T19:14:32.000 controller-1 nscd: notice 84472 monitoring directory `/etc` (2) 2019-11-04T19:14:32.000 controller-1 nscd: notice 84472 monitoring file `/etc/services` (6) 2019-11-04T19:14:32.000 controller-1 nscd: notice 84472 monitoring directory `/etc` (2) 2019-11-04T19:14:32.000 controller-1 nscd: notice 84472 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory 2019-11-04T19:14:32.000 controller-1 nscd: notice 84472 stat failed for file `/etc/netgroup'; will try again later: No such file or directory 2019-11-04T19:14:32.314 controller-1 systemd[1]: info Started Name Service Cache Daemon. 2019-11-04T19:14:32.337 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:32.403 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:32.449 controller-1 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked. 2019-11-04T19:14:32.463 controller-1 systemd[1]: info Starting Naming services LDAP client daemon.... 2019-11-04T19:14:32.000 controller-1 nslcd[84559]: info version 0.8.13 starting 2019-11-04T19:14:32.490 controller-1 systemd[1]: info Started Naming services LDAP client daemon.. 2019-11-04T19:14:32.512 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:32.579 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:32.627 controller-1 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked. 2019-11-04T19:14:32.640 controller-1 systemd[1]: info Starting OpenLDAP Server Daemon... 2019-11-04T19:14:35.281 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 2.4% (avg per cpu); cpus: 36, Platform: 2.2% (Base: 2.2, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:14:35.284 controller-1 collectd[12276]: info platform memory usage: Usage: 0.5%; Reserved: 126697.1 MiB, Platform: 623.3 MiB (Base: 623.3, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:14:35.284 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.5%, Anon: 636.4 MiB, cgroup-rss: 627.5 MiB, Avail: 126060.6 MiB, Total: 126697.1 MiB 2019-11-04T19:14:35.284 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 0.43%, Anon: 273.2 MiB, Avail: 63262.4 MiB, Total: 63535.5 MiB 2019-11-04T19:14:35.284 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 0.57%, Anon: 363.3 MiB, Avail: 63459.4 MiB, Total: 63822.6 MiB 2019-11-04T19:14:37.000 controller-1 nslcd[84559]: info accepting connections 2019-11-04T19:14:37.509 controller-1 check-config.sh[84641]: info Checking configuration file failed: 2019-11-04T19:14:37.509 controller-1 check-config.sh[84641]: info 5dc0789d could not stat config file "/etc/openldap/slapd.conf": Permission denied (13) 2019-11-04T19:14:37.510 controller-1 check-config.sh[84641]: info slaptest: bad configuration file! 
2019-11-04T19:14:37.555 controller-1 openldap[84689]: info Starting SLAPD: ● nscd.service - Name Service Cache Daemon 2019-11-04T19:14:37.555 controller-1 openldap[84689]: info Loaded: loaded (/usr/lib/systemd/system/nscd.service; disabled; vendor preset: disabled) 2019-11-04T19:14:37.555 controller-1 openldap[84689]: info Active: active (running) since Mon 2019-11-04 19:14:32 UTC; 5s ago 2019-11-04T19:14:37.555 controller-1 openldap[84689]: info Main PID: 84472 (nscd) 2019-11-04T19:14:37.555 controller-1 openldap[84689]: info Tasks: 11 2019-11-04T19:14:37.555 controller-1 openldap[84689]: info Memory: 888.0K 2019-11-04T19:14:37.555 controller-1 openldap[84689]: info CGroup: /system.slice/nscd.service 2019-11-04T19:14:37.555 controller-1 openldap[84689]: info └─84472 /usr/sbin/nscd 2019-11-04T19:14:37.555 controller-1 openldap[84689]: info . 2019-11-04T19:14:37.555 controller-1 systemd[1]: info Started OpenLDAP Server Daemon. 2019-11-04T19:14:37.582 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:39.199 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:40.862 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:44.228 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:44.650 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:44.731 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:44.801 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:44.856 controller-1 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked. 2019-11-04T19:14:44.880 controller-1 systemd[1]: info Starting Set time via NTP... 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: notice ntpd 4.2.6p5@1.2349-o Mon Oct 21 00:21:18 UTC 2019 (1) 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: notice proto: precision = 0.036 usec 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info 0.0.0.0 c01d 0d kern kernel time sync enabled 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: debug ntp_io: estimated max descriptors: 1024, initial socket boundary: 16 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info Listen and drop on 1 v6wildcard :: UDP 123 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info Listen normally on 2 lo 127.0.0.1 UDP 123 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info Listen normally on 3 pxeboot0 192.168.202.4 UDP 123 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info Listen normally on 4 docker0 172.17.0.1 UDP 123 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info Listen normally on 5 lo ::1 UDP 123 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info Listen normally on 6 eno1 2620:10a:a001:a103::233 UDP 123 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info Listen normally on 7 eno1 fe80::21e:67ff:fefe:f7bb UDP 123 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info Listen normally on 8 vlan109 fe80::3efd:feff:fea0:1888 UDP 123 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info Listen normally on 9 vlan108 fe80::3efd:feff:fea0:1888 UDP 123 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info Listen normally on 10 pxeboot0 fe80::3efd:feff:fea0:1888 UDP 123 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info Listen normally on 11 vlan109 fd00:205::4 UDP 123 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info Listen normally on 12 vlan108 fd00:204::4 UDP 123 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info Listening on routing socket on fd #29 for interface updates 
2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info 0.0.0.0 c016 06 restart 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info 0.0.0.0 c012 02 freq_set kernel 0.000 PPM 2019-11-04T19:14:44.000 controller-1 ntpd[87546]: info 0.0.0.0 c011 01 freq_not_set 2019-11-04T19:14:45.280 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 2.6% (avg per cpu); cpus: 36, Platform: 2.4% (Base: 2.4, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:14:45.282 controller-1 collectd[12276]: info platform memory usage: Usage: 0.7%; Reserved: 126680.6 MiB, Platform: 940.9 MiB (Base: 940.9, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:14:45.283 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.8%, Anon: 953.1 MiB, cgroup-rss: 945.0 MiB, Avail: 125727.5 MiB, Total: 126680.6 MiB 2019-11-04T19:14:45.283 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 0.88%, Anon: 559.5 MiB, Avail: 62973.9 MiB, Total: 63533.4 MiB 2019-11-04T19:14:45.283 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 0.62%, Anon: 393.5 MiB, Avail: 63416.4 MiB, Total: 63810.0 MiB 2019-11-04T19:14:51.000 controller-1 nscd: notice 84472 checking for monitored file `/etc/netgroup': No such file or directory 2019-11-04T19:14:51.000 controller-1 ntpd[87546]: notice ntpd: time slew -0.000424 s 2019-11-04T19:14:51.949 controller-1 ntpd[87546]: info ntpd: time slew -0.000424s 2019-11-04T19:14:51.959 controller-1 systemd[1]: info Started Set time via NTP. 2019-11-04T19:14:51.998 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:52.051 controller-1 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked. 2019-11-04T19:14:52.069 controller-1 systemd[1]: info Starting Network Time Service... 2019-11-04T19:14:52.000 controller-1 ntpd[87624]: notice ntpd 4.2.6p5@1.2349-o Mon Oct 21 00:21:18 UTC 2019 (1) 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: notice proto: precision = 0.030 usec 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info 0.0.0.0 c01d 0d kern kernel time sync enabled 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: debug ntp_io: estimated max descriptors: 1024, initial socket boundary: 16 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info Listen and drop on 0 v4wildcard 0.0.0.0 UDP 123 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info Listen and drop on 1 v6wildcard :: UDP 123 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info Listen normally on 2 lo 127.0.0.1 UDP 123 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info Listen normally on 3 pxeboot0 192.168.202.4 UDP 123 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info Listen normally on 4 docker0 172.17.0.1 UDP 123 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info Listen normally on 5 lo ::1 UDP 123 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info Listen normally on 6 eno1 2620:10a:a001:a103::233 UDP 123 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info Listen normally on 7 eno1 fe80::21e:67ff:fefe:f7bb UDP 123 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info Listen normally on 8 vlan109 fe80::3efd:feff:fea0:1888 UDP 123 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info Listen normally on 9 vlan108 fe80::3efd:feff:fea0:1888 UDP 123 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info Listen normally on 10 pxeboot0 fe80::3efd:feff:fea0:1888 UDP 123 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info Listen normally on 11 vlan109 fd00:205::4 UDP 123 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info Listen 
normally on 12 vlan108 fd00:204::4 UDP 123 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info Listening on routing socket on fd #29 for interface updates 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info 0.0.0.0 c016 06 restart 2019-11-04T19:14:52.000 controller-1 ntpd[87625]: info 0.0.0.0 c012 02 freq_set kernel -7.511 PPM 2019-11-04T19:14:52.110 controller-1 systemd[1]: info Started Network Time Service. 2019-11-04T19:14:52.146 controller-1 systemd[1]: info Reloading. 2019-11-04T19:14:55.281 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 2.3% (avg per cpu); cpus: 36, Platform: 2.2% (Base: 2.2, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:14:55.284 controller-1 collectd[12276]: info platform memory usage: Usage: 0.9%; Reserved: 126667.0 MiB, Platform: 1145.8 MiB (Base: 1145.8, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:14:55.284 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.9%, Anon: 1159.0 MiB, cgroup-rss: 1149.9 MiB, Avail: 125508.1 MiB, Total: 126667.0 MiB 2019-11-04T19:14:55.284 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 1.07%, Anon: 676.8 MiB, Avail: 62856.7 MiB, Total: 63533.5 MiB 2019-11-04T19:14:55.284 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 0.76%, Anon: 482.2 MiB, Avail: 63318.0 MiB, Total: 63800.1 MiB 2019-11-04T19:14:59.686 controller-1 controller_config[13249]: info [DONE] 2019-11-04T19:14:59.780 controller-1 systemd[1]: info Started General TIS config gate. 2019-11-04T19:14:59.803 controller-1 systemd[1]: info Starting Kubernetes Kubelet Server... 2019-11-04T19:14:59.000 controller-1 root: info /usr/bin/kubelet-cgroup-setup.sh(88474): Creating: /sys/fs/cgroup/pids/k8s-infra 2019-11-04T19:14:59.000 controller-1 root: info /usr/bin/kubelet-cgroup-setup.sh(88474): Nothing to do, already configured: /sys/fs/cgroup/cpuset/k8s-infra. 2019-11-04T19:14:59.823 controller-1 systemd[1]: info Starting Titanium Cloud Maintenance Filesystem Monitor... 2019-11-04T19:14:59.843 controller-1 systemd[1]: info Starting Titanium Cloud Maintenance Heartbeat Agent... 2019-11-04T19:14:59.853 controller-1 fsmon[88481]: info Starting fsmond: OK 2019-11-04T19:14:59.865 controller-1 systemd[1]: info Starting Titanium Cloud Maintenance Heartbeat Client... 2019-11-04T19:14:59.888 controller-1 systemd[1]: info Started Getty on tty1. 2019-11-04T19:14:59.897 controller-1 hbsClient[88501]: info Starting hbsClient: OK 2019-11-04T19:14:59.903 controller-1 systemd[1]: info Starting Titanium Cloud Maintenance Alarm Handler Client... 2019-11-04T19:14:59.910 controller-1 hbsAgent[88488]: info Starting hbsAgent: OK 2019-11-04T19:14:59.924 controller-1 systemd[1]: info Started Serial Getty on ttyS0. 2019-11-04T19:14:59.930 controller-1 systemd[1]: info Reached target Login Prompts. 2019-11-04T19:14:59.931 controller-1 mtcalarm[88552]: info Starting mtcalarmd: OK 2019-11-04T19:14:59.948 controller-1 systemd[1]: info Starting Starling-X Maintenance Link Monitor... 2019-11-04T19:14:59.966 controller-1 lmon[88564]: info Starting lmond: OK 2019-11-04T19:14:59.971 controller-1 systemd[1]: info Starting Titanium Cloud Maintenance Goenable Ready... 2019-11-04T19:14:59.977 controller-1 goenabled[88571]: info Goenabled Ready: [ OK ] 2019-11-04T19:14:59.990 controller-1 systemd[1]: info Starting Titanium Cloud Maintenance Logger... 2019-11-04T19:15:00.005 controller-1 systemd[1]: info Starting Service Management Watchdog... 
2019-11-04T19:15:00.016 controller-1 mtclog[88576]: info Starting mtclogd: OK 2019-11-04T19:15:00.024 controller-1 sm-watchdog[88582]: info Starting sm-watchdog: OK 2019-11-04T19:15:00.048 controller-1 systemd[1]: info Started Titanium Cloud Maintenance Filesystem Monitor. 2019-11-04T19:15:00.056 controller-1 systemd[1]: info Started Titanium Cloud Maintenance Heartbeat Agent. 2019-11-04T19:15:00.064 controller-1 systemd[1]: info Started Titanium Cloud Maintenance Heartbeat Client. 2019-11-04T19:15:00.072 controller-1 systemd[1]: info Started Titanium Cloud Maintenance Alarm Handler Client. 2019-11-04T19:15:00.081 controller-1 systemd[1]: info Started Starling-X Maintenance Link Monitor. 2019-11-04T19:15:00.098 controller-1 systemd[1]: info Started Titanium Cloud Maintenance Goenable Ready. 2019-11-04T19:15:00.106 controller-1 systemd[1]: info Started Titanium Cloud Maintenance Logger. 2019-11-04T19:15:00.113 controller-1 systemd[1]: info Started Service Management Watchdog. 2019-11-04T19:15:00.120 controller-1 systemd[1]: info Started Kubernetes Kubelet Server. 2019-11-04T19:15:00.134 controller-1 systemd[1]: info Starting Service Management Unit... 2019-11-04T19:15:00.151 controller-1 systemd[1]: info Starting Titanium Cloud Maintenance Command Handler Client... 2019-11-04T19:15:00.179 controller-1 mtcClient[88607]: info Starting mtcClient: OK 2019-11-04T19:15:00.181 controller-1 systemd[1]: info Started Titanium Cloud Maintenance Command Handler Client. 2019-11-04T19:15:00.393 controller-1 sm[88602]: info Starting sm: OK 2019-11-04T19:15:00.393 controller-1 systemd[1]: info PID file /var/run/sm.pid not readable (yet?) after start. 2019-11-04T19:15:00.402 controller-1 systemd[1]: info Started Service Management Unit. 2019-11-04T19:15:00.418 controller-1 systemd[1]: info Starting Service Management API Unit... 2019-11-04T19:15:00.425 controller-1 systemd[1]: info Started Service Management Shutdown Unit. 2019-11-04T19:15:00.445 controller-1 sm-api[88762]: info Starting sm-api: OK 2019-11-04T19:15:00.447 controller-1 systemd[1]: info Started Service Management API Unit. 2019-11-04T19:15:00.464 controller-1 systemd[1]: info Starting Service Management Event Recorder Unit... 2019-11-04T19:15:00.481 controller-1 sm-eru[88771]: info Starting sm-eru: OK 2019-11-04T19:15:00.482 controller-1 systemd[1]: info PID file /var/run/sm-eru.pid not readable (yet?) after start. 2019-11-04T19:15:00.485 controller-1 systemd[1]: info Started Service Management Event Recorder Unit. 2019-11-04T19:15:00.500 controller-1 systemd[1]: info Starting Titanium Cloud Maintenance Process Monitor... 2019-11-04T19:15:00.519 controller-1 pmon[88779]: info Starting pmond: OK 2019-11-04T19:15:00.519 controller-1 systemd[1]: info PID file /var/run/pmond.pid not readable (yet?) after start. 2019-11-04T19:15:00.526 controller-1 systemd[1]: info Started Titanium Cloud Maintenance Process Monitor. 2019-11-04T19:15:00.535 controller-1 kubelet[88595]: info Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 2019-11-04T19:15:00.535 controller-1 kubelet[88595]: info Flag --cpu-manager-policy has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
2019-11-04T19:15:00.538 controller-1 kubelet[88595]: info Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 2019-11-04T19:15:00.538 controller-1 kubelet[88595]: info Flag --cpu-manager-policy has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 2019-11-04T19:15:00.544 controller-1 systemd[1]: info Starting Titanium Cloud Maintenance Host Watchdog... 2019-11-04T19:15:00.562 controller-1 systemd[1]: info Started Kubernetes systemd probe. 2019-11-04T19:15:00.562 controller-1 hostw[88791]: info Starting hostwd: OK 2019-11-04T19:15:00.569 controller-1 systemd[1]: info Started Titanium Cloud Maintenance Host Watchdog. 2019-11-04T19:15:00.570 controller-1 kubelet[88595]: info I1104 19:15:00.570556 88595 server.go:410] Version: v1.16.2 2019-11-04T19:15:00.571 controller-1 kubelet[88595]: info I1104 19:15:00.571284 88595 plugins.go:100] No cloud provider specified. 2019-11-04T19:15:00.571 controller-1 kubelet[88595]: info I1104 19:15:00.571357 88595 server.go:773] Client rotation is on, will bootstrap in background 2019-11-04T19:15:00.576 controller-1 kubelet[88595]: info I1104 19:15:00.575977 88595 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 2019-11-04T19:15:00.589 controller-1 systemd[1]: info Reached target Multi-User System. 2019-11-04T19:15:00.604 controller-1 systemd[1]: info Starting Update UTMP about System Runlevel Changes... 2019-11-04T19:15:00.612 controller-1 systemd[1]: info Started Stop Read-Ahead Data Collection 10s After Completed Startup. 2019-11-04T19:15:00.637 controller-1 systemd[1]: info Started Update UTMP about System Runlevel Changes. 
2019-11-04T19:15:00.637 controller-1 kubelet[88595]: info I1104 19:15:00.637755 88595 container_manager_linux.go:265] container manager verified user specified cgroup-root exists: [k8s-infra] 2019-11-04T19:15:00.637 controller-1 kubelet[88595]: info I1104 19:15:00.637770 88595 container_manager_linux.go:270] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/k8s-infra CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} 2019-11-04T19:15:00.637 controller-1 kubelet[88595]: info I1104 19:15:00.637859 88595 fake_topology_manager.go:29] [fake topologymanager] NewFakeManager 2019-11-04T19:15:00.637 controller-1 kubelet[88595]: info I1104 19:15:00.637865 88595 container_manager_linux.go:305] Creating device plugin manager: true 2019-11-04T19:15:00.639 controller-1 kubelet[88595]: info I1104 19:15:00.639091 88595 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{kubelet.sock /var/lib/kubelet/device-plugins/ map[] {0 0} {{} [0 0 0]} 0x415c610 0xd0ae3f8 0x415d010 map[] map[] map[] map[] map[] 0xc000993e90 [0 1] 0xd0ae3f8} 2019-11-04T19:15:00.639 controller-1 kubelet[88595]: info I1104 19:15:00.639158 88595 state_mem.go:36] [cpumanager] initializing new in-memory state store 2019-11-04T19:15:00.639 controller-1 kubelet[88595]: info I1104 19:15:00.639776 88595 fake_topology_manager.go:39] [fake topologymanager] AddHintProvider HintProvider: &{{0 0} 0xd0ae3f8 10000000000 0xc000803680 map[memory:{{104857600 0} {} BinarySI}]} 2019-11-04T19:15:00.640 controller-1 kubelet[88595]: info I1104 19:15:00.640454 88595 kubelet.go:287] Adding pod path: /etc/kubernetes/manifests 2019-11-04T19:15:00.641 controller-1 kubelet[88595]: info I1104 19:15:00.641066 88595 kubelet.go:312] Watching apiserver 2019-11-04T19:15:00.641 controller-1 kubelet[88595]: info I1104 19:15:00.641265 88595 kubelet.go:491] IPv6 node IP (fd00:204::4), assume IPv6 operation 2019-11-04T19:15:00.643 controller-1 kubelet[88595]: info I1104 19:15:00.643844 88595 client.go:75] Connecting to docker on unix:///var/run/docker.sock 2019-11-04T19:15:00.644 controller-1 kubelet[88595]: info I1104 19:15:00.644713 88595 client.go:104] Start docker client with request timeout=2m0s 2019-11-04T19:15:00.645 controller-1 systemd[1]: info Startup finished in 4.060s (kernel) + 3.943s (initrd) + 2min 34.624s (userspace) = 2min 42.628s. 
2019-11-04T19:15:00.647 controller-1 kubelet[88595]: info W1104 19:15:00.647510 88595 docker_service.go:563] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth" 2019-11-04T19:15:00.647 controller-1 kubelet[88595]: info I1104 19:15:00.647536 88595 docker_service.go:240] Hairpin mode set to "hairpin-veth" 2019-11-04T19:15:00.751 controller-1 kubelet[88595]: info I1104 19:15:00.751051 88595 docker_service.go:255] Docker cri networking managed by cni 2019-11-04T19:15:00.768 controller-1 kubelet[88595]: info I1104 19:15:00.768182 88595 docker_service.go:260] Docker Info: &{ID:BJK4:YCVI:ED6B:6QSR:ZJGT:567O:YJBK:4ORJ:RBQE:SXHN:HWIG:HLTO Containers:31 ContainersRunning:0 ContainersPaused:0 ContainersStopped:31 Images:28 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:false IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:22 OomKillDisable:true NGoroutines:37 SystemTime:2019-11-04T19:15:00.751728872Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-957.21.3.el7.2.tis.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000ad81c0 NCPU:36 MemTotal:134897729536 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy:http://yow-proxomatic.wrs.com:3128 HTTPSProxy: NoProxy:localhost,127.0.0.1,registry.local,[fd00:204::2],[fd00:204::3],[2620:10a:a001:a103::234],[2620:10a:a001:a103::232],[fd00:204::4],[2620:10a:a001:a103::233],tis-lab-registry.cumulus.wrs.com Name:controller-1 Labels:[] ExperimentalBuild:false ServerVersion:18.09.6 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:bb71b10fd8f58240ca47fbb579b9d1028eea7c84 Expected:bb71b10fd8f58240ca47fbb579b9d1028eea7c84} RuncCommit:{ID:2b18fe1d885ee5083ef9f0838fee39b62d653e30 Expected:2b18fe1d885ee5083ef9f0838fee39b62d653e30} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine Warnings:[WARNING: No kernel memory limit support]} 2019-11-04T19:15:00.768 controller-1 kubelet[88595]: info I1104 19:15:00.768283 88595 docker_service.go:273] Setting cgroupDriver to cgroupfs 2019-11-04T19:15:00.785 controller-1 kubelet[88595]: info I1104 19:15:00.785422 88595 remote_runtime.go:59] parsed scheme: "" 2019-11-04T19:15:00.785 controller-1 kubelet[88595]: info I1104 19:15:00.785439 88595 remote_runtime.go:59] scheme "" not registered, fallback to default scheme 2019-11-04T19:15:00.786 controller-1 kubelet[88595]: info I1104 19:15:00.786435 88595 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] } 2019-11-04T19:15:00.786 controller-1 kubelet[88595]: info I1104 19:15:00.786454 88595 clientconn.go:577] ClientConn switching balancer to "pick_first" 2019-11-04T19:15:00.786 controller-1 kubelet[88595]: info I1104 
19:15:00.786914 88595 remote_image.go:50] parsed scheme: "" 2019-11-04T19:15:00.786 controller-1 kubelet[88595]: info I1104 19:15:00.786923 88595 remote_image.go:50] scheme "" not registered, fallback to default scheme 2019-11-04T19:15:00.786 controller-1 kubelet[88595]: info I1104 19:15:00.786944 88595 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] } 2019-11-04T19:15:00.786 controller-1 kubelet[88595]: info I1104 19:15:00.786949 88595 clientconn.go:577] ClientConn switching balancer to "pick_first" 2019-11-04T19:15:05.282 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 2.2% (avg per cpu); cpus: 36, Platform: 2.2% (Base: 2.2, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:15:05.284 controller-1 collectd[12276]: info platform memory usage: Usage: 0.8%; Reserved: 126638.5 MiB, Platform: 1010.1 MiB (Base: 1010.1, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:15:05.284 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.8%, Anon: 1023.2 MiB, cgroup-rss: 1014.3 MiB, Avail: 125615.3 MiB, Total: 126638.5 MiB 2019-11-04T19:15:05.285 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 1.10%, Anon: 698.3 MiB, Avail: 62814.0 MiB, Total: 63512.3 MiB 2019-11-04T19:15:05.285 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 0.51%, Anon: 324.9 MiB, Avail: 63469.3 MiB, Total: 63794.2 MiB 2019-11-04T19:15:15.281 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 0.4% (avg per cpu); cpus: 36, Platform: 0.3% (Base: 0.3, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:15:15.284 controller-1 collectd[12276]: info platform memory usage: Usage: 0.8%; Reserved: 126642.4 MiB, Platform: 1011.1 MiB (Base: 1011.1, k8s-system: 0.0), k8s-addon: 0.0 2019-11-04T19:15:15.285 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.8%, Anon: 1024.3 MiB, cgroup-rss: 1015.2 MiB, Avail: 125618.1 MiB, Total: 126642.4 MiB 2019-11-04T19:15:15.285 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 1.10%, Anon: 698.6 MiB, Avail: 62818.8 MiB, Total: 63517.4 MiB 2019-11-04T19:15:15.285 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 0.51%, Anon: 325.6 MiB, Avail: 63467.6 MiB, Total: 63793.3 MiB 2019-11-04T19:15:15.297 controller-1 collectd[12276]: info interface plugin Link Status Query Response:2: 2019-11-04T19:15:15.297 controller-1 collectd[12276]: info {u'status': u'pass', u'link_info': [{u'network': u'mgmt', u'links': [{u'state': u'Up', u'name': u'ens801f0', u'time': u'1572894901970795'}, {u'state': u'Up', u'name': u'ens801f1', u'time': u'1572894901970801'}]}, {u'network': u'cluster-host', u'links': [{u'state': u'Up', u'name': u'ens801f0', u'time': u'1572894901970893'}, {u'state': u'Up', u'name': u'ens801f1', u'time': u'1572894901970898'}]}, {u'network': u'oam', u'links': [{u'state': u'Up', u'name': u'eno1', u'time': u'1572894901970950'}]}]} 2019-11-04T19:15:15.297 controller-1 collectd[12276]: info interface plugin mgmt link one 'ens801f0' is Up 2019-11-04T19:15:15.297 controller-1 collectd[12276]: info interface plugin mgmt link two 'ens801f1' is Up 2019-11-04T19:15:15.297 controller-1 collectd[12276]: info interface plugin cluster-host link one 'ens801f0' is Up 2019-11-04T19:15:15.297 controller-1 collectd[12276]: info interface plugin cluster-host link two 'ens801f1' is Up 2019-11-04T19:15:15.298 controller-1 collectd[12276]: info interface plugin mgmt 100% ; link one 'ens801f0' went Up at 2019-11-04 19:15:01 ; link two 'ens801f1' went Up at 2019-11-04 
19:15:01 2019-11-04T19:15:15.298 controller-1 collectd[12276]: info interface plugin oam 100% ; link one 'eno1' went Up at 2019-11-04 19:15:01 2019-11-04T19:15:15.298 controller-1 collectd[12276]: info interface plugin cluster-host 100% ; link one 'ens801f0' went Up at 2019-11-04 19:15:01 ; link two 'ens801f1' went Up at 2019-11-04 19:15:01 2019-11-04T19:15:21.089 controller-1 kubelet[88595]: info E1104 19:15:21.089135 88595 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated. 2019-11-04T19:15:21.089 controller-1 kubelet[88595]: info For verbose messaging see aws.Config.CredentialsChainVerboseErrors 2019-11-04T19:15:21.092 controller-1 kubelet[88595]: info I1104 19:15:21.092892 88595 kuberuntime_manager.go:207] Container runtime docker initialized, version: 18.09.6, apiVersion: 1.39.0 2019-11-04T19:15:21.095 controller-1 kubelet[88595]: info I1104 19:15:21.095027 88595 server.go:1065] Started kubelet 2019-11-04T19:15:21.095 controller-1 kubelet[88595]: info I1104 19:15:21.095097 88595 server.go:145] Starting to listen on 0.0.0.0:10250 2019-11-04T19:15:21.095 controller-1 kubelet[88595]: info E1104 19:15:21.095178 88595 kubelet.go:1302] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache 2019-11-04T19:15:21.098 controller-1 kubelet[88595]: info I1104 19:15:21.098538 88595 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer 2019-11-04T19:15:21.098 controller-1 kubelet[88595]: info I1104 19:15:21.098717 88595 status_manager.go:156] Starting to sync pod status with apiserver 2019-11-04T19:15:21.098 controller-1 kubelet[88595]: info I1104 19:15:21.098927 88595 kubelet.go:1822] Starting kubelet main sync loop. 2019-11-04T19:15:21.099 controller-1 kubelet[88595]: info I1104 19:15:21.098984 88595 kubelet.go:1839] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful] 2019-11-04T19:15:21.099 controller-1 kubelet[88595]: info I1104 19:15:21.099568 88595 desired_state_of_world_populator.go:131] Desired state populator starts to run 2019-11-04T19:15:21.100 controller-1 kubelet[88595]: info I1104 19:15:21.099802 88595 volume_manager.go:249] Starting Kubelet Volume Manager 2019-11-04T19:15:21.102 controller-1 kubelet[88595]: info I1104 19:15:21.102622 88595 server.go:354] Adding debug handlers to kubelet server. 
2019-11-04T19:15:21.104 controller-1 kubelet[88595]: info I1104 19:15:21.104444 88595 clientconn.go:104] parsed scheme: "unix" 2019-11-04T19:15:21.104 controller-1 kubelet[88595]: info I1104 19:15:21.104468 88595 clientconn.go:104] scheme "unix" not registered, fallback to default scheme 2019-11-04T19:15:21.104 controller-1 kubelet[88595]: info I1104 19:15:21.104572 88595 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] } 2019-11-04T19:15:21.104 controller-1 kubelet[88595]: info I1104 19:15:21.104587 88595 clientconn.go:577] ClientConn switching balancer to "pick_first" 2019-11-04T19:15:21.170 controller-1 kubelet[88595]: info W1104 19:15:21.170511 88595 docker_sandbox.go:394] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "mon-filebeat-bppwv_monitor": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" 2019-11-04T19:15:21.172 controller-1 kubelet[88595]: info W1104 19:15:21.172383 88595 docker_sandbox.go:394] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "mon-filebeat-bppwv_monitor": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" 2019-11-04T19:15:21.190 controller-1 kubelet[88595]: info I1104 19:15:21.190382 88595 cpu_manager.go:166] [cpumanager] starting with none policy 2019-11-04T19:15:21.190 controller-1 kubelet[88595]: info I1104 19:15:21.190401 88595 cpu_manager.go:167] [cpumanager] reconciling every 10s 2019-11-04T19:15:21.190 controller-1 kubelet[88595]: info I1104 19:15:21.190411 88595 policy_none.go:42] [cpumanager] none policy: Start 2019-11-04T19:15:21.199 controller-1 kubelet[88595]: info I1104 19:15:21.199355 88595 kubelet.go:1839] skipping pod synchronization - container runtime status check may not have completed yet 2019-11-04T19:15:21.199 controller-1 kubelet[88595]: info I1104 19:15:21.199571 88595 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach 2019-11-04T19:15:21.199 controller-1 kubelet[88595]: info I1104 19:15:21.199611 88595 kuberuntime_manager.go:961] updating runtime config through cri with podcidr fd00:206:0:0:1::/80 2019-11-04T19:15:21.200 controller-1 kubelet[88595]: info I1104 19:15:21.199745 88595 docker_service.go:355] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:fd00:206:0:0:1::/80,},} 2019-11-04T19:15:21.201 controller-1 kubelet[88595]: info I1104 19:15:21.200996 88595 kubelet_network.go:77] Setting Pod CIDR: -> fd00:206:0:0:1::/80 2019-11-04T19:15:21.203 controller-1 kubelet[88595]: info I1104 19:15:21.203586 88595 kubelet_node_status.go:72] Attempting to register node controller-1 2019-11-04T19:15:21.204 controller-1 kubelet[88595]: info W1104 19:15:21.204797 88595 cni.go:328] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" 2019-11-04T19:15:21.207 controller-1 kubelet[88595]: info I1104 19:15:21.207406 88595 kubelet_node_status.go:114] Node controller-1 was previously registered 2019-11-04T19:15:21.207 controller-1 kubelet[88595]: info I1104 19:15:21.207846 88595 kubelet_node_status.go:75] Successfully registered node controller-1 
2019-11-04T19:15:21.212 controller-1 kubelet[88595]: info I1104 19:15:21.212119 88595 setters.go:539] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2019-11-04 19:15:21.212097282 +0000 UTC m=+21.084787212 LastTransitionTime:2019-11-04 19:15:21.212097282 +0000 UTC m=+21.084787212 Reason:KubeletNotReady Message:container runtime status check may not have completed yet} 2019-11-04T19:15:21.236 controller-1 kubelet[88595]: info I1104 19:15:21.236327 88595 plugin_manager.go:116] Starting Kubelet Plugin Manager 2019-11-04T19:15:21.396 controller-1 kubelet[88595]: info E1104 19:15:21.396109 88595 cni.go:379] Error deleting monitor_mon-filebeat-bppwv/f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436 from network multus/multus-cni-network: Multus: error in invoke Conflist Del - "chain": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable 2019-11-04T19:15:21.396 controller-1 kubelet[88595]: info E1104 19:15:21.396786 88595 remote_runtime.go:128] StopPodSandbox "f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" from runtime service failed: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "mon-filebeat-bppwv_monitor" network: Multus: error in invoke Conflist Del - "chain": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable 2019-11-04T19:15:21.396 controller-1 kubelet[88595]: info E1104 19:15:21.396808 88595 kuberuntime_gc.go:170] Failed to stop sandbox "f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" before removing: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "mon-filebeat-bppwv_monitor" network: Multus: error in invoke Conflist Del - "chain": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable 2019-11-04T19:15:21.400 controller-1 kubelet[88595]: info E1104 19:15:21.400273 88595 remote_runtime.go:295] ContainerStatus "9fbb205ff94b0142dc5f26f2b287a71b089048bb850305f9a3d16528be6cc800" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 9fbb205ff94b0142dc5f26f2b287a71b089048bb850305f9a3d16528be6cc800 2019-11-04T19:15:21.402 controller-1 kubelet[88595]: info I1104 19:15:21.402311 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/ebd94c790b626607a637c536da426b67-ca-certs") pod "kube-apiserver-controller-1" (UID: "ebd94c790b626607a637c536da426b67") 2019-11-04T19:15:21.402 controller-1 kubelet[88595]: info I1104 19:15:21.402369 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/ebd94c790b626607a637c536da426b67-etc-pki") pod "kube-apiserver-controller-1" (UID: "ebd94c790b626607a637c536da426b67") 2019-11-04T19:15:21.402 controller-1 kubelet[88595]: info I1104 19:15:21.402447 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: 
"kubernetes.io/host-path/ebd94c790b626607a637c536da426b67-k8s-certs") pod "kube-apiserver-controller-1" (UID: "ebd94c790b626607a637c536da426b67") 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info W1104 19:15:21.409025 88595 pod_container_deletor.go:75] Container "8a48cde1943c4f7f2160103b715c84769145784819dc41c1ee2fd2f9176361a0" not found in pod's containers 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info W1104 19:15:21.409109 88595 pod_container_deletor.go:75] Container "7a008a2f396814ecf5e1f8e63f9cc46fb62fa0deac5166c9ab721d210db7b5db" not found in pod's containers 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info W1104 19:15:21.409308 88595 pod_container_deletor.go:75] Container "dd3ec6de4fc48f4088e52d7e7f300adbf08db41c5fafbefcce527a1e9c38d94c" not found in pod's containers 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info W1104 19:15:21.409358 88595 pod_container_deletor.go:75] Container "fdc3bca15c212247175b4571f46b8c8a38dc38a371c80a0d7a7be03ed5c19c2d" not found in pod's containers 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info W1104 19:15:21.409398 88595 pod_container_deletor.go:75] Container "f7e8c74c86138bec90917a41dacfb8f94a9c2f5043037fb74592a1e22e3306b0" not found in pod's containers 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info W1104 19:15:21.409420 88595 pod_container_deletor.go:75] Container "7c5dd5b9f49fe00cc80931492a699aaa5df4583ba13af0a5358df9f765ff027f" not found in pod's containers 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info W1104 19:15:21.409441 88595 pod_container_deletor.go:75] Container "1d31a03a41abba2ed003ce96f00b6dd89c71e7d0c56fac224e6c1e8f2abeb95e" not found in pod's containers 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info W1104 19:15:21.409571 88595 pod_container_deletor.go:75] Container "acbc4ddc6fb8489789c3a08ac833dc119395db5b82c7bdb0a2a40aedc6f01892" not found in pod's containers 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info W1104 19:15:21.409605 88595 pod_container_deletor.go:75] Container "0f0e84cd9ed8c7ff69d1176eeea53cfc26857255d423ce89818b076ca8e188c9" not found in pod's containers 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info W1104 19:15:21.409668 88595 pod_container_deletor.go:75] Container "5527c50c98ceb60ca6916d66ae258b83680fecd2ca3fe43a5e6a818627256a31" not found in pod's containers 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info W1104 19:15:21.409747 88595 pod_container_deletor.go:75] Container "4641a53cf4bb76bb101f9f0b1a9273a32483a27aeb6871757b062dd420e55b44" not found in pod's containers 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info W1104 19:15:21.409781 88595 pod_container_deletor.go:75] Container "ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" not found in pod's containers 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info W1104 19:15:21.409816 88595 pod_container_deletor.go:75] Container "f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" not found in pod's containers 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info W1104 19:15:21.409854 88595 pod_container_deletor.go:75] Container "fdc86871d7895c0a3d407c48d84b73b50f2a31e201e4b658ac0b0b8e489144df" not found in pod's containers 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info W1104 19:15:21.409874 88595 pod_container_deletor.go:75] Container "6be52826841dfadedcbe4b597d8335dd8c3a552c4a294bd8c3dd3d646f4b3251" not found in pod's containers 2019-11-04T19:15:21.409 controller-1 kubelet[88595]: info E1104 
19:15:21.409912 88595 remote_runtime.go:295] ContainerStatus "b13fda1349e2bc4b968e0c582198b617dc547cf327bd1412de4729a78b30a330" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: b13fda1349e2bc4b968e0c582198b617dc547cf327bd1412de4729a78b30a330 2019-11-04T19:15:21.411 controller-1 kubelet[88595]: info E1104 19:15:21.411030 88595 remote_runtime.go:295] ContainerStatus "cdfa84f769390795b9c5b6fd50f7a3ddedea3c41e495c91ac60e6af7119db8a9" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: cdfa84f769390795b9c5b6fd50f7a3ddedea3c41e495c91ac60e6af7119db8a9 2019-11-04T19:15:21.412 controller-1 kubelet[88595]: info E1104 19:15:21.412015 88595 remote_runtime.go:295] ContainerStatus "46eadbca4094fef7fa69cc4c451b159a408d952010b74b22fa3ede985450a31d" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 46eadbca4094fef7fa69cc4c451b159a408d952010b74b22fa3ede985450a31d 2019-11-04T19:15:21.502 controller-1 kubelet[88595]: info I1104 19:15:21.502640 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/fa004662b97422f9bda923908ff7217d-ca-certs") pod "kube-controller-manager-controller-1" (UID: "fa004662b97422f9bda923908ff7217d") 2019-11-04T19:15:21.502 controller-1 kubelet[88595]: info I1104 19:15:21.502672 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/05acd7aa-9952-48d4-a3a0-948b42b50d0a-kube-proxy") pod "kube-proxy-4hlww" (UID: "05acd7aa-9952-48d4-a3a0-948b42b50d0a") 2019-11-04T19:15:21.502 controller-1 kubelet[88595]: info I1104 19:15:21.502693 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/05acd7aa-9952-48d4-a3a0-948b42b50d0a-lib-modules") pod "kube-proxy-4hlww" (UID: "05acd7aa-9952-48d4-a3a0-948b42b50d0a") 2019-11-04T19:15:21.502 controller-1 kubelet[88595]: info I1104 19:15:21.502738 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "varlog" (UniqueName: "kubernetes.io/host-path/83737bb0-d735-4254-85ce-49f1f454881d-varlog") pod "mon-filebeat-bppwv" (UID: "83737bb0-d735-4254-85ce-49f1f454881d") 2019-11-04T19:15:21.502 controller-1 kubelet[88595]: info I1104 19:15:21.502787 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "filebeat-config" (UniqueName: "kubernetes.io/secret/83737bb0-d735-4254-85ce-49f1f454881d-filebeat-config") pod "mon-filebeat-bppwv" (UID: "83737bb0-d735-4254-85ce-49f1f454881d") 2019-11-04T19:15:21.502 controller-1 kubelet[88595]: info I1104 19:15:21.502849 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "data" (UniqueName: "kubernetes.io/host-path/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c-data") pod "mon-metricbeat-tqzgn" (UID: "b21e12f7-bf6b-435b-84f6-955f2ffcbb7c") 2019-11-04T19:15:21.502 controller-1 kubelet[88595]: info I1104 19:15:21.502879 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "proc" (UniqueName: "kubernetes.io/host-path/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c-proc") pod "mon-metricbeat-tqzgn" (UID: "b21e12f7-bf6b-435b-84f6-955f2ffcbb7c") 2019-11-04T19:15:21.502 controller-1 kubelet[88595]: info I1104 19:15:21.502951 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" 
(UniqueName: "kubernetes.io/host-path/fa004662b97422f9bda923908ff7217d-k8s-certs") pod "kube-controller-manager-controller-1" (UID: "fa004662b97422f9bda923908ff7217d") 2019-11-04T19:15:21.502 controller-1 kubelet[88595]: info I1104 19:15:21.502982 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/fa004662b97422f9bda923908ff7217d-kubeconfig") pod "kube-controller-manager-controller-1" (UID: "fa004662b97422f9bda923908ff7217d") 2019-11-04T19:15:21.503 controller-1 kubelet[88595]: info I1104 19:15:21.503006 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "calico-node-token-46p7c" (UniqueName: "kubernetes.io/secret/69cee97f-bae5-47b1-96de-b449df3dfe40-calico-node-token-46p7c") pod "calico-node-jd4qm" (UID: "69cee97f-bae5-47b1-96de-b449df3dfe40") 2019-11-04T19:15:21.503 controller-1 kubelet[88595]: info I1104 19:15:21.503031 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "dockersock" (UniqueName: "kubernetes.io/host-path/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c-dockersock") pod "mon-metricbeat-tqzgn" (UID: "b21e12f7-bf6b-435b-84f6-955f2ffcbb7c") 2019-11-04T19:15:21.503 controller-1 kubelet[88595]: info I1104 19:15:21.503123 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cgroup" (UniqueName: "kubernetes.io/host-path/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c-cgroup") pod "mon-metricbeat-tqzgn" (UID: "b21e12f7-bf6b-435b-84f6-955f2ffcbb7c") 2019-11-04T19:15:21.503 controller-1 kubelet[88595]: info I1104 19:15:21.503164 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "setupscript" (UniqueName: "kubernetes.io/configmap/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c-setupscript") pod "mon-metricbeat-tqzgn" (UID: "b21e12f7-bf6b-435b-84f6-955f2ffcbb7c") 2019-11-04T19:15:21.503 controller-1 kubelet[88595]: info I1104 19:15:21.503188 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-9m2nq" (UniqueName: "kubernetes.io/secret/05acd7aa-9952-48d4-a3a0-948b42b50d0a-kube-proxy-token-9m2nq") pod "kube-proxy-4hlww" (UID: "05acd7aa-9952-48d4-a3a0-948b42b50d0a") 2019-11-04T19:15:21.503 controller-1 kubelet[88595]: info I1104 19:15:21.503210 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/69cee97f-bae5-47b1-96de-b449df3dfe40-lib-modules") pod "calico-node-jd4qm" (UID: "69cee97f-bae5-47b1-96de-b449df3dfe40") 2019-11-04T19:15:21.503 controller-1 kubelet[88595]: info I1104 19:15:21.503230 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-net-dir" (UniqueName: "kubernetes.io/host-path/69cee97f-bae5-47b1-96de-b449df3dfe40-cni-net-dir") pod "calico-node-jd4qm" (UID: "69cee97f-bae5-47b1-96de-b449df3dfe40") 2019-11-04T19:15:21.503 controller-1 kubelet[88595]: info I1104 19:15:21.503251 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "data" (UniqueName: "kubernetes.io/host-path/83737bb0-d735-4254-85ce-49f1f454881d-data") pod "mon-filebeat-bppwv" (UID: "83737bb0-d735-4254-85ce-49f1f454881d") 2019-11-04T19:15:21.503 controller-1 kubelet[88595]: info I1104 19:15:21.503272 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "modules" (UniqueName: 
"kubernetes.io/secret/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c-modules") pod "mon-metricbeat-tqzgn" (UID: "b21e12f7-bf6b-435b-84f6-955f2ffcbb7c") 2019-11-04T19:15:21.503 controller-1 kubelet[88595]: info I1104 19:15:21.503347 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/fa004662b97422f9bda923908ff7217d-etc-pki") pod "kube-controller-manager-controller-1" (UID: "fa004662b97422f9bda923908ff7217d") 2019-11-04T19:15:21.503 controller-1 kubelet[88595]: info I1104 19:15:21.503427 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "var-lib-calico" (UniqueName: "kubernetes.io/host-path/69cee97f-bae5-47b1-96de-b449df3dfe40-var-lib-calico") pod "calico-node-jd4qm" (UID: "69cee97f-bae5-47b1-96de-b449df3dfe40") 2019-11-04T19:15:21.503 controller-1 kubelet[88595]: info I1104 19:15:21.503458 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config" (UniqueName: "kubernetes.io/secret/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c-config") pod "mon-metricbeat-tqzgn" (UID: "b21e12f7-bf6b-435b-84f6-955f2ffcbb7c") 2019-11-04T19:15:21.503 controller-1 kubelet[88595]: info I1104 19:15:21.503484 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "varlibdockercontainers" (UniqueName: "kubernetes.io/host-path/83737bb0-d735-4254-85ce-49f1f454881d-varlibdockercontainers") pod "mon-filebeat-bppwv" (UID: "83737bb0-d735-4254-85ce-49f1f454881d") 2019-11-04T19:15:21.503 controller-1 kubelet[88595]: info I1104 19:15:21.503512 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/05acd7aa-9952-48d4-a3a0-948b42b50d0a-xtables-lock") pod "kube-proxy-4hlww" (UID: "05acd7aa-9952-48d4-a3a0-948b42b50d0a") 2019-11-04T19:15:21.503 controller-1 kubelet[88595]: info I1104 19:15:21.503553 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "var-run-calico" (UniqueName: "kubernetes.io/host-path/69cee97f-bae5-47b1-96de-b449df3dfe40-var-run-calico") pod "calico-node-jd4qm" (UID: "69cee97f-bae5-47b1-96de-b449df3dfe40") 2019-11-04T19:15:21.504 controller-1 kubelet[88595]: info I1104 19:15:21.504124 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/69cee97f-bae5-47b1-96de-b449df3dfe40-xtables-lock") pod "calico-node-jd4qm" (UID: "69cee97f-bae5-47b1-96de-b449df3dfe40") 2019-11-04T19:15:21.504 controller-1 kubelet[88595]: info I1104 19:15:21.504171 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-bin-dir" (UniqueName: "kubernetes.io/host-path/69cee97f-bae5-47b1-96de-b449df3dfe40-cni-bin-dir") pod "calico-node-jd4qm" (UID: "69cee97f-bae5-47b1-96de-b449df3dfe40") 2019-11-04T19:15:21.504 controller-1 kubelet[88595]: info I1104 19:15:21.504259 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "host-local-net-dir" (UniqueName: "kubernetes.io/host-path/69cee97f-bae5-47b1-96de-b449df3dfe40-host-local-net-dir") pod "calico-node-jd4qm" (UID: "69cee97f-bae5-47b1-96de-b449df3dfe40") 2019-11-04T19:15:21.504 controller-1 kubelet[88595]: info I1104 19:15:21.504326 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "setupscript" (UniqueName: 
"kubernetes.io/configmap/83737bb0-d735-4254-85ce-49f1f454881d-setupscript") pod "mon-filebeat-bppwv" (UID: "83737bb0-d735-4254-85ce-49f1f454881d") 2019-11-04T19:15:21.504 controller-1 kubelet[88595]: info I1104 19:15:21.504381 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "mon-filebeat-token-z6rf8" (UniqueName: "kubernetes.io/secret/83737bb0-d735-4254-85ce-49f1f454881d-mon-filebeat-token-z6rf8") pod "mon-filebeat-bppwv" (UID: "83737bb0-d735-4254-85ce-49f1f454881d") 2019-11-04T19:15:21.504 controller-1 kubelet[88595]: info I1104 19:15:21.504418 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/a01478acdc49404c8f4dffa14b34b63d-kubeconfig") pod "kube-scheduler-controller-1" (UID: "a01478acdc49404c8f4dffa14b34b63d") 2019-11-04T19:15:21.504 controller-1 kubelet[88595]: info I1104 19:15:21.504580 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "root" (UniqueName: "kubernetes.io/host-path/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c-root") pod "mon-metricbeat-tqzgn" (UID: "b21e12f7-bf6b-435b-84f6-955f2ffcbb7c") 2019-11-04T19:15:21.504 controller-1 kubelet[88595]: info I1104 19:15:21.504645 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "mon-metricbeat-token-5vdfc" (UniqueName: "kubernetes.io/secret/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c-mon-metricbeat-token-5vdfc") pod "mon-metricbeat-tqzgn" (UID: "b21e12f7-bf6b-435b-84f6-955f2ffcbb7c") 2019-11-04T19:15:21.504 controller-1 kubelet[88595]: info I1104 19:15:21.504704 88595 reconciler.go:154] Reconciler: start to sync state 2019-11-04T19:15:21.620 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/05acd7aa-9952-48d4-a3a0-948b42b50d0a/volumes/kubernetes.io~secret/kube-proxy-token-9m2nq. 2019-11-04T19:15:21.632 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/69cee97f-bae5-47b1-96de-b449df3dfe40/volumes/kubernetes.io~secret/calico-node-token-46p7c. 2019-11-04T19:15:21.722 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/83737bb0-d735-4254-85ce-49f1f454881d/volumes/kubernetes.io~secret/filebeat-config. 2019-11-04T19:15:21.772 controller-1 kubelet[88595]: info W1104 19:15:21.772126 88595 docker_sandbox.go:232] Both sandbox container and checkpoint for id "1d31a03a41abba2ed003ce96f00b6dd89c71e7d0c56fac224e6c1e8f2abeb95e" could not be found. Proceed without further sandbox information. 2019-11-04T19:15:21.772 controller-1 kubelet[88595]: info W1104 19:15:21.772220 88595 docker_sandbox.go:232] Both sandbox container and checkpoint for id "0f0e84cd9ed8c7ff69d1176eeea53cfc26857255d423ce89818b076ca8e188c9" could not be found. Proceed without further sandbox information. 
2019-11-04T19:15:21.772 controller-1 kubelet[88595]: info W1104 19:15:21.772366 88595 cni.go:328] CNI failed to retrieve network namespace path: Error: No such container: 1d31a03a41abba2ed003ce96f00b6dd89c71e7d0c56fac224e6c1e8f2abeb95e 2019-11-04T19:15:21.830 controller-1 kubelet[88595]: info E1104 19:15:21.830099 88595 cni.go:379] Error deleting _/1d31a03a41abba2ed003ce96f00b6dd89c71e7d0c56fac224e6c1e8f2abeb95e from network multus/multus-cni-network: Multus: error in invoke Conflist Del - "chain": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable 2019-11-04T19:15:21.830 controller-1 kubelet[88595]: info W1104 19:15:21.830566 88595 cni.go:328] CNI failed to retrieve network namespace path: Error: No such container: 0f0e84cd9ed8c7ff69d1176eeea53cfc26857255d423ce89818b076ca8e188c9 2019-11-04T19:15:21.830 controller-1 kubelet[88595]: info E1104 19:15:21.830631 88595 remote_runtime.go:128] StopPodSandbox "1d31a03a41abba2ed003ce96f00b6dd89c71e7d0c56fac224e6c1e8f2abeb95e" from runtime service failed: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "_" network: Multus: error in invoke Conflist Del - "chain": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable 2019-11-04T19:15:21.830 controller-1 kubelet[88595]: info E1104 19:15:21.830687 88595 kuberuntime_manager.go:878] Failed to stop sandbox {"docker" "1d31a03a41abba2ed003ce96f00b6dd89c71e7d0c56fac224e6c1e8f2abeb95e"} 2019-11-04T19:15:21.830 controller-1 kubelet[88595]: info E1104 19:15:21.830758 88595 kuberuntime_manager.go:658] killPodWithSyncResult failed: failed to "KillPodSandbox" for "05acd7aa-9952-48d4-a3a0-948b42b50d0a" with KillPodSandboxError: "rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"_\" network: Multus: error in invoke Conflist Del - \"chain\": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable" 2019-11-04T19:15:21.830 controller-1 kubelet[88595]: info E1104 19:15:21.830820 88595 pod_workers.go:191] Error syncing pod 05acd7aa-9952-48d4-a3a0-948b42b50d0a ("kube-proxy-4hlww_kube-system(05acd7aa-9952-48d4-a3a0-948b42b50d0a)"), skipping: failed to "KillPodSandbox" for "05acd7aa-9952-48d4-a3a0-948b42b50d0a" with KillPodSandboxError: "rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"_\" network: Multus: error in invoke Conflist Del - \"chain\": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable" 2019-11-04T19:15:21.882 controller-1 kubelet[88595]: info E1104 19:15:21.882195 88595 cni.go:379] Error deleting _/0f0e84cd9ed8c7ff69d1176eeea53cfc26857255d423ce89818b076ca8e188c9 from network multus/multus-cni-network: Multus: error in invoke Conflist Del - "chain": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: 
connect: network is unreachable 2019-11-04T19:15:21.882 controller-1 kubelet[88595]: info E1104 19:15:21.882669 88595 remote_runtime.go:128] StopPodSandbox "0f0e84cd9ed8c7ff69d1176eeea53cfc26857255d423ce89818b076ca8e188c9" from runtime service failed: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "_" network: Multus: error in invoke Conflist Del - "chain": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable 2019-11-04T19:15:21.882 controller-1 kubelet[88595]: info E1104 19:15:21.882702 88595 kuberuntime_manager.go:878] Failed to stop sandbox {"docker" "0f0e84cd9ed8c7ff69d1176eeea53cfc26857255d423ce89818b076ca8e188c9"} 2019-11-04T19:15:21.882 controller-1 kubelet[88595]: info E1104 19:15:21.882745 88595 kuberuntime_manager.go:658] killPodWithSyncResult failed: failed to "KillPodSandbox" for "69cee97f-bae5-47b1-96de-b449df3dfe40" with KillPodSandboxError: "rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"_\" network: Multus: error in invoke Conflist Del - \"chain\": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable" 2019-11-04T19:15:21.882 controller-1 kubelet[88595]: info E1104 19:15:21.882784 88595 pod_workers.go:191] Error syncing pod 69cee97f-bae5-47b1-96de-b449df3dfe40 ("calico-node-jd4qm_kube-system(69cee97f-bae5-47b1-96de-b449df3dfe40)"), skipping: failed to "KillPodSandbox" for "69cee97f-bae5-47b1-96de-b449df3dfe40" with KillPodSandboxError: "rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"_\" network: Multus: error in invoke Conflist Del - \"chain\": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable" 2019-11-04T19:15:21.922 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/83737bb0-d735-4254-85ce-49f1f454881d/volumes/kubernetes.io~secret/mon-filebeat-token-z6rf8. 
2019-11-04T19:15:22.219 controller-1 kubelet[88595]: info W1104 19:15:22.219613 88595 docker_sandbox.go:394] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "mon-filebeat-bppwv_monitor": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" 2019-11-04T19:15:22.221 controller-1 kubelet[88595]: info W1104 19:15:22.221146 88595 docker_sandbox.go:394] failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod "mon-filebeat-bppwv_monitor": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" 2019-11-04T19:15:22.228 controller-1 kubelet[88595]: info W1104 19:15:22.228165 88595 kuberuntime_container.go:696] No ref for container {"docker" "9f7b9d00c498b9312a3a1d34a5760b2a135ecb5070cc0ad53a155ffc3506d218"} 2019-11-04T19:15:22.234 controller-1 kubelet[88595]: info W1104 19:15:22.234851 88595 kuberuntime_container.go:696] No ref for container {"docker" "267e3120a93108b1b4cefa3f11edab9c326ecb9ac22ae5464f8ac8312a91d16f"} 2019-11-04T19:15:22.297 controller-1 containerd[12214]: info time="2019-11-04T19:15:22.297733579Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/04366bc2f82edd18e5b9a0ebd1e79f8d576ccc0593a88ee57994df7d38575850/shim.sock" debug=false pid=89528 2019-11-04T19:15:22.298 controller-1 containerd[12214]: info time="2019-11-04T19:15:22.298149246Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/975c91862ebce5a9c65da02f399c37c1c2957fe5407c97442b28d1c042215a92/shim.sock" debug=false pid=89529 2019-11-04T19:15:22.324 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c/volumes/kubernetes.io~secret/mon-metricbeat-token-5vdfc. 
2019-11-04T19:15:22.372 controller-1 kubelet[88595]: info W1104 19:15:22.372652 88595 cni.go:328] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" 2019-11-04T19:15:22.427 controller-1 kubelet[88595]: info E1104 19:15:22.427728 88595 cni.go:379] Error deleting monitor_mon-filebeat-bppwv/ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41 from network multus/multus-cni-network: Multus: error in invoke Conflist Del - "chain": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable 2019-11-04T19:15:22.428 controller-1 kubelet[88595]: info E1104 19:15:22.428268 88595 remote_runtime.go:128] StopPodSandbox "ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" from runtime service failed: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "mon-filebeat-bppwv_monitor" network: Multus: error in invoke Conflist Del - "chain": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable 2019-11-04T19:15:22.428 controller-1 kubelet[88595]: info E1104 19:15:22.428321 88595 kuberuntime_manager.go:878] Failed to stop sandbox {"docker" "ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41"} 2019-11-04T19:15:22.429 controller-1 kubelet[88595]: info W1104 19:15:22.429817 88595 cni.go:328] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" 2019-11-04T19:15:22.485 controller-1 kubelet[88595]: info E1104 19:15:22.485494 88595 cni.go:379] Error deleting monitor_mon-filebeat-bppwv/f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436 from network multus/multus-cni-network: Multus: error in invoke Conflist Del - "chain": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable 2019-11-04T19:15:22.485 controller-1 kubelet[88595]: info E1104 19:15:22.485860 88595 remote_runtime.go:128] StopPodSandbox "f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" from runtime service failed: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "mon-filebeat-bppwv_monitor" network: Multus: error in invoke Conflist Del - "chain": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable 2019-11-04T19:15:22.485 controller-1 kubelet[88595]: info E1104 19:15:22.485898 88595 kuberuntime_manager.go:878] Failed to stop sandbox {"docker" "f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436"} 2019-11-04T19:15:22.485 controller-1 kubelet[88595]: info E1104 19:15:22.485957 88595 kuberuntime_manager.go:658] killPodWithSyncResult failed: failed to "KillPodSandbox" for "83737bb0-d735-4254-85ce-49f1f454881d" with KillPodSandboxError: "rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod 
\"mon-filebeat-bppwv_monitor\" network: Multus: error in invoke Conflist Del - \"chain\": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable" 2019-11-04T19:15:22.485 controller-1 kubelet[88595]: info E1104 19:15:22.485983 88595 pod_workers.go:191] Error syncing pod 83737bb0-d735-4254-85ce-49f1f454881d ("mon-filebeat-bppwv_monitor(83737bb0-d735-4254-85ce-49f1f454881d)"), skipping: failed to "KillPodSandbox" for "83737bb0-d735-4254-85ce-49f1f454881d" with KillPodSandboxError: "rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"mon-filebeat-bppwv_monitor\" network: Multus: error in invoke Conflist Del - \"chain\": error in getting result from DelNetworkList: error getting ClusterInformation: Get https://[fd00:207::1]:443/apis/crd.projectcalico.org/v1/clusterinformations/default: dial tcp [fd00:207::1]:443: connect: network is unreachable" 2019-11-04T19:15:22.525 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c/volumes/kubernetes.io~secret/modules. 2019-11-04T19:15:22.550 controller-1 kubelet[88595]: info W1104 19:15:22.550484 88595 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-89711.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-89711.scope: no such file or directory 2019-11-04T19:15:22.550 controller-1 kubelet[88595]: info W1104 19:15:22.550514 88595 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-89711.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-89711.scope: no such file or directory 2019-11-04T19:15:22.557 controller-1 kubelet[88595]: info W1104 19:15:22.557492 88595 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-89711.scope": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent: no such file or directory 2019-11-04T19:15:22.557 controller-1 kubelet[88595]: info W1104 19:15:22.557555 88595 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-89711.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-89711.scope: no such file or directory 2019-11-04T19:15:22.557 controller-1 kubelet[88595]: info W1104 19:15:22.557599 88595 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-89711.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-89711.scope: no such file or directory 2019-11-04T19:15:22.609 controller-1 kubelet[88595]: info E1104 19:15:22.609604 88595 secret.go:198] Couldn't get secret monitor/mon-metricbeat-daemonset-config: failed to sync secret cache: timed out waiting for the condition 2019-11-04T19:15:22.609 controller-1 kubelet[88595]: info E1104 19:15:22.609703 88595 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/secret/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c-config\" (\"b21e12f7-bf6b-435b-84f6-955f2ffcbb7c\")" failed. No retries permitted until 2019-11-04 19:15:23.109665331 +0000 UTC m=+22.982355263 (durationBeforeRetry 500ms). 
Error: "MountVolume.SetUp failed for volume \"config\" (UniqueName: \"kubernetes.io/secret/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c-config\") pod \"mon-metricbeat-tqzgn\" (UID: \"b21e12f7-bf6b-435b-84f6-955f2ffcbb7c\") : failed to sync secret cache: timed out waiting for the condition" 2019-11-04T19:15:22.610 controller-1 kubelet[88595]: info E1104 19:15:22.610075 88595 configmap.go:203] Couldn't get configMap monitor/mon-metricbeat: failed to sync configmap cache: timed out waiting for the condition 2019-11-04T19:15:22.610 controller-1 kubelet[88595]: info E1104 19:15:22.610154 88595 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/configmap/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c-setupscript\" (\"b21e12f7-bf6b-435b-84f6-955f2ffcbb7c\")" failed. No retries permitted until 2019-11-04 19:15:23.110124264 +0000 UTC m=+22.982814195 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"setupscript\" (UniqueName: \"kubernetes.io/configmap/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c-setupscript\") pod \"mon-metricbeat-tqzgn\" (UID: \"b21e12f7-bf6b-435b-84f6-955f2ffcbb7c\") : failed to sync configmap cache: timed out waiting for the condition" 2019-11-04T19:15:22.620 controller-1 containerd[12214]: info time="2019-11-04T19:15:22.620435985Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8413de6d119f74c4146b9a44ae1d7d40ae4e60408d4699a1f92bdf7d37b58f60/shim.sock" debug=false pid=89747 2019-11-04T19:15:22.627 controller-1 containerd[12214]: info time="2019-11-04T19:15:22.627003935Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e83febd98b2ab2d495f8d6e98a5f6526ecae7883f7baab5b7f3b53c26c7acd1b/shim.sock" debug=false pid=89763 2019-11-04T19:15:22.990 controller-1 containerd[12214]: info time="2019-11-04T19:15:22.990270051Z" level=info msg="shim reaped" id=e83febd98b2ab2d495f8d6e98a5f6526ecae7883f7baab5b7f3b53c26c7acd1b 2019-11-04T19:15:23.000 controller-1 dockerd[12332]: info time="2019-11-04T19:15:23.000242871Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:15:23.103 controller-1 kubelet[88595]: info E1104 19:15:23.103372 88595 kubelet.go:1664] Failed creating a mirror pod for "kube-controller-manager-controller-1_kube-system(fa004662b97422f9bda923908ff7217d)": pods "kube-controller-manager-controller-1" already exists 2019-11-04T19:15:23.131 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c/volumes/kubernetes.io~secret/config. 
2019-11-04T19:15:23.153 controller-1 containerd[12214]: info time="2019-11-04T19:15:23.153146373Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ab412a43fefab551e2dad224bd0244d696eadbb3d3a633fbcac057f0df6dbdf8/shim.sock" debug=false pid=89927 2019-11-04T19:15:23.280 controller-1 kubelet[88595]: info W1104 19:15:23.280426 88595 kuberuntime_container.go:696] No ref for container {"docker" "182a6c758215a090c08eda8a23ca05c9cd0a0e48c4c3f509d9652c220ccb59da"} 2019-11-04T19:15:23.289 controller-1 containerd[12214]: info time="2019-11-04T19:15:23.289689712Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dc372e0e03d00485a56db70fb4135d6979286038ee54ed9e9cf0cbfdd73b3d5e/shim.sock" debug=false pid=89966 2019-11-04T19:15:23.290 controller-1 kubelet[88595]: info W1104 19:15:23.290384 88595 pod_container_deletor.go:75] Container "ab412a43fefab551e2dad224bd0244d696eadbb3d3a633fbcac057f0df6dbdf8" not found in pod's containers 2019-11-04T19:15:23.297 controller-1 kubelet[88595]: info W1104 19:15:23.297945 88595 cni.go:328] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" 2019-11-04T19:15:23.303 controller-1 kubelet[88595]: info E1104 19:15:23.303214 88595 kubelet.go:1664] Failed creating a mirror pod for "kube-scheduler-controller-1_kube-system(a01478acdc49404c8f4dffa14b34b63d)": pods "kube-scheduler-controller-1" already exists 2019-11-04T19:15:23.335 controller-1 containerd[12214]: info time="2019-11-04T19:15:23.335330374Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3e364742b3b626b2dcb96429df1d5865721a4f14252fa135ddedeba48ea256b6/shim.sock" debug=false pid=90012 2019-11-04T19:15:23.344 controller-1 containerd[12214]: info time="2019-11-04T19:15:23.344200630Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5f47254ffa37aba58ba32f4fa4ac4721026dcaa6cbbb32a454173de7781217ed/shim.sock" debug=false pid=90038 2019-11-04T19:15:23.351 controller-1 containerd[12214]: info time="2019-11-04T19:15:23.351099206Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9b81834007a998ad66d147775c3af35f6c85fca8a5652fdeb36071844d9d1dfd/shim.sock" debug=false pid=90056 2019-11-04T19:15:23.388 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.387 [INFO][90086] plugin.go 442: Extracted identifiers ContainerID="ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:23.395 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.395 [WARNING][90086] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T19:15:23.395 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.395 [INFO][90086] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--filebeat--bppwv-eth0", GenerateName:"mon-filebeat-", Namespace:"monitor", SelfLink:"", UID:"83737bb0-d735-4254-85ce-49f1f454881d", ResourceVersion:"8160120", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63707628354, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"controller-revision-hash":"84998b4cf7", "pod-template-generation":"1", "release":"mon-filebeat", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"mon-filebeat", "app":"filebeat"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-filebeat-bppwv", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e300/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-filebeat"}, InterfaceName:"calid0285410ad8", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:15:23.395 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.395 [INFO][90086] k8s.go 477: Releasing IP address(es) ContainerID="ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" 2019-11-04T19:15:23.395 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.395 [INFO][90086] utils.go 171: Calico CNI releasing IP address ContainerID="ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" 2019-11-04T19:15:23.503 controller-1 kubelet[88595]: info E1104 19:15:23.503464 88595 kubelet.go:1664] Failed creating a mirror pod for "kube-apiserver-controller-1_kube-system(ebd94c790b626607a637c536da426b67)": pods "kube-apiserver-controller-1" already exists 2019-11-04T19:15:23.505 controller-1 kubelet[88595]: info W1104 19:15:23.504994 88595 docker_sandbox.go:232] Both sandbox container and checkpoint for id "fdc3bca15c212247175b4571f46b8c8a38dc38a371c80a0d7a7be03ed5c19c2d" could not be found. Proceed without further sandbox information. 
2019-11-04T19:15:23.505 controller-1 kubelet[88595]: info W1104 19:15:23.505217 88595 cni.go:328] CNI failed to retrieve network namespace path: Error: No such container: fdc3bca15c212247175b4571f46b8c8a38dc38a371c80a0d7a7be03ed5c19c2d 2019-11-04T19:15:23.547 controller-1 kubelet[88595]: info E1104 19:15:23.546969 88595 cni.go:379] Error deleting _/fdc3bca15c212247175b4571f46b8c8a38dc38a371c80a0d7a7be03ed5c19c2d from network multus/multus-cni-network: Multus: error in invoke Conflist Del - "chain": error in getting result from DelNetworkList: fork/exec /usr/libexec/cni/calico: text file busy 2019-11-04T19:15:23.547 controller-1 kubelet[88595]: info E1104 19:15:23.547509 88595 remote_runtime.go:128] StopPodSandbox "fdc3bca15c212247175b4571f46b8c8a38dc38a371c80a0d7a7be03ed5c19c2d" from runtime service failed: rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod "_" network: Multus: error in invoke Conflist Del - "chain": error in getting result from DelNetworkList: fork/exec /usr/libexec/cni/calico: text file busy 2019-11-04T19:15:23.547 controller-1 kubelet[88595]: info E1104 19:15:23.547542 88595 kuberuntime_manager.go:878] Failed to stop sandbox {"docker" "fdc3bca15c212247175b4571f46b8c8a38dc38a371c80a0d7a7be03ed5c19c2d"} 2019-11-04T19:15:23.547 controller-1 kubelet[88595]: info E1104 19:15:23.547583 88595 kuberuntime_manager.go:658] killPodWithSyncResult failed: failed to "KillPodSandbox" for "ebd94c790b626607a637c536da426b67" with KillPodSandboxError: "rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"_\" network: Multus: error in invoke Conflist Del - \"chain\": error in getting result from DelNetworkList: fork/exec /usr/libexec/cni/calico: text file busy" 2019-11-04T19:15:23.547 controller-1 kubelet[88595]: info E1104 19:15:23.547603 88595 pod_workers.go:191] Error syncing pod ebd94c790b626607a637c536da426b67 ("kube-apiserver-controller-1_kube-system(ebd94c790b626607a637c536da426b67)"), skipping: failed to "KillPodSandbox" for "ebd94c790b626607a637c536da426b67" with KillPodSandboxError: "rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"_\" network: Multus: error in invoke Conflist Del - \"chain\": error in getting result from DelNetworkList: fork/exec /usr/libexec/cni/calico: text file busy" 2019-11-04T19:15:23.552 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.550 [INFO][90125] ipam_plugin.go 299: Releasing address using handleID ContainerID="ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" HandleID="chain.ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" Workload="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:23.552 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.552 [INFO][90125] ipam.go 1145: Releasing all IPs with handle 'chain.ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41' 2019-11-04T19:15:23.597 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.597 [INFO][90125] ipam_plugin.go 308: Released address using handleID ContainerID="ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" HandleID="chain.ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" Workload="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:23.597 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.597 [INFO][90125] ipam_plugin.go 317: Releasing address using workloadID ContainerID="ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" 
HandleID="chain.ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" Workload="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:23.597 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.597 [INFO][90125] ipam.go 1145: Releasing all IPs with handle 'monitor.mon-filebeat-bppwv' 2019-11-04T19:15:23.602 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.602 [INFO][90086] k8s.go 481: Cleaning up netns ContainerID="ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" 2019-11-04T19:15:23.602 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.602 [INFO][90086] k8s.go 493: Teardown processing complete. ContainerID="ed25049ba30fedf59286fcfc8a869fa3d90f295936a11ca2f7c100358b44dc41" 2019-11-04T19:15:23.608 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c/volume-subpaths/setupscript/setup-script/0. 2019-11-04T19:15:23.609 controller-1 kubelet[88595]: info W1104 19:15:23.609435 88595 cni.go:328] CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container "f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" 2019-11-04T19:15:23.647 controller-1 containerd[12214]: info time="2019-11-04T19:15:23.647110366Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6e72b5c8cfa4fecca90ca6e5d9aaa9f017d04a11fe7ab493ada71eb21745657c/shim.sock" debug=false pid=90329 2019-11-04T19:15:23.680 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.680 [INFO][90352] plugin.go 442: Extracted identifiers ContainerID="f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:23.686 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.686 [WARNING][90352] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T19:15:23.686 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.686 [INFO][90352] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--filebeat--bppwv-eth0", GenerateName:"mon-filebeat-", Namespace:"monitor", SelfLink:"", UID:"83737bb0-d735-4254-85ce-49f1f454881d", ResourceVersion:"8160120", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63707628354, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"filebeat", "controller-revision-hash":"84998b4cf7", "pod-template-generation":"1", "release":"mon-filebeat", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"mon-filebeat"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-filebeat-bppwv", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e300/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-filebeat"}, InterfaceName:"calid0285410ad8", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:15:23.686 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.686 [INFO][90352] k8s.go 477: Releasing IP address(es) ContainerID="f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" 2019-11-04T19:15:23.686 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.686 [INFO][90352] utils.go 171: Calico CNI releasing IP address ContainerID="f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" 2019-11-04T19:15:23.706 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.706 [INFO][90385] ipam_plugin.go 299: Releasing address using handleID ContainerID="f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" HandleID="chain.f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" Workload="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:23.706 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.706 [INFO][90385] ipam.go 1145: Releasing all IPs with handle 'chain.f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436' 2019-11-04T19:15:23.712 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.711 [WARNING][90385] ipam_plugin.go 306: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" HandleID="chain.f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" Workload="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:23.712 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.712 [INFO][90385] ipam_plugin.go 317: Releasing address using workloadID ContainerID="f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" HandleID="chain.f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" Workload="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:23.712 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.712 [INFO][90385] ipam.go 1145: Releasing all IPs with handle 'monitor.mon-filebeat-bppwv' 2019-11-04T19:15:23.714 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.714 [INFO][90352] k8s.go 481: Cleaning up netns ContainerID="f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" 2019-11-04T19:15:23.714 controller-1 kubelet[88595]: info 2019-11-04 19:15:23.714 [INFO][90352] k8s.go 493: Teardown processing complete. ContainerID="f96e192a17db1d95be8ed6e646759f74174d83410dab43bfa399a0702f609436" 2019-11-04T19:15:23.725 controller-1 kubelet[88595]: info W1104 19:15:23.725892 88595 kuberuntime_container.go:696] No ref for container {"docker" "2812d8e520fb02886010ffcf9f4d4d0f50d27de97e0a063232a5e804f7d6acef"} 2019-11-04T19:15:23.732 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c/volume-subpaths/setupscript/setup-script/0. 2019-11-04T19:15:23.786 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c/volume-subpaths/config/setup-script/1. 2019-11-04T19:15:23.794 controller-1 dockerd[12332]: info time="2019-11-04T19:15:23.794215325Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T19:15:23.803 controller-1 containerd[12214]: info time="2019-11-04T19:15:23.803134463Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602/shim.sock" debug=false pid=90479 2019-11-04T19:15:23.822 controller-1 containerd[12214]: info time="2019-11-04T19:15:23.822927123Z" level=info msg="shim reaped" id=dc372e0e03d00485a56db70fb4135d6979286038ee54ed9e9cf0cbfdd73b3d5e 2019-11-04T19:15:23.831 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c/volume-subpaths/config/setup-script/1. 
2019-11-04T19:15:23.833 controller-1 dockerd[12332]: info time="2019-11-04T19:15:23.833036281Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:15:23.878 controller-1 kubelet[88595]: info W1104 19:15:23.878134 88595 watcher.go:87] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/run-90452.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/run-90452.scope: no such file or directory 2019-11-04T19:15:23.878 controller-1 kubelet[88595]: info W1104 19:15:23.878163 88595 watcher.go:87] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/run-90452.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/run-90452.scope: no such file or directory 2019-11-04T19:15:23.883 controller-1 kubelet[88595]: info W1104 19:15:23.883445 88595 watcher.go:87] Error while processing event ("/sys/fs/cgroup/memory/system.slice/run-90452.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/run-90452.scope: no such file or directory 2019-11-04T19:15:23.883 controller-1 kubelet[88595]: info W1104 19:15:23.883490 88595 watcher.go:87] Error while processing event ("/sys/fs/cgroup/devices/system.slice/run-90452.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/run-90452.scope: no such file or directory 2019-11-04T19:15:23.883 controller-1 kubelet[88595]: info W1104 19:15:23.883504 88595 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-90452.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-90452.scope: no such file or directory 2019-11-04T19:15:23.911 controller-1 containerd[12214]: info time="2019-11-04T19:15:23.911099584Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f22578c679e4939aa2d37d08169f6063b2d3974bd8c6f82c71d626c81c177f0e/shim.sock" debug=false pid=90543 2019-11-04T19:15:24.423 controller-1 containerd[12214]: info time="2019-11-04T19:15:24.423002175Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d64dccdb3d14c8515d498fc865c7eb84ff732186a420707343a13d320c635a6f/shim.sock" debug=false pid=90674 2019-11-04T19:15:24.433 controller-1 containerd[12214]: info time="2019-11-04T19:15:24.433805068Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1c85124078b1534f2287257ae80105aab6005341578c5c186e0b71372aae87a7/shim.sock" debug=false pid=90694 2019-11-04T19:15:24.657 controller-1 containerd[12214]: info time="2019-11-04T19:15:24.656941698Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/95a855bae8841e4a72e16624ec30ed6ca166e0a6dbfd5e1da82348070ee5ce10/shim.sock" debug=false pid=90787 2019-11-04T19:15:25.413 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 2.9% (avg per cpu); cpus: 36, Platform: 2.8% (Base: 1.8, k8s-system: 0.9), k8s-addon: 0.1 2019-11-04T19:15:25.449 controller-1 collectd[12276]: info platform memory usage: Usage: 0.9%; Reserved: 126574.7 MiB, Platform: 1116.1 MiB (Base: 1060.0, k8s-system: 56.0), k8s-addon: 11.7 2019-11-04T19:15:25.449 controller-1 collectd[12276]: info 4K memory usage: Anon: 0.9%, Anon: 1135.1 MiB, cgroup-rss: 1131.2 MiB, Avail: 125439.6 MiB, Total: 126574.7 MiB 2019-11-04T19:15:25.449 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 1.19%, Anon: 757.4 MiB, 
Avail: 62737.8 MiB, Total: 63495.2 MiB 2019-11-04T19:15:25.449 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 0.59%, Anon: 377.7 MiB, Avail: 63377.3 MiB, Total: 63755.0 MiB 2019-11-04T19:15:29.767 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.766 [INFO][91378] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"monitor", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"mon-filebeat-bppwv", ContainerID:"ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602"}} 2019-11-04T19:15:29.784 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.783 [INFO][91378] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-mon--filebeat--bppwv-eth0 mon-filebeat- monitor 83737bb0-d735-4254-85ce-49f1f454881d 8162831 0 2019-10-25 19:25:54 +0000 UTC map[pod-template-generation:1 release:mon-filebeat projectcalico.org/namespace:monitor projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:mon-filebeat app:filebeat controller-revision-hash:84998b4cf7] map[] [] nil [] } {k8s controller-1 mon-filebeat-bppwv eth0 [fd00:206::a4ce:fec1:5423:e300/128] [] [kns.monitor ksa.monitor.mon-filebeat] calid0285410ad8 []}} ContainerID="ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" Namespace="monitor" Pod="mon-filebeat-bppwv" WorkloadEndpoint="controller--1-k8s-mon--filebeat--bppwv-" 2019-11-04T19:15:29.784 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.783 [INFO][91378] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" Namespace="monitor" Pod="mon-filebeat-bppwv" WorkloadEndpoint="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:29.786 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.786 [INFO][91378] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:monitor,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/monitor,UID:85834187-55ff-4686-b97c-c3f524d37f83,ResourceVersion:46120,Generation:0,CreationTimestamp:2019-10-25 19:07:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T19:15:29.788 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.788 [INFO][91378] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:mon-filebeat-bppwv,GenerateName:mon-filebeat-,Namespace:monitor,SelfLink:/api/v1/namespaces/monitor/pods/mon-filebeat-bppwv,UID:83737bb0-d735-4254-85ce-49f1f454881d,ResourceVersion:8162831,Generation:0,CreationTimestamp:2019-10-25 19:25:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: filebeat,controller-revision-hash: 84998b4cf7,pod-template-generation: 1,release: mon-filebeat,},Annotations:map[string]string{checksum/secret: 1cd06be13381f1efe215f5ef7f092472c18fcd9f6cc4d362b2856a704aed2c29,cni.projectcalico.org/podIP: fd00:206::a4ce:fec1:5423:e300/128,k8s.v1.cni.cncf.io/networks-status: [{ 2019-11-04T19:15:29.788 controller-1 kubelet[88595]: info "name": "chain", 2019-11-04T19:15:29.788 controller-1 kubelet[88595]: info "ips": [ 2019-11-04T19:15:29.788 controller-1 kubelet[88595]: info 
"fd00:206::a4ce:fec1:5423:e300" 2019-11-04T19:15:29.788 controller-1 kubelet[88595]: info ], 2019-11-04T19:15:29.788 controller-1 kubelet[88595]: info "default": true, 2019-11-04T19:15:29.788 controller-1 kubelet[88595]: info "dns": {} 2019-11-04T19:15:29.788 controller-1 kubelet[88595]: info }],},OwnerReferences:[{apps/v1 DaemonSet mon-filebeat 751bb8d2-e9db-43af-a38c-60271946311c 0xc00004495c 0xc00004495d}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{varlog {HostPathVolumeSource{Path:/var/log,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {varlibdockercontainers {&HostPathVolumeSource{Path:/var/lib/docker/containers,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {filebeat-config {nil nil nil nil nil &SecretVolumeSource{SecretName:mon-filebeat,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {data {&HostPathVolumeSource{Path:/var/lib/filebeat,Type:*DirectoryOrCreate,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {setupscript {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:mon-filebeat,},Items:[],DefaultMode:*493,Optional:nil,} nil nil nil nil nil nil nil nil}} {mon-filebeat-token-z6rf8 {nil nil nil nil nil &SecretVolumeSource{SecretName:mon-filebeat-token-z6rf8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{filebeat docker.elastic.co/beats/filebeat-oss:7.4.0 [] [-e] [{ 0 5066 TCP }] [] [{POD_NAMESPACE EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {NODE_NAME &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {OUTPUT_ELASTICSEARCH_ENABLED false nil} {OUTPUT_ELASTICSEARCH_HOSTS [http://mon-elasticsearch-client:9200] nil} {OUTPUT_ELASTICSEARCH_ILM.PATTERN 000001 nil} {OUTPUT_ELASTICSEARCH_INDEX ${INDEX_NAME}-%{+yyyy.MM.dd} nil} {SYSTEM_NAME_FOR_INDEX -yow-cgcs-wildcat-35-60 nil} {INDEX_PATTERN filebeat-%{[agent.version]}-yow-cgcs-wildcat-35-60-* nil} {INDEX_NAME filebeat-%{[agent.version]}-yow-cgcs-wildcat-35-60 nil}] {map[cpu:{{80 -3} {} 80m DecimalSI} memory:{{268435456 0} {} BinarySI}] map[cpu:{{40 -3} {} 40m DecimalSI} memory:{{268435456 0} {} BinarySI}]} [{filebeat-config true /usr/share/filebeat/filebeat.yml filebeat.yml } {data false /usr/share/filebeat/data } {varlog true /var/log } {varlibdockercontainers true /var/lib/docker/containers } {setupscript false /usr/share/filebeat/setup-script.sh setup-script.sh } {mon-filebeat-token-z6rf8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false} {mon-filebeat-prometheus-exporter trustpilot/beat-exporter:0.1.1 [] [] [{ 0 9479 TCP }] [] [] {map[] map[]} [{mon-filebeat-token-z6rf8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*60,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:mon-filebeat,DeprecatedServiceAccount:mon-filebeat,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[{[] [{metadata.name In [controller-1]}]}],},PreferredDuringSchedulingIgnoredDuringExecution:[],},PodAffinity:nil,PodAntiAffinity:nil,},SchedulerName:default-scheduler,InitContainers:[{setup-script docker.elastic.co/beats/filebeat-oss:7.4.0 [/bin/bash -c /usr/share/filebeat/setup-script.sh] [] [] [] [{POD_NAMESPACE EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {NODE_NAME &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {OUTPUT_ELASTICSEARCH_ENABLED false nil} {OUTPUT_ELASTICSEARCH_HOSTS [http://mon-elasticsearch-client:9200] nil} {OUTPUT_ELASTICSEARCH_ILM.PATTERN 000001 nil} {OUTPUT_ELASTICSEARCH_INDEX ${INDEX_NAME}-%{+yyyy.MM.dd} nil} {SYSTEM_NAME_FOR_INDEX -yow-cgcs-wildcat-35-60 nil} {INDEX_PATTERN filebeat-%{[agent.version]}-yow-cgcs-wildcat-35-60-* nil} {INDEX_NAME filebeat-%{[agent.version]}-yow-cgcs-wildcat-35-60 nil}] {map[] map[]} [{setupscript false /usr/share/filebeat/setup-script.sh setup-script.sh } {filebeat-config true /usr/share/filebeat/filebeat.yml filebeat.yml } {mon-filebeat-token-z6rf8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],AutomountServiceAccountToken:nil,Tolerations:[{services Equal disabled NoExecute } {node.kubernetes.io/not-ready Exists NoExecute } {node.kubernetes.io/unreachable Exists NoExecute } {node.kubernetes.io/disk-pressure Exists NoSchedule } {node.kubernetes.io/memory-pressure Exists NoSchedule } {node.kubernetes.io/pid-pressure Exists NoSchedule } {node.kubernetes.io/unschedulable Exists NoSchedule }],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:25:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 18:31:29 +0000 UTC ContainersNotReady containers with unready status: [filebeat mon-filebeat-prometheus-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:21 +0000 UTC ContainersNotReady containers with unready status: [filebeat mon-filebeat-prometheus-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-10-25 19:25:54 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-10-25 19:25:54 +0000 UTC,ContainerStatuses:[{filebeat {nil nil ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2019-11-04 18:31:29 +0000 UTC,FinishedAt:2019-11-04 19:10:35 +0000 UTC,ContainerID:docker://5e492954af5b8adb96a1f47d5121f7cece06db7ecec2ca58b65b0aadbe8896d1,}} {nil nil nil} false 109 docker.elastic.co/beats/filebeat-oss:7.4.0 
docker-pullable://docker.elastic.co/beats/filebeat-oss@sha256:ba7b786c8372ed18b58bea4c9c2e6192997bc3251b4c9d42eb1e2a60e7bb02d8 docker://5e492954af5b8adb96a1f47d5121f7cece06db7ecec2ca58b65b0aadbe8896d1} {mon-filebeat-prometheus-exporter {nil nil &ContainerStateTerminated{ExitCode:2,Signal:0,Reason:Error,Message:,StartedAt:2019-11-04 18:25:33 +0000 UTC,FinishedAt:2019-11-04 19:10:34 +0000 UTC,ContainerID:docker://92dc553c15ea21e4ff78c1437005ab191d5e554b8c6f0fc9567d7c92b67d4e75,}} {nil nil nil} false 25 trustpilot/beat-exporter:0.1.1 docker-pullable://trustpilot/beat-exporter@sha256:78640014debdeed14867b4dbd8d081e38df37f438e35994624458c80c7681eb7 docker://92dc553c15ea21e4ff78c1437005ab191d5e554b8c6f0fc9567d7c92b67d4e75}],QOSClass:Burstable,InitContainerStatuses:[{setup-script {nil nil ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2019-11-04 18:24:41 +0000 UTC,FinishedAt:2019-11-04 18:25:32 +0000 UTC,ContainerID:docker://2812d8e520fb02886010ffcf9f4d4d0f50d27de97e0a063232a5e804f7d6acef,}} {nil nil nil} true 26 docker.elastic.co/beats/filebeat-oss:7.4.0 docker-pullable://docker.elastic.co/beats/filebeat-oss@sha256:ba7b786c8372ed18b58bea4c9c2e6192997bc3251b4c9d42eb1e2a60e7bb02d8 docker://2812d8e520fb02886010ffcf9f4d4d0f50d27de97e0a063232a5e804f7d6acef}],NominatedNodeName:,},} 2019-11-04T19:15:29.805 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.805 [INFO][91409] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" HandleID="chain.ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" Workload="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:29.814 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.813 [INFO][91409] ipam_plugin.go 220: Calico CNI IPAM handle=chain.ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602 ContainerID="ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" HandleID="chain.ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" Workload="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:29.814 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.813 [INFO][91409] ipam_plugin.go 230: Auto assigning IP ContainerID="ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" HandleID="chain.ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" Workload="controller--1-k8s-mon--filebeat--bppwv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc00002b800), Attrs:map[string]string{"node":"controller-1", "pod":"mon-filebeat-bppwv", "namespace":"monitor"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T19:15:29.814 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.814 [INFO][91409] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T19:15:29.818 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.818 [INFO][91409] ipam.go 309: Looking up existing affinities for host handle="chain.ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" host="controller-1" 2019-11-04T19:15:29.822 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.822 [INFO][91409] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" host="controller-1" 2019-11-04T19:15:29.823 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.823 [INFO][91409] ipam.go 131: Attempting to 
load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:15:29.825 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.825 [INFO][91409] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:15:29.826 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.825 [INFO][91409] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" host="controller-1" 2019-11-04T19:15:29.827 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.827 [INFO][91409] ipam.go 1244: Creating new handle: chain.ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602 2019-11-04T19:15:29.830 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.830 [INFO][91409] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" host="controller-1" 2019-11-04T19:15:29.832 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.832 [INFO][91409] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e312/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" host="controller-1" 2019-11-04T19:15:29.832 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.832 [INFO][91409] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e312/122] handle="chain.ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" host="controller-1" 2019-11-04T19:15:29.833 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.833 [INFO][91409] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e312/122] handle="chain.ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" host="controller-1" 2019-11-04T19:15:29.833 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.833 [INFO][91409] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e312/122] ContainerID="ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" HandleID="chain.ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" Workload="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:29.833 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.833 [INFO][91409] ipam_plugin.go 258: IPAM Result ContainerID="ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" HandleID="chain.ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" Workload="controller--1-k8s-mon--filebeat--bppwv-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc0003a6180)} 2019-11-04T19:15:29.835 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.835 [INFO][91378] k8s.go 361: Populated endpoint ContainerID="ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" Namespace="monitor" Pod="mon-filebeat-bppwv" WorkloadEndpoint="controller--1-k8s-mon--filebeat--bppwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--filebeat--bppwv-eth0", GenerateName:"mon-filebeat-", Namespace:"monitor", SelfLink:"", UID:"83737bb0-d735-4254-85ce-49f1f454881d", ResourceVersion:"8162831", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63707628354, loc:(*time.Location)(0x232eae0)}}, 
DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"pod-template-generation":"1", "release":"mon-filebeat", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"mon-filebeat", "app":"filebeat", "controller-revision-hash":"84998b4cf7"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-filebeat-bppwv", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e312/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-filebeat"}, InterfaceName:"calid0285410ad8", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:15:29.835 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.835 [INFO][91378] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e312/128] ContainerID="ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" Namespace="monitor" Pod="mon-filebeat-bppwv" WorkloadEndpoint="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:29.835 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.835 [INFO][91378] network_linux.go 76: Setting the host side veth name to calid0285410ad8 ContainerID="ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" Namespace="monitor" Pod="mon-filebeat-bppwv" WorkloadEndpoint="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:29.841 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.841 [INFO][91378] network_linux.go 411: Disabling IPv6 forwarding ContainerID="ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" Namespace="monitor" Pod="mon-filebeat-bppwv" WorkloadEndpoint="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:29.875 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.875 [INFO][91378] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" Namespace="monitor" Pod="mon-filebeat-bppwv" WorkloadEndpoint="controller--1-k8s-mon--filebeat--bppwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--filebeat--bppwv-eth0", GenerateName:"mon-filebeat-", Namespace:"monitor", SelfLink:"", UID:"83737bb0-d735-4254-85ce-49f1f454881d", ResourceVersion:"8162831", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63707628354, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"filebeat", "controller-revision-hash":"84998b4cf7", "pod-template-generation":"1", "release":"mon-filebeat", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"mon-filebeat"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602", Pod:"mon-filebeat-bppwv", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e312/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-filebeat"}, InterfaceName:"calid0285410ad8", MAC:"7e:e2:5f:62:d9:b6", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:15:29.878 controller-1 kubelet[88595]: info 2019-11-04 19:15:29.878 [INFO][91378] k8s.go 420: Wrote updated endpoint to datastore ContainerID="ed1532a1d118474069b586c2ab31c235fdfe7c69134c821596d399b28d134602" Namespace="monitor" Pod="mon-filebeat-bppwv" WorkloadEndpoint="controller--1-k8s-mon--filebeat--bppwv-eth0" 2019-11-04T19:15:29.943 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/83737bb0-d735-4254-85ce-49f1f454881d/volume-subpaths/setupscript/setup-script/0. 2019-11-04T19:15:30.015 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/83737bb0-d735-4254-85ce-49f1f454881d/volume-subpaths/setupscript/setup-script/0. 2019-11-04T19:15:30.049 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/83737bb0-d735-4254-85ce-49f1f454881d/volume-subpaths/filebeat-config/setup-script/1. 2019-11-04T19:15:30.068 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/83737bb0-d735-4254-85ce-49f1f454881d/volume-subpaths/filebeat-config/setup-script/1. 2019-11-04T19:15:30.111 controller-1 containerd[12214]: info time="2019-11-04T19:15:30.110975078Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9cd0ffaa22161cd546ba04e1ea18b6c9ec34fe0b867699e3ea86115428470926/shim.sock" debug=false pid=91495 2019-11-04T19:15:30.850 controller-1 systemd[1]: info Starting Stop Read-Ahead Data Collection... 2019-11-04T19:15:30.873 controller-1 systemd[1]: info Started Stop Read-Ahead Data Collection. 2019-11-04T19:15:32.742 controller-1 containerd[12214]: info time="2019-11-04T19:15:32.742318096Z" level=info msg="shim reaped" id=f22578c679e4939aa2d37d08169f6063b2d3974bd8c6f82c71d626c81c177f0e 2019-11-04T19:15:32.752 controller-1 dockerd[12332]: info time="2019-11-04T19:15:32.752466713Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:15:33.000 controller-1 ntpd[87625]: info Listen normally on 13 calid0285410ad8 fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T19:15:33.000 controller-1 ntpd[87625]: debug new interface(s) found: waking up resolver 2019-11-04T19:15:33.460 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c/volume-subpaths/config/metricbeat/0. 2019-11-04T19:15:33.479 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c/volume-subpaths/config/metricbeat/0. 2019-11-04T19:15:33.508 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c/volume-subpaths/setupscript/metricbeat/6. 2019-11-04T19:15:33.533 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b21e12f7-bf6b-435b-84f6-955f2ffcbb7c/volume-subpaths/setupscript/metricbeat/6. 
2019-11-04T19:15:33.567 controller-1 containerd[12214]: info time="2019-11-04T19:15:33.567836073Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8cbf16a0d57c48ee215c523b4c776f7414afa513ffbc47d65dcb9cf6697c68be/shim.sock" debug=false pid=91637 2019-11-04T19:15:35.281 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 4.0% (avg per cpu); cpus: 36, Platform: 3.7% (Base: 0.8, k8s-system: 2.9), k8s-addon: 0.2 2019-11-04T19:15:35.287 controller-1 collectd[12276]: info platform memory usage: Usage: 1.1%; Reserved: 126563.3 MiB, Platform: 1435.6 MiB (Base: 1065.3, k8s-system: 370.3), k8s-addon: 27.9 2019-11-04T19:15:35.287 controller-1 collectd[12276]: info 4K memory usage: Anon: 1.2%, Anon: 1476.4 MiB, cgroup-rss: 1467.6 MiB, Avail: 125086.9 MiB, Total: 126563.3 MiB 2019-11-04T19:15:35.287 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 1.33%, Anon: 846.9 MiB, Avail: 62642.7 MiB, Total: 63489.6 MiB 2019-11-04T19:15:35.288 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 0.99%, Anon: 629.6 MiB, Avail: 63119.2 MiB, Total: 63748.8 MiB 2019-11-04T19:15:45.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 1.1% (avg per cpu); cpus: 36, Platform: 1.0% (Base: 0.6, k8s-system: 0.4), k8s-addon: 0.1 2019-11-04T19:15:45.287 controller-1 collectd[12276]: info platform memory usage: Usage: 1.1%; Reserved: 126567.7 MiB, Platform: 1442.3 MiB (Base: 1068.0, k8s-system: 374.3), k8s-addon: 31.0 2019-11-04T19:15:45.287 controller-1 collectd[12276]: info 4K memory usage: Anon: 1.2%, Anon: 1486.1 MiB, cgroup-rss: 1477.4 MiB, Avail: 125081.7 MiB, Total: 126567.7 MiB 2019-11-04T19:15:45.287 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 1.34%, Anon: 851.2 MiB, Avail: 62644.3 MiB, Total: 63495.5 MiB 2019-11-04T19:15:45.287 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 1.00%, Anon: 634.9 MiB, Avail: 63112.6 MiB, Total: 63747.5 MiB 2019-11-04T19:15:53.084 controller-1 systemd[1]: info Stopping Name Service Cache Daemon... 2019-11-04T19:15:53.108 controller-1 systemd[1]: info Stopped Name Service Cache Daemon. 
2019-11-04T19:15:55.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 2.2% (avg per cpu); cpus: 36, Platform: 2.1% (Base: 1.7, k8s-system: 0.4), k8s-addon: 0.0 2019-11-04T19:15:55.287 controller-1 collectd[12276]: info platform memory usage: Usage: 1.1%; Reserved: 126537.0 MiB, Platform: 1441.5 MiB (Base: 1064.7, k8s-system: 376.8), k8s-addon: 31.0 2019-11-04T19:15:55.288 controller-1 collectd[12276]: info 4K memory usage: Anon: 1.2%, Anon: 1490.3 MiB, cgroup-rss: 1477.8 MiB, Avail: 125046.7 MiB, Total: 126537.0 MiB 2019-11-04T19:15:55.288 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 1.34%, Anon: 850.3 MiB, Avail: 62643.1 MiB, Total: 63493.4 MiB 2019-11-04T19:15:55.288 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 1.00%, Anon: 640.0 MiB, Avail: 63079.8 MiB, Total: 63719.8 MiB 2019-11-04T19:15:59.693 controller-1 kubelet[88595]: info I1104 19:15:59.692963 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cni" (UniqueName: "kubernetes.io/host-path/2dde05d3-e220-4b56-8d60-3effcca9323f-cni") pod "kube-multus-ds-amd64-bj6h2" (UID: "2dde05d3-e220-4b56-8d60-3effcca9323f") 2019-11-04T19:15:59.693 controller-1 kubelet[88595]: info I1104 19:15:59.693023 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "multus-token-dtj6m" (UniqueName: "kubernetes.io/secret/2dde05d3-e220-4b56-8d60-3effcca9323f-multus-token-dtj6m") pod "kube-multus-ds-amd64-bj6h2" (UID: "2dde05d3-e220-4b56-8d60-3effcca9323f") 2019-11-04T19:15:59.693 controller-1 kubelet[88595]: info I1104 19:15:59.693102 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/11b61b97-dcf7-4a5e-a445-c131f5febd44-config-volume") pod "coredns-6bc668cd76-6dtt6" (UID: "11b61b97-dcf7-4a5e-a445-c131f5febd44") 2019-11-04T19:15:59.693 controller-1 kubelet[88595]: info I1104 19:15:59.693130 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "multus-cfg" (UniqueName: "kubernetes.io/configmap/2dde05d3-e220-4b56-8d60-3effcca9323f-multus-cfg") pod "kube-multus-ds-amd64-bj6h2" (UID: "2dde05d3-e220-4b56-8d60-3effcca9323f") 2019-11-04T19:15:59.693 controller-1 kubelet[88595]: info I1104 19:15:59.693199 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cnibin" (UniqueName: "kubernetes.io/host-path/2dde05d3-e220-4b56-8d60-3effcca9323f-cnibin") pod "kube-multus-ds-amd64-bj6h2" (UID: "2dde05d3-e220-4b56-8d60-3effcca9323f") 2019-11-04T19:15:59.693 controller-1 kubelet[88595]: info I1104 19:15:59.693302 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-x97rb" (UniqueName: "kubernetes.io/secret/11b61b97-dcf7-4a5e-a445-c131f5febd44-coredns-token-x97rb") pod "coredns-6bc668cd76-6dtt6" (UID: "11b61b97-dcf7-4a5e-a445-c131f5febd44") 2019-11-04T19:15:59.806 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/11b61b97-dcf7-4a5e-a445-c131f5febd44/volumes/kubernetes.io~secret/coredns-token-x97rb. 2019-11-04T19:16:00.098 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/2dde05d3-e220-4b56-8d60-3effcca9323f/volumes/kubernetes.io~secret/multus-token-dtj6m. 
2019-11-04T19:16:00.101 controller-1 dockerd[12332]: info time="2019-11-04T19:16:00.101641249Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T19:16:00.108 controller-1 containerd[12214]: info time="2019-11-04T19:16:00.107920073Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213/shim.sock" debug=false pid=94812 2019-11-04T19:16:00.397 controller-1 containerd[12214]: info time="2019-11-04T19:16:00.397090218Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4861f39edd8da7ed2d7d4171624a0ef77811268567783a37dcbd354f2f5c4146/shim.sock" debug=false pid=94905 2019-11-04T19:16:00.572 controller-1 containerd[12214]: info time="2019-11-04T19:16:00.572429648Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/37743c722e27cff4892bb89e42cfd403e46c80975d5ba3cfd7c577a0283d1be7/shim.sock" debug=false pid=94957 2019-11-04T19:16:01.060 controller-1 containerd[12214]: info time="2019-11-04T19:16:01.060651013Z" level=info msg="shim reaped" id=9cd0ffaa22161cd546ba04e1ea18b6c9ec34fe0b867699e3ea86115428470926 2019-11-04T19:16:01.070 controller-1 dockerd[12332]: info time="2019-11-04T19:16:01.070623916Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:16:01.096 controller-1 kubelet[88595]: info I1104 19:16:01.096708 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/89ee0fbc-6074-4195-82e0-63e4a478fa96-default-token-88gsr") pod "mon-elasticsearch-master-1" (UID: "89ee0fbc-6074-4195-82e0-63e4a478fa96") 2019-11-04T19:16:01.096 controller-1 kubelet[88595]: info I1104 19:16:01.096772 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/3a65782c-1e27-4980-a189-6abd58ee8c6a-default-token-88gsr") pod "mon-logstash-0" (UID: "3a65782c-1e27-4980-a189-6abd58ee8c6a") 2019-11-04T19:16:01.096 controller-1 kubelet[88595]: info I1104 19:16:01.096880 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "files" (UniqueName: "kubernetes.io/configmap/3a65782c-1e27-4980-a189-6abd58ee8c6a-files") pod "mon-logstash-0" (UID: "3a65782c-1e27-4980-a189-6abd58ee8c6a") 2019-11-04T19:16:01.096 controller-1 kubelet[88595]: info I1104 19:16:01.096941 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "patterns" (UniqueName: "kubernetes.io/configmap/3a65782c-1e27-4980-a189-6abd58ee8c6a-patterns") pod "mon-logstash-0" (UID: "3a65782c-1e27-4980-a189-6abd58ee8c6a") 2019-11-04T19:16:01.097 controller-1 kubelet[88595]: info I1104 19:16:01.097036 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-ad4321f4-62df-4e6d-afe8-d8b053ed0722" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1") pod "mon-elasticsearch-master-1" (UID: "89ee0fbc-6074-4195-82e0-63e4a478fa96") 2019-11-04T19:16:01.097 controller-1 kubelet[88595]: info I1104 19:16:01.097060 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pipeline" (UniqueName: "kubernetes.io/configmap/3a65782c-1e27-4980-a189-6abd58ee8c6a-pipeline") pod "mon-logstash-0" 
(UID: "3a65782c-1e27-4980-a189-6abd58ee8c6a") 2019-11-04T19:16:01.097 controller-1 kubelet[88595]: info I1104 19:16:01.097081 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "data" (UniqueName: "kubernetes.io/empty-dir/3a65782c-1e27-4980-a189-6abd58ee8c6a-data") pod "mon-logstash-0" (UID: "3a65782c-1e27-4980-a189-6abd58ee8c6a") 2019-11-04T19:16:01.097 controller-1 kubelet[88595]: info I1104 19:16:01.097104 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-jxtxx" (UniqueName: "kubernetes.io/secret/f2481e58-4e3e-4bb0-9c43-bb61da3fb58a-default-token-jxtxx") pod "kube-sriov-cni-ds-amd64-8wpqw" (UID: "f2481e58-4e3e-4bb0-9c43-bb61da3fb58a") 2019-11-04T19:16:01.097 controller-1 kubelet[88595]: info E1104 19:16:01.097132 88595 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1\"" failed. No retries permitted until 2019-11-04 19:16:01.597104239 +0000 UTC m=+61.469794220 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-ad4321f4-62df-4e6d-afe8-d8b053ed0722\" (UniqueName: \"kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1\") pod \"mon-elasticsearch-master-1\" (UID: \"89ee0fbc-6074-4195-82e0-63e4a478fa96\") " 2019-11-04T19:16:01.097 controller-1 kubelet[88595]: info I1104 19:16:01.097162 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/bb773882-c177-45f0-b9fa-9ef3216ba1a7-default-token-88gsr") pod "mon-elasticsearch-client-1" (UID: "bb773882-c177-45f0-b9fa-9ef3216ba1a7") 2019-11-04T19:16:01.097 controller-1 kubelet[88595]: info I1104 19:16:01.097191 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "cnibin" (UniqueName: "kubernetes.io/host-path/f2481e58-4e3e-4bb0-9c43-bb61da3fb58a-cnibin") pod "kube-sriov-cni-ds-amd64-8wpqw" (UID: "f2481e58-4e3e-4bb0-9c43-bb61da3fb58a") 2019-11-04T19:16:01.192 controller-1 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked. 2019-11-04T19:16:01.205 controller-1 systemd[1]: info Starting Name Service Cache Daemon... 
2019-11-04T19:16:01.000 controller-1 nscd: notice 95151 monitoring file `/etc/passwd` (1) 2019-11-04T19:16:01.000 controller-1 nscd: notice 95151 monitoring directory `/etc` (2) 2019-11-04T19:16:01.000 controller-1 nscd: notice 95151 monitoring file `/etc/group` (3) 2019-11-04T19:16:01.000 controller-1 nscd: notice 95151 monitoring directory `/etc` (2) 2019-11-04T19:16:01.000 controller-1 nscd: notice 95151 monitoring file `/etc/hosts` (4) 2019-11-04T19:16:01.000 controller-1 nscd: notice 95151 monitoring directory `/etc` (2) 2019-11-04T19:16:01.000 controller-1 nscd: notice 95151 monitoring file `/etc/resolv.conf` (5) 2019-11-04T19:16:01.000 controller-1 nscd: notice 95151 monitoring directory `/etc` (2) 2019-11-04T19:16:01.000 controller-1 nscd: notice 95151 monitoring file `/etc/services` (6) 2019-11-04T19:16:01.000 controller-1 nscd: notice 95151 monitoring directory `/etc` (2) 2019-11-04T19:16:01.000 controller-1 nscd: notice 95151 disabled inotify-based monitoring for file `/etc/netgroup': No such file or directory 2019-11-04T19:16:01.000 controller-1 nscd: notice 95151 stat failed for file `/etc/netgroup'; will try again later: No such file or directory 2019-11-04T19:16:01.275 controller-1 systemd[1]: info Started Name Service Cache Daemon. 2019-11-04T19:16:01.294 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/bb773882-c177-45f0-b9fa-9ef3216ba1a7/volumes/kubernetes.io~secret/default-token-88gsr. 2019-11-04T19:16:01.308 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/3a65782c-1e27-4980-a189-6abd58ee8c6a/volumes/kubernetes.io~secret/default-token-88gsr. 2019-11-04T19:16:01.323 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/89ee0fbc-6074-4195-82e0-63e4a478fa96/volumes/kubernetes.io~secret/default-token-88gsr. 2019-11-04T19:16:01.335 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/f2481e58-4e3e-4bb0-9c43-bb61da3fb58a/volumes/kubernetes.io~secret/default-token-jxtxx. 2019-11-04T19:16:01.598 controller-1 kubelet[88595]: info I1104 19:16:01.598939 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-ad4321f4-62df-4e6d-afe8-d8b053ed0722" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1") pod "mon-elasticsearch-master-1" (UID: "89ee0fbc-6074-4195-82e0-63e4a478fa96") 2019-11-04T19:16:01.610 controller-1 containerd[12214]: info time="2019-11-04T19:16:01.610698687Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f3748f9fc1a8efb8ae00e14dddc07d65e12f1f68d60716597b6fd53ed7f6c004/shim.sock" debug=false pid=95306 2019-11-04T19:16:01.618 controller-1 containerd[12214]: info time="2019-11-04T19:16:01.618464883Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6895b458caef485a4d7bc20510d57ad913f5f51e4de07d5422b215503ff4639d/shim.sock" debug=false pid=95320 2019-11-04T19:16:01.619 controller-1 dockerd[12332]: info time="2019-11-04T19:16:01.619345727Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. 
Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T19:16:01.623 controller-1 containerd[12214]: info time="2019-11-04T19:16:01.623952691Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5/shim.sock" debug=false pid=95340 2019-11-04T19:16:01.874 controller-1 containerd[12214]: info time="2019-11-04T19:16:01.874849235Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ebaadd42aa54ce0b32bf9036723031229d73f4bc0f58019b9e79c0657c2c640e/shim.sock" debug=false pid=95559 2019-11-04T19:16:01.889 controller-1 containerd[12214]: info time="2019-11-04T19:16:01.889573305Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d4e63fc1a9a4a1e59ead231bc8a9fa5bb71eb9693390c65c263fec80ddf031ac/shim.sock" debug=false pid=95585 2019-11-04T19:16:01.899 controller-1 kubelet[88595]: info W1104 19:16:01.899568 88595 pod_container_deletor.go:75] Container "8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" not found in pod's containers 2019-11-04T19:16:01.899 controller-1 kubelet[88595]: info I1104 19:16:01.899694 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1") pod "mon-elasticsearch-data-1" (UID: "694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7") 2019-11-04T19:16:01.899 controller-1 kubelet[88595]: info I1104 19:16:01.899735 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7-default-token-88gsr") pod "mon-elasticsearch-data-1" (UID: "694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7") 2019-11-04T19:16:01.899 controller-1 kubelet[88595]: info E1104 19:16:01.899820 88595 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1\"" failed. No retries permitted until 2019-11-04 19:16:02.399795967 +0000 UTC m=+62.272485913 (durationBeforeRetry 500ms). Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3\" (UniqueName: \"kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1\") pod \"mon-elasticsearch-data-1\" (UID: \"694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7\") " 2019-11-04T19:16:02.000 controller-1 kubelet[88595]: info I1104 19:16:02.000044 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "mon-nginx-ingress-token-dgbmq" (UniqueName: "kubernetes.io/secret/3140ecb8-e94b-413e-ac03-9d8a02d862d3-mon-nginx-ingress-token-dgbmq") pod "mon-nginx-ingress-controller-b8l4k" (UID: "3140ecb8-e94b-413e-ac03-9d8a02d862d3") 2019-11-04T19:16:02.017 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7/volumes/kubernetes.io~secret/default-token-88gsr. 
2019-11-04T19:16:02.030 controller-1 kubelet[88595]: info W1104 19:16:02.030884 88595 pod_container_deletor.go:75] Container "6895b458caef485a4d7bc20510d57ad913f5f51e4de07d5422b215503ff4639d" not found in pod's containers 2019-11-04T19:16:02.077 controller-1 kubelet[88595]: info W1104 19:16:02.077753 88595 pod_container_deletor.go:75] Container "f3748f9fc1a8efb8ae00e14dddc07d65e12f1f68d60716597b6fd53ed7f6c004" not found in pod's containers 2019-11-04T19:16:02.077 controller-1 kubelet[88595]: info I1104 19:16:02.077965 88595 operation_generator.go:1422] Controller attach succeeded for volume "pvc-ad4321f4-62df-4e6d-afe8-d8b053ed0722" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1") pod "mon-elasticsearch-master-1" (UID: "89ee0fbc-6074-4195-82e0-63e4a478fa96") device path: "" 2019-11-04T19:16:02.187 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/3140ecb8-e94b-413e-ac03-9d8a02d862d3/volumes/kubernetes.io~secret/mon-nginx-ingress-token-dgbmq. 2019-11-04T19:16:02.204 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/83737bb0-d735-4254-85ce-49f1f454881d/volume-subpaths/filebeat-config/filebeat/0. 2019-11-04T19:16:02.252 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/83737bb0-d735-4254-85ce-49f1f454881d/volume-subpaths/filebeat-config/filebeat/0. 2019-11-04T19:16:02.303 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/83737bb0-d735-4254-85ce-49f1f454881d/volume-subpaths/setupscript/filebeat/4. 2019-11-04T19:16:02.337 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/83737bb0-d735-4254-85ce-49f1f454881d/volume-subpaths/setupscript/filebeat/4. 2019-11-04T19:16:02.383 controller-1 containerd[12214]: info time="2019-11-04T19:16:02.383568252Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/66b018de07acc25a40d8650bf78c9016fc5a55d9c3e94989953acce12087d786/shim.sock" debug=false pid=95915 2019-11-04T19:16:02.401 controller-1 kubelet[88595]: info I1104 19:16:02.401691 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1") pod "mon-elasticsearch-data-1" (UID: "694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7") 2019-11-04T19:16:02.401 controller-1 kubelet[88595]: info E1104 19:16:02.401792 88595 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1\"" failed. No retries permitted until 2019-11-04 19:16:03.401764358 +0000 UTC m=+63.274454297 (durationBeforeRetry 1s). 
Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3\" (UniqueName: \"kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1\") pod \"mon-elasticsearch-data-1\" (UID: \"694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7\") " 2019-11-04T19:16:02.477 controller-1 kubelet[88595]: info I1104 19:16:02.477667 88595 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "pvc-ad4321f4-62df-4e6d-afe8-d8b053ed0722" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1") pod "mon-elasticsearch-master-1" (UID: "89ee0fbc-6074-4195-82e0-63e4a478fa96") DevicePath "" 2019-11-04T19:16:02.498 controller-1 dockerd[12332]: info time="2019-11-04T19:16:02.498515386Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T19:16:02.503 controller-1 containerd[12214]: info time="2019-11-04T19:16:02.503477595Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0/shim.sock" debug=false pid=95980 2019-11-04T19:16:02.574 controller-1 containerd[12214]: info time="2019-11-04T19:16:02.574760771Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fa16094117fdf20bee2e0e5edd0ff4b776f42f2fa06a175b849c74236c3e814d/shim.sock" debug=false pid=96036 2019-11-04T19:16:02.967 controller-1 kubelet[88595]: info W1104 19:16:02.967209 88595 rbd_util.go:794] rbd: no watchers on kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1 2019-11-04T19:16:03.404 controller-1 kubelet[88595]: info I1104 19:16:03.404697 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1") pod "mon-elasticsearch-data-1" (UID: "694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7") 2019-11-04T19:16:03.404 controller-1 kubelet[88595]: info E1104 19:16:03.404772 88595 nestedpendingoperations.go:270] Operation for "\"kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1\"" failed. No retries permitted until 2019-11-04 19:16:05.404751262 +0000 UTC m=+65.277441204 (durationBeforeRetry 2s). 
Error: "Volume has not been added to the list of VolumesInUse in the node's volume status for volume \"pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3\" (UniqueName: \"kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1\") pod \"mon-elasticsearch-data-1\" (UID: \"694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7\") " 2019-11-04T19:16:05.410 controller-1 kubelet[88595]: info I1104 19:16:05.410238 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1") pod "mon-elasticsearch-data-1" (UID: "694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7") 2019-11-04T19:16:05.412 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.3% (avg per cpu); cpus: 36, Platform: 4.6% (Base: 4.1, k8s-system: 0.5), k8s-addon: 0.6 2019-11-04T19:16:05.472 controller-1 collectd[12276]: info platform memory usage: Usage: 1.3%; Reserved: 126450.8 MiB, Platform: 1664.0 MiB (Base: 1285.1, k8s-system: 378.9), k8s-addon: 117.7 2019-11-04T19:16:05.472 controller-1 collectd[12276]: info 4K memory usage: Anon: 1.4%, Anon: 1795.1 MiB, cgroup-rss: 1785.7 MiB, Avail: 124655.7 MiB, Total: 126450.8 MiB 2019-11-04T19:16:05.472 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 1.54%, Anon: 978.8 MiB, Avail: 62493.6 MiB, Total: 63472.4 MiB 2019-11-04T19:16:05.472 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 1.28%, Anon: 816.3 MiB, Avail: 62851.5 MiB, Total: 63667.7 MiB 2019-11-04T19:16:05.678 controller-1 kubelet[88595]: info I1104 19:16:05.678147 88595 operation_generator.go:1422] Controller attach succeeded for volume "pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1") pod "mon-elasticsearch-data-1" (UID: "694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7") device path: "" 2019-11-04T19:16:06.016 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.016 [INFO][96878] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"kube-system", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"coredns-6bc668cd76-6dtt6", ContainerID:"80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213"}} 2019-11-04T19:16:06.031 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.031 [INFO][96878] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-coredns--6bc668cd76--6dtt6-eth0 coredns-6bc668cd76- kube-system 11b61b97-dcf7-4a5e-a445-c131f5febd44 8163232 0 2019-11-04 18:55:27 +0000 UTC map[projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns k8s-app:kube-dns pod-template-hash:6bc668cd76 projectcalico.org/namespace:kube-system] map[] [] nil [] } {k8s controller-1 coredns-6bc668cd76-6dtt6 eth0 [] [] [kns.kube-system ksa.kube-system.coredns] cali23ca0d564d4 [{dns UDP 53} {dns-tcp TCP 53} {metrics TCP 9153}]}} ContainerID="80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" Namespace="kube-system" Pod="coredns-6bc668cd76-6dtt6" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--6dtt6-" 2019-11-04T19:16:06.031 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.031 [INFO][96878] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" 
Namespace="kube-system" Pod="coredns-6bc668cd76-6dtt6" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--6dtt6-eth0" 2019-11-04T19:16:06.035 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.035 [INFO][96878] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:kube-system,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/kube-system,UID:5d016a6c-19e8-4b97-88a9-b6113a3cb736,ResourceVersion:5,Generation:0,CreationTimestamp:2019-10-25 15:09:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T19:16:06.036 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.036 [INFO][96878] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:coredns-6bc668cd76-6dtt6,GenerateName:coredns-6bc668cd76-,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/coredns-6bc668cd76-6dtt6,UID:11b61b97-dcf7-4a5e-a445-c131f5febd44,ResourceVersion:8163232,Generation:0,CreationTimestamp:2019-11-04 18:55:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{k8s-app: kube-dns,pod-template-hash: 6bc668cd76,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet coredns-6bc668cd76 a5d8df09-9b63-4615-a0e9-5f4c684232cb 0xc0003d6d37 0xc0003d6d38}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{config-volume {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:coredns,},Items:[{Corefile Corefile }],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil}} {coredns-token-x97rb {nil nil nil nil nil &SecretVolumeSource{SecretName:coredns-token-x97rb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{coredns registry.local:9001/k8s.gcr.io/coredns:1.6.2 [] [-conf /etc/coredns/Corefile] [{dns 0 53 UDP } {dns-tcp 0 53 TCP } {metrics 0 9153 TCP }] [] [] {map[memory:{{178257920 0} {} 170Mi BinarySI}] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{73400320 0} {} 70Mi BinarySI}]} [{config-volume true /etc/coredns } {coredns-token-x97rb true /var/run/secrets/kubernetes.io/serviceaccount }] [] Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:8181,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[all],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:Default,NodeSelector:map[string]string{beta.kubernetes.io/os: linux,node-role.kubernetes.io/master: 
,},ServiceAccountName:coredns,DeprecatedServiceAccount:coredns,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:nil,PodAffinity:nil,PodAntiAffinity:&PodAntiAffinity{RequiredDuringSchedulingIgnoredDuringExecution:[{LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[{k8s-app In [kube-dns]}],} [] kubernetes.io/hostname}],PreferredDuringSchedulingIgnoredDuringExecution:[],},},SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{CriticalAddonsOnly Exists } {node-role.kubernetes.io/master NoSchedule } {node.kubernetes.io/not-ready Exists NoExecute 0xc0003d7390} {node.kubernetes.io/unreachable Exists NoExecute 0xc0003d73c0}],HostAliases:[],PriorityClassName:system-cluster-critical,Priority:*2000000000,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 19:15:59 +0000 UTC,ContainerStatuses:[{coredns {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 registry.local:9001/k8s.gcr.io/coredns:1.6.2 }],QOSClass:Burstable,InitContainerStatuses:[],NominatedNodeName:,},} 2019-11-04T19:16:06.054 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.054 [INFO][96908] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" HandleID="chain.80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" Workload="controller--1-k8s-coredns--6bc668cd76--6dtt6-eth0" 2019-11-04T19:16:06.062 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.062 [INFO][96908] ipam_plugin.go 220: Calico CNI IPAM handle=chain.80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213 ContainerID="80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" HandleID="chain.80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" Workload="controller--1-k8s-coredns--6bc668cd76--6dtt6-eth0" 2019-11-04T19:16:06.062 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.062 [INFO][96908] ipam_plugin.go 230: Auto assigning IP ContainerID="80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" HandleID="chain.80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" Workload="controller--1-k8s-coredns--6bc668cd76--6dtt6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc0002e4560), Attrs:map[string]string{"node":"controller-1", "pod":"coredns-6bc668cd76-6dtt6", "namespace":"kube-system"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T19:16:06.062 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.062 [INFO][96908] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T19:16:06.066 controller-1 kubelet[88595]: info 2019-11-04 
19:16:06.066 [INFO][96908] ipam.go 309: Looking up existing affinities for host handle="chain.80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" host="controller-1" 2019-11-04T19:16:06.070 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.070 [INFO][96908] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" host="controller-1" 2019-11-04T19:16:06.072 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.072 [INFO][96908] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:16:06.073 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.073 [INFO][96908] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:16:06.073 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.073 [INFO][96908] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" host="controller-1" 2019-11-04T19:16:06.075 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.075 [INFO][96908] ipam.go 1244: Creating new handle: chain.80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213 2019-11-04T19:16:06.077 controller-1 kubelet[88595]: info I1104 19:16:06.077515 88595 operation_generator.go:661] MountVolume.WaitForAttach entering for volume "pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1") pod "mon-elasticsearch-data-1" (UID: "694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7") DevicePath "" 2019-11-04T19:16:06.078 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.078 [INFO][96908] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" host="controller-1" 2019-11-04T19:16:06.080 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.080 [INFO][96908] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e321/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" host="controller-1" 2019-11-04T19:16:06.080 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.080 [INFO][96908] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e321/122] handle="chain.80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" host="controller-1" 2019-11-04T19:16:06.081 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.081 [INFO][96908] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e321/122] handle="chain.80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" host="controller-1" 2019-11-04T19:16:06.081 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.081 [INFO][96908] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e321/122] ContainerID="80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" HandleID="chain.80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" Workload="controller--1-k8s-coredns--6bc668cd76--6dtt6-eth0" 2019-11-04T19:16:06.081 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.081 [INFO][96908] ipam_plugin.go 258: IPAM Result ContainerID="80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" 
HandleID="chain.80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" Workload="controller--1-k8s-coredns--6bc668cd76--6dtt6-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc00036a120)} 2019-11-04T19:16:06.083 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.082 [INFO][96878] k8s.go 361: Populated endpoint ContainerID="80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" Namespace="kube-system" Pod="coredns-6bc668cd76-6dtt6" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--6dtt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-coredns--6bc668cd76--6dtt6-eth0", GenerateName:"coredns-6bc668cd76-", Namespace:"kube-system", SelfLink:"", UID:"11b61b97-dcf7-4a5e-a445-c131f5febd44", ResourceVersion:"8163232", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708490527, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6bc668cd76", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"coredns-6bc668cd76-6dtt6", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e321/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23ca0d564d4", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}} 2019-11-04T19:16:06.083 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.083 [INFO][96878] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e321/128] ContainerID="80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" Namespace="kube-system" Pod="coredns-6bc668cd76-6dtt6" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--6dtt6-eth0" 2019-11-04T19:16:06.083 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.083 [INFO][96878] network_linux.go 76: Setting the host side veth name to cali23ca0d564d4 ContainerID="80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" Namespace="kube-system" Pod="coredns-6bc668cd76-6dtt6" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--6dtt6-eth0" 2019-11-04T19:16:06.085 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.085 [INFO][96878] network_linux.go 411: Disabling IPv6 forwarding ContainerID="80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" Namespace="kube-system" Pod="coredns-6bc668cd76-6dtt6" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--6dtt6-eth0" 2019-11-04T19:16:06.121 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.121 [INFO][96878] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" Namespace="kube-system" 
Pod="coredns-6bc668cd76-6dtt6" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--6dtt6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-coredns--6bc668cd76--6dtt6-eth0", GenerateName:"coredns-6bc668cd76-", Namespace:"kube-system", SelfLink:"", UID:"11b61b97-dcf7-4a5e-a445-c131f5febd44", ResourceVersion:"8163232", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708490527, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6bc668cd76", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213", Pod:"coredns-6bc668cd76-6dtt6", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e321/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali23ca0d564d4", MAC:"5a:52:a3:80:2c:b6", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35}, v3.EndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35}, v3.EndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1}}}} 2019-11-04T19:16:06.124 controller-1 kubelet[88595]: info 2019-11-04 19:16:06.124 [INFO][96878] k8s.go 420: Wrote updated endpoint to datastore ContainerID="80a20ecd10e85f0d835b11e0ec6528bf254407aece83e8f540f09550173ee213" Namespace="kube-system" Pod="coredns-6bc668cd76-6dtt6" WorkloadEndpoint="controller--1-k8s-coredns--6bc668cd76--6dtt6-eth0" 2019-11-04T19:16:06.209 controller-1 containerd[12214]: info time="2019-11-04T19:16:06.209623165Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/67e047f87befc0fa84f547e764955237a416fb9a668171f5c1a805e122ac5854/shim.sock" debug=false pid=96960 2019-11-04T19:16:06.536 controller-1 kubelet[88595]: info W1104 19:16:06.536145 88595 rbd_util.go:794] rbd: no watchers on kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1 2019-11-04T19:16:06.698 controller-1 kubelet[88595]: info I1104 19:16:06.698418 88595 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "pvc-ad4321f4-62df-4e6d-afe8-d8b053ed0722" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1") pod "mon-elasticsearch-master-1" (UID: "89ee0fbc-6074-4195-82e0-63e4a478fa96") DevicePath "/dev/rbd0" 2019-11-04T19:16:06.703 controller-1 kubelet[88595]: info I1104 19:16:06.703604 88595 operation_generator.go:670] MountVolume.WaitForAttach succeeded for volume "pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1") pod "mon-elasticsearch-data-1" (UID: "694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7") DevicePath "/dev/rbd1" 2019-11-04T19:16:06.729 controller-1 systemd[1]: info Started Kubernetes transient mount for 
/var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/kube-rbd-image-kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1. 2019-11-04T19:16:06.737 controller-1 kubelet[88595]: info I1104 19:16:06.737720 88595 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "pvc-ad4321f4-62df-4e6d-afe8-d8b053ed0722" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1") pod "mon-elasticsearch-master-1" (UID: "89ee0fbc-6074-4195-82e0-63e4a478fa96") device mount path "/var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/kube-rbd-image-kubernetes-dynamic-pvc-5cf36411-f75b-11e9-a9b9-f67fa4c26db1" 2019-11-04T19:16:06.820 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/kube-rbd-image-kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1. 2019-11-04T19:16:06.835 controller-1 kubelet[88595]: info I1104 19:16:06.835382 88595 operation_generator.go:697] MountVolume.MountDevice succeeded for volume "pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3" (UniqueName: "kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1") pod "mon-elasticsearch-data-1" (UID: "694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7") device mount path "/var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/kube-rbd-image-kubernetes-dynamic-pvc-d70d38e3-f75b-11e9-a9b9-f67fa4c26db1" 2019-11-04T19:16:07.086 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/89ee0fbc-6074-4195-82e0-63e4a478fa96/volumes/kubernetes.io~rbd/pvc-ad4321f4-62df-4e6d-afe8-d8b053ed0722. 2019-11-04T19:16:07.123 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/89ee0fbc-6074-4195-82e0-63e4a478fa96/volumes/kubernetes.io~rbd/pvc-ad4321f4-62df-4e6d-afe8-d8b053ed0722. 2019-11-04T19:16:07.292 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7/volumes/kubernetes.io~rbd/pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3. 2019-11-04T19:16:07.295 controller-1 dockerd[12332]: info time="2019-11-04T19:16:07.294971126Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T19:16:07.300 controller-1 containerd[12214]: info time="2019-11-04T19:16:07.300412395Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee/shim.sock" debug=false pid=97145 2019-11-04T19:16:07.329 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7/volumes/kubernetes.io~rbd/pvc-6f8484ce-6f2b-4695-86c6-99c93bb037c3. 2019-11-04T19:16:07.592 controller-1 dockerd[12332]: info time="2019-11-04T19:16:07.592788898Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. 
Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T19:16:07.598 controller-1 containerd[12214]: info time="2019-11-04T19:16:07.598018382Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813/shim.sock" debug=false pid=97259 2019-11-04T19:16:07.600 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.600 [INFO][97232] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"monitor", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"mon-elasticsearch-client-1", ContainerID:"8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5"}} 2019-11-04T19:16:07.616 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.616 [INFO][97232] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-mon--elasticsearch--client--1-eth0 mon-elasticsearch-client- monitor bb773882-c177-45f0-b9fa-9ef3216ba1a7 8163314 0 2019-11-04 18:55:31 +0000 UTC map[chart:elasticsearch controller-revision-hash:mon-elasticsearch-client-7c64d4f4fd statefulset.kubernetes.io/pod-name:mon-elasticsearch-client-1 projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default app:mon-elasticsearch-client heritage:Tiller release:mon-elasticsearch-client projectcalico.org/namespace:monitor] map[] [] nil [] } {k8s controller-1 mon-elasticsearch-client-1 eth0 [] [] [kns.monitor ksa.monitor.default] cali7eb1b3c61b4 [{http TCP 9200} {transport TCP 9300}]}} ContainerID="8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" Namespace="monitor" Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-" 2019-11-04T19:16:07.616 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.616 [INFO][97232] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" Namespace="monitor" Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T19:16:07.618 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.618 [INFO][97232] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:monitor,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/monitor,UID:85834187-55ff-4686-b97c-c3f524d37f83,ResourceVersion:46120,Generation:0,CreationTimestamp:2019-10-25 19:07:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.620 [INFO][97232] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:mon-elasticsearch-client-1,GenerateName:mon-elasticsearch-client-,Namespace:monitor,SelfLink:/api/v1/namespaces/monitor/pods/mon-elasticsearch-client-1,UID:bb773882-c177-45f0-b9fa-9ef3216ba1a7,ResourceVersion:8163314,Generation:0,CreationTimestamp:2019-11-04 18:55:31 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: mon-elasticsearch-client,chart: elasticsearch,controller-revision-hash: mon-elasticsearch-client-7c64d4f4fd,heritage: Tiller,release: 
mon-elasticsearch-client,statefulset.kubernetes.io/pod-name: mon-elasticsearch-client-1,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 StatefulSet mon-elasticsearch-client 01fc0ff7-b1ba-467a-b465-ac381c76be1a 0xc00003d017 0xc00003d018}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-88gsr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-88gsr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{elasticsearch docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 [] [] [{http 0 9200 TCP } {transport 0 9300 TCP }] [] [{node.name EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {discovery.seed_hosts mon-elasticsearch-data-headless, mon-elasticsearch-master-headless nil} {cluster.name mon-elasticsearch nil} {network.host 0.0.0.0 nil} {ES_JAVA_OPTS -Djava.net.preferIPv6Addresses=true -Xmx1024m -Xms1024m nil} {node.data false nil} {node.ingest true nil} {node.master false nil} {DATA_PRESTOP_SLEEP 100 nil}] {map[cpu:{{1 0} {} 1 DecimalSI} memory:{{2147483648 0} {} 2Gi BinarySI}] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{2147483648 0} {} 2Gi BinarySI}]} [{default-token-88gsr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil &Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c #!/usr/bin/env bash -e 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info # If the node is starting up wait for the cluster to be ready (request params: '' ) 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info # Once it has started only check that the node itself is responding 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info START_FILE=/tmp/.es_start_file 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info http () { 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info local path="${1}" 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}" 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info else 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info BASIC_AUTH='' 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info fi 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path} 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info } 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info if [ -f "${START_FILE}" ]; then 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info echo 'Elasticsearch is already running, lets check the node is healthy' 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info http "/" 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info else 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info echo 'Waiting for elasticsearch cluster to become cluster to be ready (request params: "" )' 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info if http "/_cluster/health?" 
; then 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info touch ${START_FILE} 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info exit 0 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info else 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info echo 'Cluster is not yet ready (request params: "" )' 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info exit 1 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info fi 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info fi 2019-11-04T19:16:07.620 controller-1 kubelet[88595]: info ],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:60,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:3,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*120,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{elastic-client: enabled,},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:*1000,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:mon-elasticsearch-client-1,Subdomain:mon-elasticsearch-client-headless,Affinity:&Affinity{NodeAffinity:nil,PodAffinity:nil,PodAntiAffinity:&PodAntiAffinity{RequiredDuringSchedulingIgnoredDuringExecution:[{LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[{app In [mon-elasticsearch-client]}],} [] kubernetes.io/hostname}],PreferredDuringSchedulingIgnoredDuringExecution:[],},},SchedulerName:default-scheduler,InitContainers:[{configure-sysctl docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 [sysctl -w vm.max_map_count=262144] [] [] [] [] {map[] map[]} [{default-token-88gsr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0001caf80} {node.kubernetes.io/unreachable Exists NoExecute 0xc0001cb090}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC ContainersNotInitialized containers with incomplete status: [configure-sysctl]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC ContainersNotReady containers with unready status: [elasticsearch]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC ContainersNotReady containers with unready status: [elasticsearch]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 19:15:59 +0000 UTC,ContainerStatuses:[{elasticsearch {ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 }],QOSClass:Burstable,InitContainerStatuses:[{configure-sysctl 
{ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 }],NominatedNodeName:,},} 2019-11-04T19:16:07.639 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.639 [INFO][97292] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" HandleID="chain.8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T19:16:07.649 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.649 [INFO][97292] ipam_plugin.go 220: Calico CNI IPAM handle=chain.8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5 ContainerID="8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" HandleID="chain.8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T19:16:07.649 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.649 [INFO][97292] ipam_plugin.go 230: Auto assigning IP ContainerID="8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" HandleID="chain.8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc0002de8d0), Attrs:map[string]string{"namespace":"monitor", "node":"controller-1", "pod":"mon-elasticsearch-client-1"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T19:16:07.649 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.649 [INFO][97292] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T19:16:07.653 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.653 [INFO][97292] ipam.go 309: Looking up existing affinities for host handle="chain.8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" host="controller-1" 2019-11-04T19:16:07.657 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.657 [INFO][97292] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" host="controller-1" 2019-11-04T19:16:07.659 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.659 [INFO][97292] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:16:07.661 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.661 [INFO][97292] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:16:07.661 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.661 [INFO][97292] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" host="controller-1" 2019-11-04T19:16:07.662 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.662 [INFO][97292] ipam.go 1244: Creating new handle: chain.8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5 2019-11-04T19:16:07.664 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.664 [INFO][97292] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" host="controller-1" 2019-11-04T19:16:07.667 controller-1 kubelet[88595]: 
info 2019-11-04 19:16:07.666 [INFO][97292] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e339/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" host="controller-1" 2019-11-04T19:16:07.667 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.667 [INFO][97292] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e339/122] handle="chain.8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" host="controller-1" 2019-11-04T19:16:07.667 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.667 [INFO][97292] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e339/122] handle="chain.8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" host="controller-1" 2019-11-04T19:16:07.667 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.667 [INFO][97292] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e339/122] ContainerID="8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" HandleID="chain.8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T19:16:07.668 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.667 [INFO][97292] ipam_plugin.go 258: IPAM Result ContainerID="8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" HandleID="chain.8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" Workload="controller--1-k8s-mon--elasticsearch--client--1-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc00042c6c0)} 2019-11-04T19:16:07.669 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.669 [INFO][97232] k8s.go 361: Populated endpoint ContainerID="8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" Namespace="monitor" Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--elasticsearch--client--1-eth0", GenerateName:"mon-elasticsearch-client-", Namespace:"monitor", SelfLink:"", UID:"bb773882-c177-45f0-b9fa-9ef3216ba1a7", ResourceVersion:"8163314", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708490531, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"controller-revision-hash":"mon-elasticsearch-client-7c64d4f4fd", "heritage":"Tiller", "release":"mon-elasticsearch-client", "app":"mon-elasticsearch-client", "chart":"elasticsearch", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default", "statefulset.kubernetes.io/pod-name":"mon-elasticsearch-client-1", "projectcalico.org/namespace":"monitor"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-elasticsearch-client-1", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e339/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"cali7eb1b3c61b4", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23f0}, v3.EndpointPort{Name:"transport", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x2454}}}} 2019-11-04T19:16:07.669 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.669 [INFO][97232] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e339/128] ContainerID="8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" Namespace="monitor" Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T19:16:07.669 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.669 [INFO][97232] network_linux.go 76: Setting the host side veth name to cali7eb1b3c61b4 ContainerID="8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" Namespace="monitor" Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T19:16:07.672 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.672 [INFO][97232] network_linux.go 411: Disabling IPv6 forwarding ContainerID="8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" Namespace="monitor" Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T19:16:07.717 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.716 [INFO][97232] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" Namespace="monitor" Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--elasticsearch--client--1-eth0", GenerateName:"mon-elasticsearch-client-", Namespace:"monitor", SelfLink:"", UID:"bb773882-c177-45f0-b9fa-9ef3216ba1a7", ResourceVersion:"8163314", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708490531, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"mon-elasticsearch-client", "chart":"elasticsearch", "controller-revision-hash":"mon-elasticsearch-client-7c64d4f4fd", "heritage":"Tiller", "release":"mon-elasticsearch-client", "statefulset.kubernetes.io/pod-name":"mon-elasticsearch-client-1", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5", Pod:"mon-elasticsearch-client-1", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e339/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"cali7eb1b3c61b4", MAC:"42:0e:9c:22:b0:36", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23f0}, v3.EndpointPort{Name:"transport", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x2454}}}} 2019-11-04T19:16:07.720 controller-1 kubelet[88595]: info 2019-11-04 19:16:07.720 [INFO][97232] k8s.go 
420: Wrote updated endpoint to datastore ContainerID="8931386417ad6dbe400a155acec626c18a50537c1e390e76fe2c5d942a37bae5" Namespace="monitor" Pod="mon-elasticsearch-client-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--client--1-eth0" 2019-11-04T19:16:07.815 controller-1 containerd[12214]: info time="2019-11-04T19:16:07.815708788Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/12a594d5bbbc326298d5c9b8c69affd8d6fe393bd78f5f377394c21792222521/shim.sock" debug=false pid=97611 2019-11-04T19:16:08.055 controller-1 containerd[12214]: info time="2019-11-04T19:16:08.055606729Z" level=info msg="shim reaped" id=12a594d5bbbc326298d5c9b8c69affd8d6fe393bd78f5f377394c21792222521 2019-11-04T19:16:08.065 controller-1 dockerd[12332]: info time="2019-11-04T19:16:08.065755447Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:16:08.244 controller-1 containerd[12214]: info time="2019-11-04T19:16:08.244280350Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ea0a63c4e0da92f970cc48bef675720d3ac696b2c9d4cbb894647c58748ca7f5/shim.sock" debug=false pid=97702 2019-11-04T19:16:08.425 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.425 [INFO][97760] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"monitor", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"mon-nginx-ingress-controller-b8l4k", ContainerID:"6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0"}} 2019-11-04T19:16:08.440 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.440 [INFO][97760] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-mon--nginx--ingress--controller--b8l4k-eth0 mon-nginx-ingress-controller- monitor 3140ecb8-e94b-413e-ac03-9d8a02d862d3 8163339 0 2019-11-04 19:15:59 +0000 UTC map[app:nginx-ingress component:controller controller-revision-hash:866b74fd9d pod-template-generation:1 release:mon-nginx-ingress projectcalico.org/namespace:monitor projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:mon-nginx-ingress] map[] [] nil [] } {k8s controller-1 mon-nginx-ingress-controller-b8l4k eth0 [] [] [kns.monitor ksa.monitor.mon-nginx-ingress] cali98e23353dbb [{http TCP 80} {https TCP 443}]}} ContainerID="6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" Namespace="monitor" Pod="mon-nginx-ingress-controller-b8l4k" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--b8l4k-" 2019-11-04T19:16:08.440 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.440 [INFO][97760] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" Namespace="monitor" Pod="mon-nginx-ingress-controller-b8l4k" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--b8l4k-eth0" 2019-11-04T19:16:08.443 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.443 [INFO][97760] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:monitor,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/monitor,UID:85834187-55ff-4686-b97c-c3f524d37f83,ResourceVersion:46120,Generation:0,CreationTimestamp:2019-10-25 19:07:17 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T19:16:08.445 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.445 [INFO][97760] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:mon-nginx-ingress-controller-b8l4k,GenerateName:mon-nginx-ingress-controller-,Namespace:monitor,SelfLink:/api/v1/namespaces/monitor/pods/mon-nginx-ingress-controller-b8l4k,UID:3140ecb8-e94b-413e-ac03-9d8a02d862d3,ResourceVersion:8163339,Generation:0,CreationTimestamp:2019-11-04 19:15:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: nginx-ingress,component: controller,controller-revision-hash: 866b74fd9d,pod-template-generation: 1,release: mon-nginx-ingress,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 DaemonSet mon-nginx-ingress-controller 4f92bc9f-671a-423c-ae73-f67862da850c 0xc0003c4ba0 0xc0003c4ba1}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{mon-nginx-ingress-token-dgbmq {nil nil nil nil nil SecretVolumeSource{SecretName:mon-nginx-ingress-token-dgbmq,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx-ingress-controller quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 [] [/nginx-ingress-controller --default-backend-service=monitor/mon-nginx-ingress-default-backend --election-id=ingress-controller-leader --ingress-class=nginx --configmap=monitor/mon-nginx-ingress-controller --watch-namespace=monitor] [{http 0 80 TCP } {https 0 443 TCP }] [] [{POD_NAME EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {POD_NAMESPACE &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] {map[cpu:{{200 -3} {} 200m DecimalSI} memory:{{268435456 0} {} BinarySI}] map[cpu:{{200 -3} {} 200m DecimalSI} memory:{{268435456 0} {} BinarySI}]} [{mon-nginx-ingress-token-dgbmq true /var/run/secrets/kubernetes.io/serviceaccount }] [] &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10254,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:10254,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*33,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*60,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{elastic-controller: 
enabled,},ServiceAccountName:mon-nginx-ingress,DeprecatedServiceAccount:mon-nginx-ingress,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:&NodeAffinity{RequiredDuringSchedulingIgnoredDuringExecution:&NodeSelector{NodeSelectorTerms:[{[] [{metadata.name In [controller-1]}]}],},PreferredDuringSchedulingIgnoredDuringExecution:[],},PodAffinity:nil,PodAntiAffinity:nil,},SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute } {node.kubernetes.io/unreachable Exists NoExecute } {node.kubernetes.io/disk-pressure Exists NoSchedule } {node.kubernetes.io/memory-pressure Exists NoSchedule } {node.kubernetes.io/pid-pressure Exists NoSchedule } {node.kubernetes.io/unschedulable Exists NoSchedule }],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC ContainersNotReady containers with unready status: [nginx-ingress-controller]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC ContainersNotReady containers with unready status: [nginx-ingress-controller]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 19:15:59 +0000 UTC,ContainerStatuses:[{nginx-ingress-controller {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0 }],QOSClass:Guaranteed,InitContainerStatuses:[],NominatedNodeName:,},} 2019-11-04T19:16:08.464 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.463 [INFO][97787] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" HandleID="chain.6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" Workload="controller--1-k8s-mon--nginx--ingress--controller--b8l4k-eth0" 2019-11-04T19:16:08.472 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.472 [INFO][97787] ipam_plugin.go 220: Calico CNI IPAM handle=chain.6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0 ContainerID="6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" HandleID="chain.6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" Workload="controller--1-k8s-mon--nginx--ingress--controller--b8l4k-eth0" 2019-11-04T19:16:08.472 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.472 [INFO][97787] ipam_plugin.go 230: Auto assigning IP ContainerID="6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" HandleID="chain.6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" Workload="controller--1-k8s-mon--nginx--ingress--controller--b8l4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc0002ec560), Attrs:map[string]string{"node":"controller-1", "pod":"mon-nginx-ingress-controller-b8l4k", "namespace":"monitor"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T19:16:08.472 controller-1 
kubelet[88595]: info 2019-11-04 19:16:08.472 [INFO][97787] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T19:16:08.476 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.476 [INFO][97787] ipam.go 309: Looking up existing affinities for host handle="chain.6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" host="controller-1" 2019-11-04T19:16:08.480 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.480 [INFO][97787] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" host="controller-1" 2019-11-04T19:16:08.481 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.481 [INFO][97787] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:16:08.483 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.483 [INFO][97787] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:16:08.483 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.483 [INFO][97787] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" host="controller-1" 2019-11-04T19:16:08.484 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.484 [INFO][97787] ipam.go 1244: Creating new handle: chain.6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0 2019-11-04T19:16:08.486 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.486 [INFO][97787] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" host="controller-1" 2019-11-04T19:16:08.489 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.489 [INFO][97787] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e329/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" host="controller-1" 2019-11-04T19:16:08.489 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.489 [INFO][97787] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e329/122] handle="chain.6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" host="controller-1" 2019-11-04T19:16:08.492 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.491 [INFO][97787] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e329/122] handle="chain.6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" host="controller-1" 2019-11-04T19:16:08.492 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.491 [INFO][97787] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e329/122] ContainerID="6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" HandleID="chain.6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" Workload="controller--1-k8s-mon--nginx--ingress--controller--b8l4k-eth0" 2019-11-04T19:16:08.492 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.492 [INFO][97787] ipam_plugin.go 258: IPAM Result ContainerID="6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" HandleID="chain.6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" Workload="controller--1-k8s-mon--nginx--ingress--controller--b8l4k-eth0" 
result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc0000c8f60)} 2019-11-04T19:16:08.493 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.493 [INFO][97760] k8s.go 361: Populated endpoint ContainerID="6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" Namespace="monitor" Pod="mon-nginx-ingress-controller-b8l4k" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--b8l4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--nginx--ingress--controller--b8l4k-eth0", GenerateName:"mon-nginx-ingress-controller-", Namespace:"monitor", SelfLink:"", UID:"3140ecb8-e94b-413e-ac03-9d8a02d862d3", ResourceVersion:"8163339", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708491759, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"component":"controller", "controller-revision-hash":"866b74fd9d", "pod-template-generation":"1", "release":"mon-nginx-ingress", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"mon-nginx-ingress", "app":"nginx-ingress"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-nginx-ingress-controller-b8l4k", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e329/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-nginx-ingress"}, InterfaceName:"cali98e23353dbb", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x50}, v3.EndpointPort{Name:"https", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1bb}}}} 2019-11-04T19:16:08.493 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.493 [INFO][97760] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e329/128] ContainerID="6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" Namespace="monitor" Pod="mon-nginx-ingress-controller-b8l4k" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--b8l4k-eth0" 2019-11-04T19:16:08.493 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.493 [INFO][97760] network_linux.go 76: Setting the host side veth name to cali98e23353dbb ContainerID="6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" Namespace="monitor" Pod="mon-nginx-ingress-controller-b8l4k" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--b8l4k-eth0" 2019-11-04T19:16:08.495 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.495 [INFO][97760] network_linux.go 411: Disabling IPv6 forwarding ContainerID="6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" Namespace="monitor" Pod="mon-nginx-ingress-controller-b8l4k" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--b8l4k-eth0" 2019-11-04T19:16:08.533 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.533 [INFO][97760] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" Namespace="monitor" Pod="mon-nginx-ingress-controller-b8l4k" 
WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--b8l4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--nginx--ingress--controller--b8l4k-eth0", GenerateName:"mon-nginx-ingress-controller-", Namespace:"monitor", SelfLink:"", UID:"3140ecb8-e94b-413e-ac03-9d8a02d862d3", ResourceVersion:"8163339", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708491759, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"release":"mon-nginx-ingress", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"mon-nginx-ingress", "app":"nginx-ingress", "component":"controller", "controller-revision-hash":"866b74fd9d", "pod-template-generation":"1"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0", Pod:"mon-nginx-ingress-controller-b8l4k", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e329/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-nginx-ingress"}, InterfaceName:"cali98e23353dbb", MAC:"6e:c3:a5:9f:f0:78", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x50}, v3.EndpointPort{Name:"https", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1bb}}}} 2019-11-04T19:16:08.536 controller-1 kubelet[88595]: info 2019-11-04 19:16:08.536 [INFO][97760] k8s.go 420: Wrote updated endpoint to datastore ContainerID="6bead7adb3aa124b358f575d373c883fe9ee0e65f1305d43c760184f239fb0c0" Namespace="monitor" Pod="mon-nginx-ingress-controller-b8l4k" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--controller--b8l4k-eth0" 2019-11-04T19:16:08.627 controller-1 containerd[12214]: info time="2019-11-04T19:16:08.627190560Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/95a70a7c03f7e30d702a01e5df6c9c7c8005ea67822220204891567395f68e1f/shim.sock" debug=false pid=97858 2019-11-04T19:16:11.000 controller-1 ntpd[87625]: info Listen normally on 14 cali98e23353dbb fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T19:16:11.000 controller-1 ntpd[87625]: info Listen normally on 15 cali7eb1b3c61b4 fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T19:16:11.000 controller-1 ntpd[87625]: info Listen normally on 16 cali23ca0d564d4 fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T19:16:11.000 controller-1 ntpd[87625]: debug new interface(s) found: waking up resolver 2019-11-04T19:16:13.253 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.253 [INFO][98270] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"monitor", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"mon-elasticsearch-master-1", ContainerID:"3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee"}} 2019-11-04T19:16:13.268 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.268 [INFO][98270] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {controller--1-k8s-mon--elasticsearch--master--1-eth0 mon-elasticsearch-master- monitor 89ee0fbc-6074-4195-82e0-63e4a478fa96 8163272 0 2019-11-04 19:00:38 +0000 UTC map[release:mon-elasticsearch-master statefulset.kubernetes.io/pod-name:mon-elasticsearch-master-1 projectcalico.org/orchestrator:k8s app:mon-elasticsearch-master chart:elasticsearch controller-revision-hash:mon-elasticsearch-master-6fbc49c65b heritage:Tiller projectcalico.org/namespace:monitor projectcalico.org/serviceaccount:default] map[] [] nil [] } {k8s controller-1 mon-elasticsearch-master-1 eth0 [] [] [kns.monitor ksa.monitor.default] calif772c92d8f9 [{http TCP 9200} {transport TCP 9300}]}} ContainerID="3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" Namespace="monitor" Pod="mon-elasticsearch-master-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--master--1-" 2019-11-04T19:16:13.268 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.268 [INFO][98270] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" Namespace="monitor" Pod="mon-elasticsearch-master-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--master--1-eth0" 2019-11-04T19:16:13.271 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.271 [INFO][98270] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:monitor,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/monitor,UID:85834187-55ff-4686-b97c-c3f524d37f83,ResourceVersion:46120,Generation:0,CreationTimestamp:2019-10-25 19:07:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.272 [INFO][98270] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:mon-elasticsearch-master-1,GenerateName:mon-elasticsearch-master-,Namespace:monitor,SelfLink:/api/v1/namespaces/monitor/pods/mon-elasticsearch-master-1,UID:89ee0fbc-6074-4195-82e0-63e4a478fa96,ResourceVersion:8163272,Generation:0,CreationTimestamp:2019-11-04 19:00:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: mon-elasticsearch-master,chart: elasticsearch,controller-revision-hash: mon-elasticsearch-master-6fbc49c65b,heritage: Tiller,release: mon-elasticsearch-master,statefulset.kubernetes.io/pod-name: mon-elasticsearch-master-1,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 StatefulSet mon-elasticsearch-master 1db6b6a6-439b-44f4-8131-c5d2cefc3cc7 0xc0001cd107 0xc0001cd108}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{mon-elasticsearch-master {nil nil nil nil nil nil nil nil nil PersistentVolumeClaimVolumeSource{ClaimName:mon-elasticsearch-master-mon-elasticsearch-master-1,ReadOnly:false,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {default-token-88gsr {nil nil nil nil nil &SecretVolumeSource{SecretName:default-token-88gsr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{elasticsearch docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 [] [] [{http 0 9200 TCP } {transport 0 9300 TCP }] [] [{node.name 
EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {cluster.initial_master_nodes mon-elasticsearch-master-0 nil} {discovery.seed_hosts mon-elasticsearch-data-headless, mon-elasticsearch-master-headless nil} {cluster.name mon-elasticsearch nil} {network.host 0.0.0.0 nil} {ES_JAVA_OPTS -Djava.net.preferIPv6Addresses=true -Xmx512m -Xms512m nil} {node.data false nil} {node.ingest false nil} {node.master true nil} {DATA_PRESTOP_SLEEP 100 nil}] {map[cpu:{{1 0} {} 1 DecimalSI} memory:{{1073741824 0} {} 1Gi BinarySI}] map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{536870912 0} {} BinarySI}]} [{mon-elasticsearch-master false /usr/share/elasticsearch/data } {default-token-88gsr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil &Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c #!/usr/bin/env bash -e 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info # If the node is starting up wait for the cluster to be ready (request params: '' ) 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info # Once it has started only check that the node itself is responding 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info START_FILE=/tmp/.es_start_file 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info http () { 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info local path="${1}" 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}" 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info else 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info BASIC_AUTH='' 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info fi 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path} 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info } 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info if [ -f "${START_FILE}" ]; then 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info echo 'Elasticsearch is already running, lets check the node is healthy' 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info http "/" 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info else 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info echo 'Waiting for elasticsearch cluster to become cluster to be ready (request params: "" )' 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info if http "/_cluster/health?" 
; then 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info touch ${START_FILE} 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info exit 0 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info else 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info echo 'Cluster is not yet ready (request params: "" )' 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info exit 1 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info fi 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info fi 2019-11-04T19:16:13.273 controller-1 kubelet[88595]: info ],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:3,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*120,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{elastic-master: enabled,},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:*1000,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:mon-elasticsearch-master-1,Subdomain:mon-elasticsearch-master-headless,Affinity:&Affinity{NodeAffinity:nil,PodAffinity:nil,PodAntiAffinity:&PodAntiAffinity{RequiredDuringSchedulingIgnoredDuringExecution:[{LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[{app In [mon-elasticsearch-master]}],} [] kubernetes.io/hostname}],PreferredDuringSchedulingIgnoredDuringExecution:[],},},SchedulerName:default-scheduler,InitContainers:[{configure-sysctl docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 [sysctl -w vm.max_map_count=262144] [] [] [] [] {map[] map[]} [{default-token-88gsr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0003d9190} {node.kubernetes.io/unreachable Exists NoExecute 0xc0003d91b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC ContainersNotInitialized containers with incomplete status: [configure-sysctl]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC ContainersNotReady containers with unready status: [elasticsearch]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC ContainersNotReady containers with unready status: [elasticsearch]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 19:15:59 +0000 UTC,ContainerStatuses:[{elasticsearch {ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 }],QOSClass:Burstable,InitContainerStatuses:[{configure-sysctl 
{ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 }],NominatedNodeName:,},} 2019-11-04T19:16:13.291 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.291 [INFO][98297] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" HandleID="chain.3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" Workload="controller--1-k8s-mon--elasticsearch--master--1-eth0" 2019-11-04T19:16:13.299 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.298 [INFO][98297] ipam_plugin.go 220: Calico CNI IPAM handle=chain.3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee ContainerID="3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" HandleID="chain.3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" Workload="controller--1-k8s-mon--elasticsearch--master--1-eth0" 2019-11-04T19:16:13.299 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.298 [INFO][98297] ipam_plugin.go 230: Auto assigning IP ContainerID="3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" HandleID="chain.3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" Workload="controller--1-k8s-mon--elasticsearch--master--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc00035e440), Attrs:map[string]string{"node":"controller-1", "pod":"mon-elasticsearch-master-1", "namespace":"monitor"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T19:16:13.299 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.299 [INFO][98297] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T19:16:13.302 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.302 [INFO][98297] ipam.go 309: Looking up existing affinities for host handle="chain.3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" host="controller-1" 2019-11-04T19:16:13.306 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.306 [INFO][98297] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" host="controller-1" 2019-11-04T19:16:13.308 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.308 [INFO][98297] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:16:13.310 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.310 [INFO][98297] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:16:13.310 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.310 [INFO][98297] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" host="controller-1" 2019-11-04T19:16:13.311 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.311 [INFO][98297] ipam.go 1244: Creating new handle: chain.3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee 2019-11-04T19:16:13.314 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.314 [INFO][98297] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" host="controller-1" 2019-11-04T19:16:13.317 controller-1 kubelet[88595]: 
info 2019-11-04 19:16:13.317 [INFO][98297] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e31d/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" host="controller-1" 2019-11-04T19:16:13.317 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.317 [INFO][98297] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e31d/122] handle="chain.3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" host="controller-1" 2019-11-04T19:16:13.318 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.318 [INFO][98297] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e31d/122] handle="chain.3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" host="controller-1" 2019-11-04T19:16:13.318 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.318 [INFO][98297] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e31d/122] ContainerID="3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" HandleID="chain.3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" Workload="controller--1-k8s-mon--elasticsearch--master--1-eth0" 2019-11-04T19:16:13.318 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.318 [INFO][98297] ipam_plugin.go 258: IPAM Result ContainerID="3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" HandleID="chain.3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" Workload="controller--1-k8s-mon--elasticsearch--master--1-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc0002d6420)} 2019-11-04T19:16:13.319 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.319 [INFO][98270] k8s.go 361: Populated endpoint ContainerID="3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" Namespace="monitor" Pod="mon-elasticsearch-master-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--master--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--elasticsearch--master--1-eth0", GenerateName:"mon-elasticsearch-master-", Namespace:"monitor", SelfLink:"", UID:"89ee0fbc-6074-4195-82e0-63e4a478fa96", ResourceVersion:"8163272", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708490838, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/serviceaccount":"default", "heritage":"Tiller", "release":"mon-elasticsearch-master", "statefulset.kubernetes.io/pod-name":"mon-elasticsearch-master-1", "projectcalico.org/orchestrator":"k8s", "app":"mon-elasticsearch-master", "chart":"elasticsearch", "controller-revision-hash":"mon-elasticsearch-master-6fbc49c65b", "projectcalico.org/namespace":"monitor"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-elasticsearch-master-1", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e31d/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"calif772c92d8f9", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23f0}, v3.EndpointPort{Name:"transport", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x2454}}}} 2019-11-04T19:16:13.319 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.319 [INFO][98270] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e31d/128] ContainerID="3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" Namespace="monitor" Pod="mon-elasticsearch-master-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--master--1-eth0" 2019-11-04T19:16:13.319 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.319 [INFO][98270] network_linux.go 76: Setting the host side veth name to calif772c92d8f9 ContainerID="3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" Namespace="monitor" Pod="mon-elasticsearch-master-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--master--1-eth0" 2019-11-04T19:16:13.322 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.322 [INFO][98270] network_linux.go 411: Disabling IPv6 forwarding ContainerID="3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" Namespace="monitor" Pod="mon-elasticsearch-master-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--master--1-eth0" 2019-11-04T19:16:13.364 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.364 [INFO][98270] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" Namespace="monitor" Pod="mon-elasticsearch-master-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--master--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--elasticsearch--master--1-eth0", GenerateName:"mon-elasticsearch-master-", Namespace:"monitor", SelfLink:"", UID:"89ee0fbc-6074-4195-82e0-63e4a478fa96", ResourceVersion:"8163272", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708490838, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"release":"mon-elasticsearch-master", "statefulset.kubernetes.io/pod-name":"mon-elasticsearch-master-1", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default", "heritage":"Tiller", "chart":"elasticsearch", "controller-revision-hash":"mon-elasticsearch-master-6fbc49c65b", "projectcalico.org/namespace":"monitor", "app":"mon-elasticsearch-master"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee", Pod:"mon-elasticsearch-master-1", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e31d/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"calif772c92d8f9", MAC:"5e:83:56:ec:49:54", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23f0}, v3.EndpointPort{Name:"transport", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x2454}}}} 2019-11-04T19:16:13.367 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.367 [INFO][98270] k8s.go 
420: Wrote updated endpoint to datastore ContainerID="3064bb3afb143875107f1c4b7720e9f1baf6036409df70432da7dd9645f743ee" Namespace="monitor" Pod="mon-elasticsearch-master-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--master--1-eth0" 2019-11-04T19:16:13.454 controller-1 containerd[12214]: info time="2019-11-04T19:16:13.454048955Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/69dbe344085bcb9f718db3c76642a61a5ddfb398c8a75cd381daad98d36e9563/shim.sock" debug=false pid=98364 2019-11-04T19:16:13.552 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.552 [INFO][98414] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"monitor", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"mon-elasticsearch-data-1", ContainerID:"9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813"}} 2019-11-04T19:16:13.568 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.568 [INFO][98414] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-mon--elasticsearch--data--1-eth0 mon-elasticsearch-data- monitor 694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7 8163256 0 2019-11-04 19:00:38 +0000 UTC map[projectcalico.org/serviceaccount:default chart:elasticsearch controller-revision-hash:mon-elasticsearch-data-dc668b5cf release:mon-elasticsearch-data projectcalico.org/namespace:monitor app:mon-elasticsearch-data heritage:Tiller statefulset.kubernetes.io/pod-name:mon-elasticsearch-data-1 projectcalico.org/orchestrator:k8s] map[] [] nil [] } {k8s controller-1 mon-elasticsearch-data-1 eth0 [] [] [kns.monitor ksa.monitor.default] calibfabab83f74 [{http TCP 9200} {transport TCP 9300}]}} ContainerID="9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" Namespace="monitor" Pod="mon-elasticsearch-data-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--data--1-" 2019-11-04T19:16:13.568 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.568 [INFO][98414] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" Namespace="monitor" Pod="mon-elasticsearch-data-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--data--1-eth0" 2019-11-04T19:16:13.571 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.571 [INFO][98414] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:monitor,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/monitor,UID:85834187-55ff-4686-b97c-c3f524d37f83,ResourceVersion:46120,Generation:0,CreationTimestamp:2019-10-25 19:07:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T19:16:13.572 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.572 [INFO][98414] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:mon-elasticsearch-data-1,GenerateName:mon-elasticsearch-data-,Namespace:monitor,SelfLink:/api/v1/namespaces/monitor/pods/mon-elasticsearch-data-1,UID:694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7,ResourceVersion:8163256,Generation:0,CreationTimestamp:2019-11-04 19:00:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: 
mon-elasticsearch-data,chart: elasticsearch,controller-revision-hash: mon-elasticsearch-data-dc668b5cf,heritage: Tiller,release: mon-elasticsearch-data,statefulset.kubernetes.io/pod-name: mon-elasticsearch-data-1,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 StatefulSet mon-elasticsearch-data 1c6cc709-0204-42dc-9d1d-05954c052ef6 0xc00003d187 0xc00003d188}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{mon-elasticsearch-data {nil nil nil nil nil nil nil nil nil PersistentVolumeClaimVolumeSource{ClaimName:mon-elasticsearch-data-mon-elasticsearch-data-1,ReadOnly:false,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {default-token-88gsr {nil nil nil nil nil &SecretVolumeSource{SecretName:default-token-88gsr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{elasticsearch docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 [] [] [{http 0 9200 TCP } {transport 0 9300 TCP }] [] [{node.name EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {discovery.seed_hosts mon-elasticsearch-data-headless, mon-elasticsearch-master-headless nil} {cluster.name mon-elasticsearch nil} {network.host 0.0.0.0 nil} {ES_JAVA_OPTS -Djava.net.preferIPv6Addresses=true -Xmx4096m -Xms4096m nil} {node.data true nil} {node.ingest false nil} {node.master false nil} {DATA_PRESTOP_SLEEP 100 nil}] {map[cpu:{{2 0} {} 2 DecimalSI} memory:{{6442450944 0} {} 6Gi BinarySI}] map[cpu:{{500 -3} {} 500m DecimalSI} memory:{{6442450944 0} {} 6Gi BinarySI}]} [{mon-elasticsearch-data false /usr/share/elasticsearch/data } {default-token-88gsr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil &Probe{Handler:Handler{Exec:&ExecAction{Command:[sh -c #!/usr/bin/env bash -e 2019-11-04T19:16:13.572 controller-1 kubelet[88595]: info # If the node is starting up wait for the cluster to be ready (request params: '' ) 2019-11-04T19:16:13.572 controller-1 kubelet[88595]: info # Once it has started only check that the node itself is responding 2019-11-04T19:16:13.572 controller-1 kubelet[88595]: info START_FILE=/tmp/.es_start_file 2019-11-04T19:16:13.572 controller-1 kubelet[88595]: info http () { 2019-11-04T19:16:13.572 controller-1 kubelet[88595]: info local path="${1}" 2019-11-04T19:16:13.572 controller-1 kubelet[88595]: info if [ -n "${ELASTIC_USERNAME}" ] && [ -n "${ELASTIC_PASSWORD}" ]; then 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info BASIC_AUTH="-u ${ELASTIC_USERNAME}:${ELASTIC_PASSWORD}" 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info else 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info BASIC_AUTH='' 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info fi 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info curl -XGET -s -k --fail ${BASIC_AUTH} http://127.0.0.1:9200${path} 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info } 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info if [ -f "${START_FILE}" ]; then 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info echo 'Elasticsearch is already running, lets check the node is healthy' 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info http "/" 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info else 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info echo 'Waiting for elasticsearch cluster to become cluster to be ready (request params: "" )' 2019-11-04T19:16:13.573 
controller-1 kubelet[88595]: info if http "/_cluster/health?" ; then 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info touch ${START_FILE} 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info exit 0 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info else 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info echo 'Cluster is not yet ready (request params: "" )' 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info exit 1 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info fi 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info fi 2019-11-04T19:16:13.573 controller-1 kubelet[88595]: info ],},HTTPGet:nil,TCPSocket:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:3,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*120,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{elastic-data: enabled,},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:*1000,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:mon-elasticsearch-data-1,Subdomain:mon-elasticsearch-data-headless,Affinity:&Affinity{NodeAffinity:nil,PodAffinity:nil,PodAntiAffinity:&PodAntiAffinity{RequiredDuringSchedulingIgnoredDuringExecution:[{LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[{app In [mon-elasticsearch-data]}],} [] kubernetes.io/hostname}],PreferredDuringSchedulingIgnoredDuringExecution:[],},},SchedulerName:default-scheduler,InitContainers:[{configure-sysctl docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 [sysctl -w vm.max_map_count=262144] [] [] [] [] {map[] map[]} [{default-token-88gsr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00003d770} {node.kubernetes.io/unreachable Exists NoExecute 0xc00003d790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC ContainersNotInitialized containers with incomplete status: [configure-sysctl]} {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC ContainersNotReady containers with unready status: [elasticsearch]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC ContainersNotReady containers with unready status: [elasticsearch]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:15:59 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 19:15:59 +0000 UTC,ContainerStatuses:[{elasticsearch {ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 
}],QOSClass:Burstable,InitContainerStatuses:[{configure-sysctl {ContainerStateWaiting{Reason:PodInitializing,Message:,} nil nil} {nil nil nil} false 0 docker.elastic.co/elasticsearch/elasticsearch-oss:7.4.0 }],NominatedNodeName:,},} 2019-11-04T19:16:13.591 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.591 [INFO][98452] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" HandleID="chain.9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" Workload="controller--1-k8s-mon--elasticsearch--data--1-eth0" 2019-11-04T19:16:13.599 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.599 [INFO][98452] ipam_plugin.go 220: Calico CNI IPAM handle=chain.9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813 ContainerID="9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" HandleID="chain.9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" Workload="controller--1-k8s-mon--elasticsearch--data--1-eth0" 2019-11-04T19:16:13.599 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.599 [INFO][98452] ipam_plugin.go 230: Auto assigning IP ContainerID="9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" HandleID="chain.9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" Workload="controller--1-k8s-mon--elasticsearch--data--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc0002e8560), Attrs:map[string]string{"pod":"mon-elasticsearch-data-1", "namespace":"monitor", "node":"controller-1"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T19:16:13.599 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.599 [INFO][98452] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T19:16:13.603 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.603 [INFO][98452] ipam.go 309: Looking up existing affinities for host handle="chain.9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" host="controller-1" 2019-11-04T19:16:13.608 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.607 [INFO][98452] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" host="controller-1" 2019-11-04T19:16:13.609 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.609 [INFO][98452] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:16:13.611 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.611 [INFO][98452] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:16:13.611 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.611 [INFO][98452] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" host="controller-1" 2019-11-04T19:16:13.613 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.612 [INFO][98452] ipam.go 1244: Creating new handle: chain.9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813 2019-11-04T19:16:13.615 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.615 [INFO][98452] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" host="controller-1" 
2019-11-04T19:16:13.617 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.617 [INFO][98452] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e323/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" host="controller-1" 2019-11-04T19:16:13.617 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.617 [INFO][98452] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e323/122] handle="chain.9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" host="controller-1" 2019-11-04T19:16:13.618 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.618 [INFO][98452] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e323/122] handle="chain.9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" host="controller-1" 2019-11-04T19:16:13.618 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.618 [INFO][98452] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e323/122] ContainerID="9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" HandleID="chain.9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" Workload="controller--1-k8s-mon--elasticsearch--data--1-eth0" 2019-11-04T19:16:13.618 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.618 [INFO][98452] ipam_plugin.go 258: IPAM Result ContainerID="9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" HandleID="chain.9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" Workload="controller--1-k8s-mon--elasticsearch--data--1-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc000372240)} 2019-11-04T19:16:13.619 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.619 [INFO][98414] k8s.go 361: Populated endpoint ContainerID="9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" Namespace="monitor" Pod="mon-elasticsearch-data-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--data--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--elasticsearch--data--1-eth0", GenerateName:"mon-elasticsearch-data-", Namespace:"monitor", SelfLink:"", UID:"694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7", ResourceVersion:"8163256", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708490838, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"chart":"elasticsearch", "controller-revision-hash":"mon-elasticsearch-data-dc668b5cf", "heritage":"Tiller", "projectcalico.org/serviceaccount":"default", "app":"mon-elasticsearch-data", "statefulset.kubernetes.io/pod-name":"mon-elasticsearch-data-1", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "release":"mon-elasticsearch-data"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-elasticsearch-data-1", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e323/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"calibfabab83f74", MAC:"", 
Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23f0}, v3.EndpointPort{Name:"transport", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x2454}}}} 2019-11-04T19:16:13.619 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.619 [INFO][98414] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e323/128] ContainerID="9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" Namespace="monitor" Pod="mon-elasticsearch-data-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--data--1-eth0" 2019-11-04T19:16:13.619 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.619 [INFO][98414] network_linux.go 76: Setting the host side veth name to calibfabab83f74 ContainerID="9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" Namespace="monitor" Pod="mon-elasticsearch-data-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--data--1-eth0" 2019-11-04T19:16:13.622 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.622 [INFO][98414] network_linux.go 411: Disabling IPv6 forwarding ContainerID="9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" Namespace="monitor" Pod="mon-elasticsearch-data-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--data--1-eth0" 2019-11-04T19:16:13.658 controller-1 containerd[12214]: info time="2019-11-04T19:16:13.658783892Z" level=info msg="shim reaped" id=69dbe344085bcb9f718db3c76642a61a5ddfb398c8a75cd381daad98d36e9563 2019-11-04T19:16:13.668 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.667 [INFO][98414] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" Namespace="monitor" Pod="mon-elasticsearch-data-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--data--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--elasticsearch--data--1-eth0", GenerateName:"mon-elasticsearch-data-", Namespace:"monitor", SelfLink:"", UID:"694ef0c0-ddae-46b2-a9a0-cc1ee86bb0c7", ResourceVersion:"8163256", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708490838, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"mon-elasticsearch-data", "chart":"elasticsearch", "controller-revision-hash":"mon-elasticsearch-data-dc668b5cf", "heritage":"Tiller", "projectcalico.org/serviceaccount":"default", "release":"mon-elasticsearch-data", "statefulset.kubernetes.io/pod-name":"mon-elasticsearch-data-1", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813", Pod:"mon-elasticsearch-data-1", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e323/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"calibfabab83f74", MAC:"ee:5f:ec:45:b8:42", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23f0}, 
v3.EndpointPort{Name:"transport", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x2454}}}} 2019-11-04T19:16:13.668 controller-1 dockerd[12332]: info time="2019-11-04T19:16:13.668737848Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:16:13.671 controller-1 kubelet[88595]: info 2019-11-04 19:16:13.671 [INFO][98414] k8s.go 420: Wrote updated endpoint to datastore ContainerID="9bf18ecf90b2462ba63bbc341b08e725ca0e65a46cc449dc043f8e34c0ce8813" Namespace="monitor" Pod="mon-elasticsearch-data-1" WorkloadEndpoint="controller--1-k8s-mon--elasticsearch--data--1-eth0" 2019-11-04T19:16:13.757 controller-1 containerd[12214]: info time="2019-11-04T19:16:13.757897555Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bacbd0e9232a7a405d46118b681078a6a3573854dd7f4fd112cf2e506b31149a/shim.sock" debug=false pid=98523 2019-11-04T19:16:13.970 controller-1 containerd[12214]: info time="2019-11-04T19:16:13.970285652Z" level=info msg="shim reaped" id=bacbd0e9232a7a405d46118b681078a6a3573854dd7f4fd112cf2e506b31149a 2019-11-04T19:16:13.980 controller-1 dockerd[12332]: info time="2019-11-04T19:16:13.980086548Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:16:14.347 controller-1 containerd[12214]: info time="2019-11-04T19:16:14.347252370Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fb49fe7dd0e2a62a0373ee337a76b9d7009b93cae5388d0783f1d23813f70a7a/shim.sock" debug=false pid=98665 2019-11-04T19:16:14.361 controller-1 containerd[12214]: info time="2019-11-04T19:16:14.361294039Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/45eebf89f3d68c9567036351df47385f4dd4c55635311da3c45a2bf53a7a2e54/shim.sock" debug=false pid=98679 2019-11-04T19:16:15.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 9.3% (avg per cpu); cpus: 36, Platform: 5.3% (Base: 4.7, k8s-system: 0.6), k8s-addon: 4.0 2019-11-04T19:16:15.288 controller-1 collectd[12276]: info platform memory usage: Usage: 1.4%; Reserved: 126390.4 MiB, Platform: 1771.5 MiB (Base: 1384.7, k8s-system: 386.8), k8s-addon: 3199.2 2019-11-04T19:16:15.288 controller-1 collectd[12276]: info 4K memory usage: Anon: 3.9%, Anon: 4969.9 MiB, cgroup-rss: 4964.1 MiB, Avail: 121420.4 MiB, Total: 126390.4 MiB 2019-11-04T19:16:15.288 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 2.57%, Anon: 1629.7 MiB, Avail: 61817.1 MiB, Total: 63446.8 MiB 2019-11-04T19:16:15.288 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 5.25%, Anon: 3341.1 MiB, Avail: 60297.7 MiB, Total: 63638.9 MiB 2019-11-04T19:16:17.000 controller-1 ntpd[87625]: info Listen normally on 17 calibfabab83f74 fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T19:16:17.000 controller-1 ntpd[87625]: info Listen normally on 18 calif772c92d8f9 fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T19:16:17.000 controller-1 ntpd[87625]: debug new interface(s) found: waking up resolver 2019-11-04T19:16:20.000 controller-1 nscd: notice 95151 checking for monitored file `/etc/netgroup': No such file or directory 2019-11-04T19:16:23.755 controller-1 kubelet[88595]: info I1104 19:16:23.755526 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume-rbd-provisioner" (UniqueName: "kubernetes.io/configmap/b9c93a47-55c1-47b4-8bdb-edcd3d549b8f-config-volume-rbd-provisioner") pod 
"storage-init-rbd-provisioner-dkhpx" (UID: "b9c93a47-55c1-47b4-8bdb-edcd3d549b8f") 2019-11-04T19:16:23.755 controller-1 kubelet[88595]: info I1104 19:16:23.755573 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "rbd-provisioner-token-587hn" (UniqueName: "kubernetes.io/secret/b9c93a47-55c1-47b4-8bdb-edcd3d549b8f-rbd-provisioner-token-587hn") pod "storage-init-rbd-provisioner-dkhpx" (UID: "b9c93a47-55c1-47b4-8bdb-edcd3d549b8f") 2019-11-04T19:16:23.867 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b9c93a47-55c1-47b4-8bdb-edcd3d549b8f/volumes/kubernetes.io~secret/rbd-provisioner-token-587hn. 2019-11-04T19:16:24.071 controller-1 dockerd[12332]: info time="2019-11-04T19:16:24.071007272Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T19:16:24.077 controller-1 containerd[12214]: info time="2019-11-04T19:16:24.077742950Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652/shim.sock" debug=false pid=100296 2019-11-04T19:16:25.446 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 15.8% (avg per cpu); cpus: 36, Platform: 4.7% (Base: 4.3, k8s-system: 0.4), k8s-addon: 11.0 2019-11-04T19:16:25.453 controller-1 collectd[12276]: info alarm notifier reading: 15.84 % usage - Platform CPU 2019-11-04T19:16:25.478 controller-1 collectd[12276]: info platform memory usage: Usage: 1.4%; Reserved: 126361.1 MiB, Platform: 1812.3 MiB (Base: 1423.1, k8s-system: 389.2), k8s-addon: 6511.2 2019-11-04T19:16:25.479 controller-1 collectd[12276]: info 4K memory usage: Anon: 6.6%, Anon: 8329.2 MiB, cgroup-rss: 8324.3 MiB, Avail: 118032.0 MiB, Total: 126361.1 MiB 2019-11-04T19:16:25.479 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 1.74%, Anon: 1106.4 MiB, Avail: 62342.6 MiB, Total: 63449.0 MiB 2019-11-04T19:16:25.479 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 11.36%, Anon: 7223.7 MiB, Avail: 56387.0 MiB, Total: 63610.7 MiB 2019-11-04T19:16:25.479 controller-1 collectd[12276]: info alarm notifier reading: 11.36 % usage - Platform Memory node1 2019-11-04T19:16:30.001 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.001 [INFO][100725] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"kube-system", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"storage-init-rbd-provisioner-dkhpx", ContainerID:"76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652"}} 2019-11-04T19:16:30.016 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.016 [INFO][100725] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0 storage-init-rbd-provisioner- kube-system b9c93a47-55c1-47b4-8bdb-edcd3d549b8f 8163595 0 2019-11-04 19:16:23 +0000 UTC map[job-name:storage-init-rbd-provisioner release:stx-rbd-provisioner projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:rbd-provisioner chart:rbd-provisioner-0.1.0 controller-uid:f997b336-7942-4ac5-bfe2-659497591ea5 heritage:Tiller] map[] [] nil [] } {k8s controller-1 storage-init-rbd-provisioner-dkhpx eth0 [] [] [kns.kube-system 
ksa.kube-system.rbd-provisioner] cali26d366578dc []}} ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Namespace="kube-system" Pod="storage-init-rbd-provisioner-dkhpx" WorkloadEndpoint="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-" 2019-11-04T19:16:30.017 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.016 [INFO][100725] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Namespace="kube-system" Pod="storage-init-rbd-provisioner-dkhpx" WorkloadEndpoint="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:16:30.019 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.019 [INFO][100725] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:kube-system,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/kube-system,UID:5d016a6c-19e8-4b97-88a9-b6113a3cb736,ResourceVersion:5,Generation:0,CreationTimestamp:2019-10-25 15:09:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T19:16:30.021 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.020 [INFO][100725] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:storage-init-rbd-provisioner-dkhpx,GenerateName:storage-init-rbd-provisioner-,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/storage-init-rbd-provisioner-dkhpx,UID:b9c93a47-55c1-47b4-8bdb-edcd3d549b8f,ResourceVersion:8163595,Generation:0,CreationTimestamp:2019-11-04 19:16:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{chart: rbd-provisioner-0.1.0,controller-uid: f997b336-7942-4ac5-bfe2-659497591ea5,heritage: Tiller,job-name: storage-init-rbd-provisioner,release: stx-rbd-provisioner,},Annotations:map[string]string{},OwnerReferences:[{batch/v1 Job storage-init-rbd-provisioner f997b336-7942-4ac5-bfe2-659497591ea5 0xc000569c53 0xc000569c54}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{config-volume-rbd-provisioner {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:config-rbd-provisioner,},Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil}} {rbd-provisioner-token-587hn {nil nil nil nil nil &SecretVolumeSource{SecretName:rbd-provisioner-token-587hn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{storage-init-general registry.local:9001/docker.io/starlingx/ceph-config-helper:v1.15.0 [/bin/bash /tmp/mount/check_ceph.sh] [] [] [] [{NAMESPACE kube-system nil} {ADDITIONAL_NAMESPACES default,kube-public nil} {CEPH_ADMIN_SECRET ceph-admin nil} {CEPH_USER_SECRET ceph-pool-kube-rbd nil} {USER_ID ceph-pool-kube-rbd nil} {POOL_NAME kube-rbd nil} {POOL_REPLICATION 2 nil} {POOL_CRUSH_RULE_NAME storage_tier_ruleset nil} {POOL_CHUNK_SIZE 64 nil}] {map[] map[]} [{config-volume-rbd-provisioner false /tmp/mount } {rbd-provisioner-token-587hn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:OnFailure,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: ,},ServiceAccountName:rbd-provisioner,DeprecatedServiceAccount:rbd-provisioner,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[{default-registry-key}],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000569e70} {node.kubernetes.io/unreachable Exists NoExecute 0xc000569e90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:16:23 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:16:23 +0000 UTC ContainersNotReady containers with unready status: [storage-init-general]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:16:23 +0000 UTC ContainersNotReady containers with unready status: [storage-init-general]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:16:23 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 19:16:23 +0000 UTC,ContainerStatuses:[{storage-init-general {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 registry.local:9001/docker.io/starlingx/ceph-config-helper:v1.15.0 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} 2019-11-04T19:16:30.039 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.038 [INFO][100749] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" HandleID="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Workload="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:16:30.048 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.048 [INFO][100749] ipam_plugin.go 220: Calico CNI IPAM handle=chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652 ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" HandleID="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Workload="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:16:30.048 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.048 [INFO][100749] ipam_plugin.go 230: Auto assigning IP ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" HandleID="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Workload="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc0001e3b60), Attrs:map[string]string{"pod":"storage-init-rbd-provisioner-dkhpx", "namespace":"kube-system", "node":"controller-1"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T19:16:30.048 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.048 [INFO][100749] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T19:16:30.052 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.052 [INFO][100749] ipam.go 309: Looking up existing affinities for 
host handle="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" host="controller-1" 2019-11-04T19:16:30.056 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.056 [INFO][100749] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" host="controller-1" 2019-11-04T19:16:30.058 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.058 [INFO][100749] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:16:30.059 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.059 [INFO][100749] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:16:30.059 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.059 [INFO][100749] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" host="controller-1" 2019-11-04T19:16:30.061 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.061 [INFO][100749] ipam.go 1244: Creating new handle: chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652 2019-11-04T19:16:30.063 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.063 [INFO][100749] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" host="controller-1" 2019-11-04T19:16:30.066 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.066 [INFO][100749] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e325/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" host="controller-1" 2019-11-04T19:16:30.066 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.066 [INFO][100749] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e325/122] handle="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" host="controller-1" 2019-11-04T19:16:30.067 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.067 [INFO][100749] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e325/122] handle="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" host="controller-1" 2019-11-04T19:16:30.067 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.067 [INFO][100749] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e325/122] ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" HandleID="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Workload="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:16:30.067 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.067 [INFO][100749] ipam_plugin.go 258: IPAM Result ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" HandleID="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Workload="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc0003aa1e0)} 2019-11-04T19:16:30.068 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.068 [INFO][100725] k8s.go 361: Populated endpoint ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" 
Namespace="kube-system" Pod="storage-init-rbd-provisioner-dkhpx" WorkloadEndpoint="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0", GenerateName:"storage-init-rbd-provisioner-", Namespace:"kube-system", SelfLink:"", UID:"b9c93a47-55c1-47b4-8bdb-edcd3d549b8f", ResourceVersion:"8163595", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708491783, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"release":"stx-rbd-provisioner", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"rbd-provisioner", "chart":"rbd-provisioner-0.1.0", "controller-uid":"f997b336-7942-4ac5-bfe2-659497591ea5", "heritage":"Tiller", "job-name":"storage-init-rbd-provisioner"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"storage-init-rbd-provisioner-dkhpx", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e325/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.rbd-provisioner"}, InterfaceName:"cali26d366578dc", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:16:30.068 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.068 [INFO][100725] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e325/128] ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Namespace="kube-system" Pod="storage-init-rbd-provisioner-dkhpx" WorkloadEndpoint="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:16:30.068 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.068 [INFO][100725] network_linux.go 76: Setting the host side veth name to cali26d366578dc ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Namespace="kube-system" Pod="storage-init-rbd-provisioner-dkhpx" WorkloadEndpoint="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:16:30.071 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.071 [INFO][100725] network_linux.go 411: Disabling IPv6 forwarding ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Namespace="kube-system" Pod="storage-init-rbd-provisioner-dkhpx" WorkloadEndpoint="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:16:30.114 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.113 [INFO][100725] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Namespace="kube-system" Pod="storage-init-rbd-provisioner-dkhpx" WorkloadEndpoint="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0", GenerateName:"storage-init-rbd-provisioner-", Namespace:"kube-system", SelfLink:"", UID:"b9c93a47-55c1-47b4-8bdb-edcd3d549b8f", ResourceVersion:"8163595", 
Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708491783, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"rbd-provisioner", "chart":"rbd-provisioner-0.1.0", "controller-uid":"f997b336-7942-4ac5-bfe2-659497591ea5", "heritage":"Tiller", "job-name":"storage-init-rbd-provisioner", "release":"stx-rbd-provisioner"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652", Pod:"storage-init-rbd-provisioner-dkhpx", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e325/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.rbd-provisioner"}, InterfaceName:"cali26d366578dc", MAC:"c2:79:cb:ec:04:3a", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:16:30.117 controller-1 kubelet[88595]: info 2019-11-04 19:16:30.117 [INFO][100725] k8s.go 420: Wrote updated endpoint to datastore ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Namespace="kube-system" Pod="storage-init-rbd-provisioner-dkhpx" WorkloadEndpoint="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:16:30.201 controller-1 containerd[12214]: info time="2019-11-04T19:16:30.200995305Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/05d2e36720b41db45f93bdd04e6008494ff607e4cdc298d3014b2dcb556913d5/shim.sock" debug=false pid=100811 2019-11-04T19:16:33.000 controller-1 ntpd[87625]: info Listen normally on 19 cali26d366578dc fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T19:16:33.000 controller-1 ntpd[87625]: debug new interface(s) found: waking up resolver 2019-11-04T19:16:35.278 controller-1 collectd[12276]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-1","degrade":"clear","resource":""} 2019-11-04T19:16:35.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 15.6% (avg per cpu); cpus: 36, Platform: 6.9% (Base: 6.5, k8s-system: 0.4), k8s-addon: 8.3 2019-11-04T19:16:35.288 controller-1 collectd[12276]: info platform memory usage: Usage: 2.5%; Reserved: 126321.9 MiB, Platform: 3189.6 MiB (Base: 2787.2, k8s-system: 402.4), k8s-addon: 6597.4 2019-11-04T19:16:35.288 controller-1 collectd[12276]: info 4K memory usage: Anon: 7.8%, Anon: 9803.6 MiB, cgroup-rss: 9791.2 MiB, Avail: 116518.3 MiB, Total: 126321.9 MiB 2019-11-04T19:16:35.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 2.81%, Anon: 1782.0 MiB, Avail: 61665.3 MiB, Total: 63447.2 MiB 2019-11-04T19:16:35.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 12.62%, Anon: 8021.7 MiB, Avail: 55558.6 MiB, Total: 63580.2 MiB 2019-11-04T19:16:45.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 9.0% (avg per cpu); cpus: 36, Platform: 2.1% (Base: 1.7, k8s-system: 0.4), k8s-addon: 6.5 2019-11-04T19:16:45.288 controller-1 collectd[12276]: info platform memory usage: Usage: 2.5%; Reserved: 126310.8 MiB, Platform: 3178.0 MiB (Base: 2770.9, k8s-system: 407.1), k8s-addon: 6706.8 2019-11-04T19:16:45.288 controller-1 collectd[12276]: info 4K memory 
usage: Anon: 7.8%, Anon: 9899.1 MiB, cgroup-rss: 9889.0 MiB, Avail: 116411.7 MiB, Total: 126310.8 MiB 2019-11-04T19:16:45.288 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 2.90%, Anon: 1841.7 MiB, Avail: 61607.7 MiB, Total: 63449.4 MiB 2019-11-04T19:16:45.288 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 12.67%, Anon: 8057.4 MiB, Avail: 55517.3 MiB, Total: 63574.8 MiB 2019-11-04T19:16:55.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 9.0% (avg per cpu); cpus: 36, Platform: 1.9% (Base: 1.5, k8s-system: 0.4), k8s-addon: 6.8 2019-11-04T19:16:55.287 controller-1 collectd[12276]: info platform memory usage: Usage: 2.5%; Reserved: 126296.6 MiB, Platform: 3185.3 MiB (Base: 2773.1, k8s-system: 412.2), k8s-addon: 6762.7 2019-11-04T19:16:55.287 controller-1 collectd[12276]: info 4K memory usage: Anon: 7.9%, Anon: 9962.9 MiB, cgroup-rss: 9952.2 MiB, Avail: 116333.7 MiB, Total: 126296.6 MiB 2019-11-04T19:16:55.287 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 2.96%, Anon: 1880.4 MiB, Avail: 61564.4 MiB, Total: 63444.7 MiB 2019-11-04T19:16:55.287 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 12.72%, Anon: 8082.6 MiB, Avail: 55483.6 MiB, Total: 63566.2 MiB 2019-11-04T19:17:04.650 controller-1 containerd[12214]: info time="2019-11-04T19:17:04.650240143Z" level=info msg="shim reaped" id=05d2e36720b41db45f93bdd04e6008494ff607e4cdc298d3014b2dcb556913d5 2019-11-04T19:17:04.660 controller-1 dockerd[12332]: info time="2019-11-04T19:17:04.660381232Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:17:05.032 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.032 [INFO][105252] plugin.go 442: Extracted identifiers ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:17:05.038 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.038 [WARNING][105252] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T19:17:05.038 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.038 [INFO][105252] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0", GenerateName:"storage-init-rbd-provisioner-", Namespace:"kube-system", SelfLink:"", UID:"b9c93a47-55c1-47b4-8bdb-edcd3d549b8f", ResourceVersion:"8164026", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708491783, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"release":"stx-rbd-provisioner", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"rbd-provisioner", "chart":"rbd-provisioner-0.1.0", "controller-uid":"f997b336-7942-4ac5-bfe2-659497591ea5", "heritage":"Tiller", "job-name":"storage-init-rbd-provisioner"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"storage-init-rbd-provisioner-dkhpx", Endpoint:"eth0", IPNetworks:[]string(nil), IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.rbd-provisioner"}, InterfaceName:"cali26d366578dc", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:17:05.038 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.038 [INFO][105252] k8s.go 477: Releasing IP address(es) ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" 2019-11-04T19:17:05.038 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.038 [INFO][105252] utils.go 171: Calico CNI releasing IP address ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" 2019-11-04T19:17:05.055 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.055 [INFO][105270] ipam_plugin.go 299: Releasing address using handleID ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" HandleID="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Workload="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:17:05.055 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.055 [INFO][105270] ipam.go 1145: Releasing all IPs with handle 'chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652' 2019-11-04T19:17:05.055 controller-1 kubelet[88595]: info I1104 19:17:05.055649 88595 reconciler.go:181] operationExecutor.UnmountVolume started for volume "rbd-provisioner-token-587hn" (UniqueName: "kubernetes.io/secret/b9c93a47-55c1-47b4-8bdb-edcd3d549b8f-rbd-provisioner-token-587hn") pod "b9c93a47-55c1-47b4-8bdb-edcd3d549b8f" (UID: "b9c93a47-55c1-47b4-8bdb-edcd3d549b8f") 2019-11-04T19:17:05.055 controller-1 kubelet[88595]: info I1104 19:17:05.055708 88595 reconciler.go:181] operationExecutor.UnmountVolume started for volume "config-volume-rbd-provisioner" (UniqueName: "kubernetes.io/configmap/b9c93a47-55c1-47b4-8bdb-edcd3d549b8f-config-volume-rbd-provisioner") pod "b9c93a47-55c1-47b4-8bdb-edcd3d549b8f" (UID: "b9c93a47-55c1-47b4-8bdb-edcd3d549b8f") 2019-11-04T19:17:05.055 controller-1 kubelet[88595]: info W1104 19:17:05.055905 88595 empty_dir.go:421] Warning: Failed to clear quota on 
/var/lib/kubelet/pods/b9c93a47-55c1-47b4-8bdb-edcd3d549b8f/volumes/kubernetes.io~configmap/config-volume-rbd-provisioner: ClearQuota called, but quotas disabled 2019-11-04T19:17:05.056 controller-1 kubelet[88595]: info I1104 19:17:05.056215 88595 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9c93a47-55c1-47b4-8bdb-edcd3d549b8f-config-volume-rbd-provisioner" (OuterVolumeSpecName: "config-volume-rbd-provisioner") pod "b9c93a47-55c1-47b4-8bdb-edcd3d549b8f" (UID: "b9c93a47-55c1-47b4-8bdb-edcd3d549b8f"). InnerVolumeSpecName "config-volume-rbd-provisioner". PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T19:17:05.062 controller-1 kubelet[88595]: info I1104 19:17:05.062656 88595 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9c93a47-55c1-47b4-8bdb-edcd3d549b8f-rbd-provisioner-token-587hn" (OuterVolumeSpecName: "rbd-provisioner-token-587hn") pod "b9c93a47-55c1-47b4-8bdb-edcd3d549b8f" (UID: "b9c93a47-55c1-47b4-8bdb-edcd3d549b8f"). InnerVolumeSpecName "rbd-provisioner-token-587hn". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T19:17:05.078 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.078 [INFO][105270] ipam_plugin.go 308: Released address using handleID ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" HandleID="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Workload="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:17:05.078 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.078 [INFO][105270] ipam_plugin.go 317: Releasing address using workloadID ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" HandleID="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Workload="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:17:05.078 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.078 [INFO][105270] ipam.go 1145: Releasing all IPs with handle 'kube-system.storage-init-rbd-provisioner-dkhpx' 2019-11-04T19:17:05.081 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.081 [INFO][105252] k8s.go 481: Cleaning up netns ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" 2019-11-04T19:17:05.082 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.082 [INFO][105252] network_linux.go 450: Calico CNI deleting device in netns /proc/100329/ns/net ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" 2019-11-04T19:17:05.000 controller-1 lldpd[12281]: warning removal request for address of fe80::ecee:eeff:feee:eeee%21, but no knowledge of it 2019-11-04T19:17:05.153 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.153 [INFO][105252] network_linux.go 467: Calico CNI deleted device in netns /proc/100329/ns/net ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" 2019-11-04T19:17:05.153 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.153 [INFO][105252] k8s.go 493: Teardown processing complete. 
ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" 2019-11-04T19:17:05.156 controller-1 kubelet[88595]: info I1104 19:17:05.156075 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "rbd-provisioner-token-587hn" (UniqueName: "kubernetes.io/secret/dc73653f-0080-41a3-83b7-78a1b5a282bb-rbd-provisioner-token-587hn") pod "rbd-provisioner-7484d49cf6-w6dzr" (UID: "dc73653f-0080-41a3-83b7-78a1b5a282bb") 2019-11-04T19:17:05.156 controller-1 kubelet[88595]: info I1104 19:17:05.156122 88595 reconciler.go:301] Volume detached for volume "config-volume-rbd-provisioner" (UniqueName: "kubernetes.io/configmap/b9c93a47-55c1-47b4-8bdb-edcd3d549b8f-config-volume-rbd-provisioner") on node "controller-1" DevicePath "" 2019-11-04T19:17:05.156 controller-1 kubelet[88595]: info I1104 19:17:05.156133 88595 reconciler.go:301] Volume detached for volume "rbd-provisioner-token-587hn" (UniqueName: "kubernetes.io/secret/b9c93a47-55c1-47b4-8bdb-edcd3d549b8f-rbd-provisioner-token-587hn") on node "controller-1" DevicePath "" 2019-11-04T19:17:05.234 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.234 [INFO][105376] plugin.go 442: Extracted identifiers ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:17:05.241 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.241 [WARNING][105376] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T19:17:05.241 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.241 [INFO][105376] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0", GenerateName:"storage-init-rbd-provisioner-", Namespace:"kube-system", SelfLink:"", UID:"b9c93a47-55c1-47b4-8bdb-edcd3d549b8f", ResourceVersion:"8164026", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708491783, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"rbd-provisioner", "chart":"rbd-provisioner-0.1.0", "controller-uid":"f997b336-7942-4ac5-bfe2-659497591ea5", "heritage":"Tiller", "job-name":"storage-init-rbd-provisioner", "release":"stx-rbd-provisioner"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"storage-init-rbd-provisioner-dkhpx", Endpoint:"eth0", IPNetworks:[]string(nil), IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.rbd-provisioner"}, InterfaceName:"cali26d366578dc", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:17:05.241 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.241 [INFO][105376] k8s.go 477: Releasing IP address(es) ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" 2019-11-04T19:17:05.241 controller-1 
kubelet[88595]: info 2019-11-04 19:17:05.241 [INFO][105376] utils.go 171: Calico CNI releasing IP address ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" 2019-11-04T19:17:05.244 controller-1 containerd[12214]: info time="2019-11-04T19:17:05.244893757Z" level=info msg="shim reaped" id=76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652 2019-11-04T19:17:05.254 controller-1 dockerd[12332]: info time="2019-11-04T19:17:05.254902325Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:17:05.259 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.258 [INFO][105413] ipam_plugin.go 299: Releasing address using handleID ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" HandleID="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Workload="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:17:05.259 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.259 [INFO][105413] ipam.go 1145: Releasing all IPs with handle 'chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652' 2019-11-04T19:17:05.265 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.265 [WARNING][105413] ipam_plugin.go 306: Asked to release address but it doesn't exist. Ignoring ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" HandleID="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Workload="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:17:05.265 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.265 [INFO][105413] ipam_plugin.go 317: Releasing address using workloadID ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" HandleID="chain.76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" Workload="controller--1-k8s-storage--init--rbd--provisioner--dkhpx-eth0" 2019-11-04T19:17:05.265 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.265 [INFO][105413] ipam.go 1145: Releasing all IPs with handle 'kube-system.storage-init-rbd-provisioner-dkhpx' 2019-11-04T19:17:05.267 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.267 [INFO][105376] k8s.go 481: Cleaning up netns ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" 2019-11-04T19:17:05.267 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.267 [INFO][105376] network_linux.go 473: veth does not exist, no need to clean up. ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" ifName="eth0" 2019-11-04T19:17:05.267 controller-1 kubelet[88595]: info 2019-11-04 19:17:05.267 [INFO][105376] k8s.go 493: Teardown processing complete. ContainerID="76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" 2019-11-04T19:17:05.277 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/dc73653f-0080-41a3-83b7-78a1b5a282bb/volumes/kubernetes.io~secret/rbd-provisioner-token-587hn. 2019-11-04T19:17:05.360 controller-1 dockerd[12332]: info time="2019-11-04T19:17:05.360410580Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. 
Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T19:17:05.365 controller-1 containerd[12214]: info time="2019-11-04T19:17:05.365887970Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b/shim.sock" debug=false pid=105435 2019-11-04T19:17:05.434 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 9.1% (avg per cpu); cpus: 36, Platform: 1.7% (Base: 1.3, k8s-system: 0.4), k8s-addon: 6.7 2019-11-04T19:17:05.483 controller-1 collectd[12276]: info platform memory usage: Usage: 2.5%; Reserved: 126286.5 MiB, Platform: 3182.2 MiB (Base: 2773.7, k8s-system: 408.5), k8s-addon: 6787.8 2019-11-04T19:17:05.483 controller-1 collectd[12276]: info 4K memory usage: Anon: 7.9%, Anon: 9984.8 MiB, cgroup-rss: 9974.1 MiB, Avail: 116301.7 MiB, Total: 126286.5 MiB 2019-11-04T19:17:05.483 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 3.01%, Anon: 1910.1 MiB, Avail: 61530.6 MiB, Total: 63440.7 MiB 2019-11-04T19:17:05.483 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 12.70%, Anon: 8074.8 MiB, Avail: 55488.2 MiB, Total: 63563.0 MiB 2019-11-04T19:17:05.957 controller-1 kubelet[88595]: info W1104 19:17:05.956945 88595 pod_container_deletor.go:75] Container "76db528ef6c4a8e7d62708ec8693d99f24e10c673bcda52630a2d0435d9b3652" not found in pod's containers 2019-11-04T19:17:07.000 controller-1 ntpd[87625]: info Deleting interface #19 cali26d366578dc, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=34 secs 2019-11-04T19:17:15.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 8.7% (avg per cpu); cpus: 36, Platform: 2.2% (Base: 1.9, k8s-system: 0.3), k8s-addon: 6.4 2019-11-04T19:17:15.288 controller-1 collectd[12276]: info platform memory usage: Usage: 2.5%; Reserved: 126281.3 MiB, Platform: 3198.8 MiB (Base: 2785.1, k8s-system: 413.7), k8s-addon: 6831.6 2019-11-04T19:17:15.288 controller-1 collectd[12276]: info 4K memory usage: Anon: 8.0%, Anon: 10046.3 MiB, cgroup-rss: 10034.5 MiB, Avail: 116235.1 MiB, Total: 126281.3 MiB 2019-11-04T19:17:15.288 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 3.09%, Anon: 1961.9 MiB, Avail: 61479.6 MiB, Total: 63441.5 MiB 2019-11-04T19:17:15.288 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 12.72%, Anon: 8084.4 MiB, Avail: 55476.1 MiB, Total: 63560.4 MiB 2019-11-04T19:17:17.514 controller-1 kubelet[88595]: info E1104 19:17:17.514325 88595 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "controller-1": Get https://[fd00:205::2]:6443/api/v1/nodes/controller-1?resourceVersion=0&timeout=4s: context deadline exceeded 2019-11-04T19:17:21.514 controller-1 kubelet[88595]: info E1104 19:17:21.514765 88595 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "controller-1": Get https://[fd00:205::2]:6443/api/v1/nodes/controller-1?timeout=4s: net/http: request canceled (Client.Timeout exceeded while awaiting headers) 2019-11-04T19:17:21.515 controller-1 kubelet[88595]: info W1104 19:17:21.515009 88595 status_manager.go:529] Failed to get status for pod "mon-metricbeat-tqzgn_monitor(b21e12f7-bf6b-435b-84f6-955f2ffcbb7c)": Get https://[fd00:205::2]:6443/api/v1/namespaces/monitor/pods/mon-metricbeat-tqzgn: read tcp [fd00:205::4]:50717->[fd00:205::2]:6443: use of closed network connection 2019-11-04T19:17:21.515 controller-1 
kubelet[88595]: info E1104 19:17:21.515220 88595 controller.go:170] failed to update node lease, error: Put https://[fd00:205::2]:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/controller-1?timeout=4s: read tcp [fd00:205::4]:50717->[fd00:205::2]:6443: use of closed network connection 2019-11-04T19:17:25.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 7.1% (avg per cpu); cpus: 36, Platform: 1.3% (Base: 1.1, k8s-system: 0.2), k8s-addon: 5.7 2019-11-04T19:17:25.289 controller-1 collectd[12276]: info platform memory usage: Usage: 2.5%; Reserved: 126276.7 MiB, Platform: 3201.4 MiB (Base: 2787.6, k8s-system: 413.8), k8s-addon: 6860.7 2019-11-04T19:17:25.289 controller-1 collectd[12276]: info 4K memory usage: Anon: 8.0%, Anon: 10080.5 MiB, cgroup-rss: 10066.3 MiB, Avail: 116196.2 MiB, Total: 126276.7 MiB 2019-11-04T19:17:25.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 3.11%, Anon: 1975.9 MiB, Avail: 61469.5 MiB, Total: 63445.3 MiB 2019-11-04T19:17:25.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 12.75%, Anon: 8104.6 MiB, Avail: 55448.2 MiB, Total: 63552.9 MiB 2019-11-04T19:17:25.515 controller-1 kubelet[88595]: info E1104 19:17:25.515031 88595 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "controller-1": Get https://[fd00:205::2]:6443/api/v1/nodes/controller-1?timeout=4s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) 2019-11-04T19:17:25.515 controller-1 kubelet[88595]: info E1104 19:17:25.515448 88595 controller.go:170] failed to update node lease, error: Put https://[fd00:205::2]:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/controller-1?timeout=4s: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) 2019-11-04T19:17:26.267 controller-1 kubelet[88595]: info E1104 19:17:26.267014 88595 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "controller-1": Get https://[fd00:205::2]:6443/api/v1/nodes/controller-1?timeout=4s: dial tcp [fd00:205::2]:6443: connect: no route to host 2019-11-04T19:17:26.267 controller-1 kubelet[88595]: info E1104 19:17:26.267013 88595 controller.go:170] failed to update node lease, error: Put https://[fd00:205::2]:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/controller-1?timeout=4s: dial tcp [fd00:205::2]:6443: connect: no route to host 2019-11-04T19:17:27.000 controller-1 ntpd[87625]: info Listen normally on 20 pxeboot0 192.168.202.2 UDP 123 2019-11-04T19:17:27.000 controller-1 ntpd[87625]: debug new interface(s) found: waking up resolver 2019-11-04T19:17:28.533 controller-1 kubelet[88595]: info W1104 19:17:28.533571 88595 reflector.go:299] object-"monitor"/"mon-metricbeat": watch of *v1.ConfigMap ended with: too old resource version: 8162718 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535001 88595 reflector.go:299] object-"monitor"/"mon-metricbeat-token-5vdfc": watch of *v1.Secret ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535073 88595 reflector.go:299] object-"kube-system"/"default-registry-key": watch of *v1.Secret ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535332 88595 reflector.go:299] object-"kube-system"/"kube-proxy": watch of 
*v1.ConfigMap ended with: too old resource version: 8162718 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535357 88595 reflector.go:299] object-"monitor"/"default-token-88gsr": watch of *v1.Secret ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535357 88595 reflector.go:299] object-"monitor"/"mon-filebeat": watch of *v1.Secret ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535366 88595 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.CSIDriver ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535374 88595 reflector.go:299] object-"monitor"/"mon-filebeat-token-z6rf8": watch of *v1.Secret ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535370 88595 reflector.go:299] object-"monitor"/"mon-metricbeat-daemonset-config": watch of *v1.Secret ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535375 88595 reflector.go:299] object-"kube-system"/"rbd-provisioner-token-587hn": watch of *v1.Secret ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535456 88595 reflector.go:299] object-"monitor"/"mon-metricbeat-daemonset-modules": watch of *v1.Secret ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535463 88595 reflector.go:299] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: watch of *v1.Service ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535456 88595 reflector.go:299] object-"kube-system"/"default-token-jxtxx": watch of *v1.Secret ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535493 88595 reflector.go:299] object-"kube-system"/"multus-token-dtj6m": watch of *v1.Secret ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535790 88595 reflector.go:299] object-"kube-system"/"kube-proxy-token-9m2nq": watch of *v1.Secret ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535826 88595 reflector.go:299] object-"monitor"/"mon-filebeat": watch of *v1.ConfigMap ended with: too old resource version: 8162718 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535821 88595 reflector.go:299] object-"kube-system"/"calico-config": watch of *v1.ConfigMap ended with: too old resource version: 8162718 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535858 88595 reflector.go:299] object-"monitor"/"mon-nginx-ingress-token-dgbmq": watch of *v1.Secret ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.535 controller-1 kubelet[88595]: info W1104 19:17:28.535880 88595 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.RuntimeClass ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.536 controller-1 kubelet[88595]: info W1104 19:17:28.536082 88595 
reflector.go:299] object-"kube-system"/"calico-node-token-46p7c": watch of *v1.Secret ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.536 controller-1 kubelet[88595]: info W1104 19:17:28.536223 88595 reflector.go:299] object-"kube-system"/"registry-local-secret": watch of *v1.Secret ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:28.539 controller-1 kubelet[88595]: info W1104 19:17:28.539040 88595 reflector.go:299] object-"kube-system"/"coredns-token-x97rb": watch of *v1.Secret ended with: too old resource version: 8143357 (8162843) 2019-11-04T19:17:29.000 controller-1 ntpd[87625]: info Listen normally on 21 vlan108 fd00:204::2 UDP 123 2019-11-04T19:17:29.000 controller-1 ntpd[87625]: info Listen normally on 22 vlan109 fd00:205::2 UDP 123 2019-11-04T19:17:29.000 controller-1 ntpd[87625]: info fd00:204::3 interface fd00:204::4 -> fd00:204::2 2019-11-04T19:17:29.000 controller-1 ntpd[87625]: debug new interface(s) found: waking up resolver 2019-11-04T19:17:29.502 controller-1 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked. 2019-11-04T19:17:29.514 controller-1 systemd[1]: info Starting v2 Registry token server for Docker... 2019-11-04T19:17:29.576 controller-1 systemd[1]: info Started v2 Registry token server for Docker. 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info E1104 19:17:30.267411 88595 kubelet_node_status.go:388] Error updating node status, will retry: error getting node "controller-1": Get https://[fd00:205::2]:6443/api/v1/nodes/controller-1?timeout=4s: context deadline exceeded (Client.Timeout exceeded while awaiting headers) 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info E1104 19:17:30.267449 88595 kubelet_node_status.go:375] Unable to update node status: update node status exceeds retry count 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info E1104 19:17:30.267411 88595 controller.go:170] failed to update node lease, error: Put https://[fd00:205::2]:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/controller-1?timeout=4s: net/http: request canceled (Client.Timeout exceeded while awaiting headers) 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info W1104 19:17:30.267462 88595 reflector.go:299] object-"monitor"/"default-token-88gsr": watch of *v1.Secret ended with: very short watch: object-"monitor"/"default-token-88gsr": Unexpected watch close - watch lasted less than a second and no items received 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info W1104 19:17:30.267464 88595 reflector.go:299] object-"monitor"/"mon-metricbeat-daemonset-modules": watch of *v1.Secret ended with: very short watch: object-"monitor"/"mon-metricbeat-daemonset-modules": Unexpected watch close - watch lasted less than a second and no items received 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info W1104 19:17:30.267501 88595 reflector.go:299] object-"monitor"/"mon-metricbeat-token-5vdfc": watch of *v1.Secret ended with: very short watch: object-"monitor"/"mon-metricbeat-token-5vdfc": Unexpected watch close - watch lasted less than a second and no items received 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info W1104 19:17:30.267510 88595 reflector.go:299] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: 
info W1104 19:17:30.267539 88595 reflector.go:299] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: watch of *v1.Service ended with: very short watch: k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Unexpected watch close - watch lasted less than a second and no items received 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info W1104 19:17:30.267519 88595 reflector.go:299] object-"kube-system"/"rbd-provisioner-token-587hn": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"rbd-provisioner-token-587hn": Unexpected watch close - watch lasted less than a second and no items received 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info W1104 19:17:30.267568 88595 reflector.go:299] object-"kube-system"/"default-registry-key": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"default-registry-key": Unexpected watch close - watch lasted less than a second and no items received 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info E1104 19:17:30.267577 88595 event.go:246] Unable to write event: 'Post https://[fd00:205::2]:6443/api/v1/namespaces/kube-system/events: read tcp [fd00:205::4]:52701->[fd00:205::2]:6443: use of closed network connection' (may retry after sleeping) 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info W1104 19:17:30.267590 88595 reflector.go:299] object-"kube-system"/"default-token-jxtxx": watch of *v1.Secret ended with: very short watch: object-"kube-system"/"default-token-jxtxx": Unexpected watch close - watch lasted less than a second and no items received 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info W1104 19:17:30.267614 88595 reflector.go:299] object-"monitor"/"mon-filebeat": watch of *v1.Secret ended with: very short watch: object-"monitor"/"mon-filebeat": Unexpected watch close - watch lasted less than a second and no items received 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info W1104 19:17:30.267629 88595 reflector.go:299] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: very short watch: object-"kube-system"/"kube-proxy": Unexpected watch close - watch lasted less than a second and no items received 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info W1104 19:17:30.267599 88595 reflector.go:299] object-"monitor"/"mon-filebeat-token-z6rf8": watch of *v1.Secret ended with: very short watch: object-"monitor"/"mon-filebeat-token-z6rf8": Unexpected watch close - watch lasted less than a second and no items received 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info E1104 19:17:30.267650 88595 desired_state_of_world_populator.go:312] Error processing volume "mon-elasticsearch-master" for pod "mon-elasticsearch-master-1_monitor(89ee0fbc-6074-4195-82e0-63e4a478fa96)": error processing PVC monitor/mon-elasticsearch-master-mon-elasticsearch-master-1: failed to fetch PVC from API server: Get https://[fd00:205::2]:6443/api/v1/namespaces/monitor/persistentvolumeclaims/mon-elasticsearch-master-mon-elasticsearch-master-1: read tcp [fd00:205::4]:52701->[fd00:205::2]:6443: use of closed network connection 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info W1104 19:17:30.267669 88595 reflector.go:299] object-"monitor"/"mon-metricbeat-daemonset-config": watch of *v1.Secret ended with: very short watch: object-"monitor"/"mon-metricbeat-daemonset-config": Unexpected watch close - watch lasted less than a second and no items received 2019-11-04T19:17:30.267 controller-1 kubelet[88595]: info W1104 19:17:30.267676 88595 reflector.go:299] 
object-"monitor"/"mon-metricbeat": watch of *v1.ConfigMap ended with: very short watch: object-"monitor"/"mon-metricbeat": Unexpected watch close - watch lasted less than a second and no items received 2019-11-04T19:17:30.509 controller-1 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked. 2019-11-04T19:17:30.509 controller-1 systemd[1]: warning Cannot add dependency job for unit dev-hugepages.mount, ignoring: Unit is masked. 2019-11-04T19:17:30.525 controller-1 systemd[1]: info Starting Etcd Server... 2019-11-04T19:17:30.541 controller-1 systemd[1]: info Starting v2 Registry server for Docker... 2019-11-04T19:17:30.000 controller-1 nslcd[84559]: warning [b141f2] ldap_search_ext() failed: Can't contact LDAP server: Connection reset by peer 2019-11-04T19:17:30.000 controller-1 nslcd[84559]: warning [b141f2] no available LDAP server found, sleeping 1 seconds 2019-11-04T19:17:30.598 controller-1 systemd[1]: info Started v2 Registry server for Docker. 2019-11-04T19:17:30.661 controller-1 registry[109742]: info time="2019-11-04T19:17:30Z" level=warning msg="No HTTP secret provided - generated random secret. This may cause problems with uploads if multiple registries are behind a load-balancer. To provide a shared secret, fill in http.secret in the configuration file or set the REGISTRY_HTTP_SECRET environment variable." go.version=go1.11.2 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:17:30.661 controller-1 registry[109742]: info time="2019-11-04T19:17:30Z" level=info msg="Starting upload purge in 34m0s" go.version=go1.11.2 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:17:30.661 controller-1 registry[109742]: info time="2019-11-04T19:17:30Z" level=info msg="redis not configured" go.version=go1.11.2 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:17:30.672 controller-1 registry[109742]: info time="2019-11-04T19:17:30Z" level=info msg="using inmemory blob descriptor cache" go.version=go1.11.2 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:17:30.673 controller-1 registry[109742]: info time="2019-11-04T19:17:30Z" level=info msg="listening on [fd00:204::2]:9001, tls" go.version=go1.11.2 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:17:31.000 controller-1 nslcd[84559]: info [b141f2] connected to LDAP server ldap://controller 2019-11-04T19:17:31.650 controller-1 systemd[1]: info Started Etcd Server. 
2019-11-04T19:17:34.000 controller-1 nslcd[84559]: warning [e2a9e3] ldap_search_ext() failed: Can't contact LDAP server: Connection reset by peer 2019-11-04T19:17:34.000 controller-1 nslcd[84559]: warning [e2a9e3] no available LDAP server found, sleeping 1 seconds 2019-11-04T19:17:34.000 controller-1 ntpd[87625]: info Listen normally on 23 eno1 2620:10a:a001:a103::234 UDP 123 2019-11-04T19:17:34.000 controller-1 ntpd[87625]: info Listen normally on 24 vlan108 fd00:204::5 UDP 123 2019-11-04T19:17:34.000 controller-1 ntpd[87625]: info 64:ff9b::4559:cfc7 interface 2620:10a:a001:a103::233 -> 2620:10a:a001:a103::234 2019-11-04T19:17:34.000 controller-1 ntpd[87625]: info 64:ff9b::c632:ee9c interface 2620:10a:a001:a103::233 -> 2620:10a:a001:a103::234 2019-11-04T19:17:34.000 controller-1 ntpd[87625]: info fd00:204::3 interface fd00:204::2 -> fd00:204::5 2019-11-04T19:17:34.000 controller-1 ntpd[87625]: info 64:ff9b::ab42:617e interface 2620:10a:a001:a103::233 -> 2620:10a:a001:a103::234 2019-11-04T19:17:34.000 controller-1 ntpd[87625]: debug new interface(s) found: waking up resolver 2019-11-04T19:17:34.000 controller-1 nslcd[84559]: warning [45e146] ldap_search_ext() failed: Can't contact LDAP server: Connection reset by peer 2019-11-04T19:17:34.000 controller-1 nslcd[84559]: warning [45e146] no available LDAP server found, sleeping 1 seconds 2019-11-04T19:17:34.000 controller-1 nslcd[84559]: warning [5f007c] ldap_search_ext() failed: Can't contact LDAP server: Connection reset by peer 2019-11-04T19:17:34.000 controller-1 nslcd[84559]: warning [5f007c] no available LDAP server found, sleeping 1 seconds 2019-11-04T19:17:35.000 controller-1 nslcd[84559]: info [e2a9e3] connected to LDAP server ldap://controller 2019-11-04T19:17:35.279 controller-1 collectd[12276]: info alarm notifier reading: 18.88 % usage - /var/lib/docker-distribution 2019-11-04T19:17:35.280 controller-1 collectd[12276]: info alarm notifier reading: 10.86 % usage - /opt/etcd 2019-11-04T19:17:35.280 controller-1 collectd[12276]: info alarm notifier reading: 0.38 % usage - /opt/platform 2019-11-04T19:17:35.280 controller-1 collectd[12276]: info alarm notifier reading: 0.25 % usage - /opt/extension 2019-11-04T19:17:35.280 controller-1 collectd[12276]: info alarm notifier reading: 0.31 % usage - /var/lib/rabbitmq 2019-11-04T19:17:35.280 controller-1 collectd[12276]: info alarm notifier reading: 0.26 % usage - /var/lib/postgresql 2019-11-04T19:17:35.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 7.5% (avg per cpu); cpus: 36, Platform: 5.8% (Base: 3.6, k8s-system: 2.2), k8s-addon: 1.6 2019-11-04T19:17:35.289 controller-1 collectd[12276]: info platform memory usage: Usage: 2.9%; Reserved: 126242.5 MiB, Platform: 3609.0 MiB (Base: 3129.2, k8s-system: 479.8), k8s-addon: 6915.2 2019-11-04T19:17:35.289 controller-1 collectd[12276]: info 4K memory usage: Anon: 8.4%, Anon: 10586.1 MiB, cgroup-rss: 10528.3 MiB, Avail: 115656.5 MiB, Total: 126242.5 MiB 2019-11-04T19:17:35.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 3.35%, Anon: 2125.6 MiB, Avail: 61331.0 MiB, Total: 63456.6 MiB 2019-11-04T19:17:35.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 13.31%, Anon: 8460.5 MiB, Avail: 55101.2 MiB, Total: 63561.6 MiB 2019-11-04T19:17:36.000 controller-1 dnsmasq[111976]: info started, version 2.76 cachesize 150 2019-11-04T19:17:36.000 controller-1 dnsmasq[111976]: info compile time options: IPv6 GNU-getopt DBus no-i18n IDN DHCP DHCPv6 no-Lua TFTP no-conntrack ipset auth 
no-DNSSEC loop-detect inotify 2019-11-04T19:17:36.000 controller-1 dnsmasq-dhcp[111976]: info DHCP, IP range 192.168.202.2 -- 192.168.202.254, lease time 1h 2019-11-04T19:17:36.000 controller-1 dnsmasq-dhcp[111976]: info DHCPv6, static leases only on fd00:204::2, lease time 1d 2019-11-04T19:17:36.000 controller-1 dnsmasq-tftp[111976]: info TFTP enabled 2019-11-04T19:17:36.000 controller-1 dnsmasq[111976]: info using nameserver fd00:207::a#53 for domain cluster.local 2019-11-04T19:17:36.000 controller-1 dnsmasq[111976]: info using local addresses only for unqualified names 2019-11-04T19:17:36.000 controller-1 dnsmasq[111976]: info reading /etc/resolv.conf 2019-11-04T19:17:36.000 controller-1 dnsmasq[111976]: info using nameserver fd00:207::a#53 for domain cluster.local 2019-11-04T19:17:36.000 controller-1 dnsmasq[111976]: info using local addresses only for unqualified names 2019-11-04T19:17:36.000 controller-1 dnsmasq[111976]: warning ignoring nameserver fd00:204::2 - local interface 2019-11-04T19:17:36.000 controller-1 dnsmasq[111976]: info using nameserver 2620:10a:a001:a103::2#53 2019-11-04T19:17:36.000 controller-1 dnsmasq[111976]: info read /etc/hosts - 10 addresses 2019-11-04T19:17:36.000 controller-1 dnsmasq[111976]: info read /opt/platform/config/19.10//dnsmasq.addn_hosts_dc - 0 addresses 2019-11-04T19:17:36.000 controller-1 dnsmasq[111976]: info read /opt/platform/config/19.10//dnsmasq.addn_hosts - 1 addresses 2019-11-04T19:17:36.000 controller-1 dnsmasq-dhcp[111976]: info read /opt/platform/config/19.10//dnsmasq.hosts 2019-11-04T19:17:36.000 controller-1 nscd: notice 95151 monitoring file `/etc/resolv.conf` (5) 2019-11-04T19:17:36.000 controller-1 nscd: notice 95151 monitoring directory `/etc` (2) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: err Duplicate IPv4 address detected, some interfaces may not be visible in IP-MIB 2019-11-04T19:17:36.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:36.639 111981 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:67:20' with ip 'fd00:204::a4e4:77a2:377e:a63c' 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info init_cgtsAgentPlugin start 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info MIB registration: wrsAlarmActiveTable 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info MIB registration: wrsEventTable 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info get alarm database handler 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key ([DEFAULT]), value ([DEFAULT]) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (use_syslog), value (True) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (syslog_log_facility), value (local2) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmSnmpUtils.cpp(110): Set trap entries: (0) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (trap_destinations), value () 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (region_name), value (RegionOne) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (event_log_max_size), value (4000) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (system_name), value (yow-cgcs-wildcat-35-60) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key ([cors]), value ([cors]) 2019-11-04T19:17:36.000 controller-1 
snmpd[112310]: info fmConfig.cpp(83): Config key ([database]), value ([database]) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (connection), value (postgresql+psycopg2://admin-fm:82bfce2c70beTi0*@[fd00:204::2]/fm) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (connection_recycle_time), value (60) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (max_pool_size), value (1) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (max_overflow), value (20) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key ([healthcheck]), value ([healthcheck]) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key ([oslo_middleware]), value ([oslo_middleware]) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (enable_proxy_headers_parsing), value (True) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key ([keystone_authtoken]), value ([keystone_authtoken]) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (user_domain_name), value (Default) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (project_name), value (services) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (auth_uri), value (http://[fd00:204::2]:5000) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (auth_type), value (password) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (username), value (fm) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (region_name), value (RegionOne) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (project_domain_name), value (Default) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (auth_url), value (http://[fd00:204::2]:5000) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (password), value (b0ae601fda2bTi0*) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key ([sysinv]), value ([sysinv]) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (catalog_info), value (platform:sysinv:internalURL) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (os_region_name), value (RegionOne) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key ([api]), value ([api]) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (api_paste_config), value (/etc/fm/api-paste.ini) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (api_workers), value (20) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (bind_port), value (18002) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info fmConfig.cpp(83): Config key (bind_host), value (fd00:204::4) 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: info init_snmpAuditPlugin 2019-11-04T19:17:36.000 controller-1 snmpd[112310]: warning Warning: no access control information configured. (Config search path: /etc/snmp:/usr/share/snmp:/usr/lib64/snmp:/root/.snmp) It's unlikely this agent can serve any useful purpose in this state. 
Run "snmpconf -g basic_setup" to help you configure the snmpd.conf file for this agent. 2019-11-04T19:17:36.000 controller-1 snmpd[112351]: info NET-SNMP version 5.7.2 2019-11-04T19:17:37.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:37.262 112355 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:67:20' with ip 'fd00:204::a4e4:77a2:377e:a63c' 2019-11-04T19:17:37.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:37.893 112612 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:66:a8' with ip 'fd00:204::e6c5:664e:a972:2e57' 2019-11-04T19:17:38.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:38.501 112787 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:65:c0' with ip 'fd00:204::334:7b78:b43c:4446' 2019-11-04T19:17:39.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:39.031 112865 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:67:70' with ip 'fd00:204::61d2:4f2f:eb61:78a1' 2019-11-04T19:17:39.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:39.599 112883 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:64:c8' with ip 'fd00:204::6031:d375:e981:9cce' 2019-11-04T19:17:40.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:40.172 113043 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:a0:15:d0' with ip 'fd00:204::e425:6be4:64c1:8ee4' 2019-11-04T19:17:40.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:40.704 113091 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:65:f0' with ip 'fd00:204::92d9:578f:f7c:e755' 2019-11-04T19:17:41.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:41.292 113269 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:a0:15:60' with ip 'fd00:204::d30:8281:294e:1413' 2019-11-04T19:17:41.377 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.377 [INFO][113774] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"kube-system", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"rbd-provisioner-7484d49cf6-w6dzr", ContainerID:"0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b"}} 2019-11-04T19:17:41.393 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.393 [INFO][113774] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-eth0 rbd-provisioner-7484d49cf6- kube-system dc73653f-0080-41a3-83b7-78a1b5a282bb 8164040 0 2019-11-04 19:17:04 +0000 UTC map[app:rbd-provisioner pod-template-hash:7484d49cf6 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:rbd-provisioner] map[] [] nil [] } {k8s controller-1 rbd-provisioner-7484d49cf6-w6dzr eth0 [] [] [kns.kube-system ksa.kube-system.rbd-provisioner] cali13622e2691b []}} ContainerID="0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-w6dzr" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-" 2019-11-04T19:17:41.393 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.393 [INFO][113774] k8s.go 60: Extracted identifiers for CmdAddK8s 
ContainerID="0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-w6dzr" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-eth0" 2019-11-04T19:17:41.396 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.396 [INFO][113774] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:kube-system,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/kube-system,UID:5d016a6c-19e8-4b97-88a9-b6113a3cb736,ResourceVersion:5,Generation:0,CreationTimestamp:2019-10-25 15:09:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T19:17:41.397 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.397 [INFO][113774] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:rbd-provisioner-7484d49cf6-w6dzr,GenerateName:rbd-provisioner-7484d49cf6-,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/rbd-provisioner-7484d49cf6-w6dzr,UID:dc73653f-0080-41a3-83b7-78a1b5a282bb,ResourceVersion:8164040,Generation:0,CreationTimestamp:2019-11-04 19:17:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: rbd-provisioner,pod-template-hash: 7484d49cf6,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet rbd-provisioner-7484d49cf6 4293aea8-b2ec-41f5-b635-07aeb9f394f9 0xc00055eeb7 0xc00055eeb8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{rbd-provisioner-token-587hn {nil nil nil nil nil SecretVolumeSource{SecretName:rbd-provisioner-token-587hn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{rbd-provisioner registry.local:9001/quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 [] [] [] [] [{PROVISIONER_NAME ceph.com/rbd nil}] {map[] map[]} [{rbd-provisioner-token-587hn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: ,},ServiceAccountName:rbd-provisioner,DeprecatedServiceAccount:rbd-provisioner,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[{default-registry-key}],Hostname:,Subdomain:,Affinity:&Affinity{NodeAffinity:nil,PodAffinity:nil,PodAntiAffinity:&PodAntiAffinity{RequiredDuringSchedulingIgnoredDuringExecution:[{LabelSelector{MatchLabels:map[string]string{},MatchExpressions:[{app In [rbd-provisioner]}],} [] kubernetes.io/hostname}],PreferredDuringSchedulingIgnoredDuringExecution:[],},},SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00055f080} {node.kubernetes.io/unreachable Exists NoExecute 0xc00055f0a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 
19:17:04 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:17:04 +0000 UTC ContainersNotReady containers with unready status: [rbd-provisioner]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:17:04 +0000 UTC ContainersNotReady containers with unready status: [rbd-provisioner]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:17:04 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 19:17:04 +0000 UTC,ContainerStatuses:[{rbd-provisioner {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 registry.local:9001/quay.io/external_storage/rbd-provisioner:v2.1.1-k8s1.11 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} 2019-11-04T19:17:41.416 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.416 [INFO][113832] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" HandleID="chain.0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-eth0" 2019-11-04T19:17:41.424 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.424 [INFO][113832] ipam_plugin.go 220: Calico CNI IPAM handle=chain.0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b ContainerID="0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" HandleID="chain.0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-eth0" 2019-11-04T19:17:41.424 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.424 [INFO][113832] ipam_plugin.go 230: Auto assigning IP ContainerID="0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" HandleID="chain.0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc0002cbb70), Attrs:map[string]string{"node":"controller-1", "pod":"rbd-provisioner-7484d49cf6-w6dzr", "namespace":"kube-system"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T19:17:41.424 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.424 [INFO][113832] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T19:17:41.428 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.428 [INFO][113832] ipam.go 309: Looking up existing affinities for host handle="chain.0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" host="controller-1" 2019-11-04T19:17:41.432 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.432 [INFO][113832] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" host="controller-1" 2019-11-04T19:17:41.434 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.434 [INFO][113832] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:17:41.454 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.454 [INFO][113832] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:17:41.454 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.454 [INFO][113832] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 
handle="chain.0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" host="controller-1" 2019-11-04T19:17:41.474 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.474 [INFO][113832] ipam.go 1244: Creating new handle: chain.0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b 2019-11-04T19:17:41.476 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.476 [INFO][113832] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" host="controller-1" 2019-11-04T19:17:41.479 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.479 [INFO][113832] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e338/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" host="controller-1" 2019-11-04T19:17:41.479 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.479 [INFO][113832] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e338/122] handle="chain.0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" host="controller-1" 2019-11-04T19:17:41.499 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.499 [INFO][113832] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e338/122] handle="chain.0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" host="controller-1" 2019-11-04T19:17:41.499 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.499 [INFO][113832] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e338/122] ContainerID="0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" HandleID="chain.0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-eth0" 2019-11-04T19:17:41.499 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.499 [INFO][113832] ipam_plugin.go 258: IPAM Result ContainerID="0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" HandleID="chain.0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" Workload="controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc0006a0180)} 2019-11-04T19:17:41.501 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.500 [INFO][113774] k8s.go 361: Populated endpoint ContainerID="0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-w6dzr" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-eth0", GenerateName:"rbd-provisioner-7484d49cf6-", Namespace:"kube-system", SelfLink:"", UID:"dc73653f-0080-41a3-83b7-78a1b5a282bb", ResourceVersion:"8164040", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708491824, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"pod-template-hash":"7484d49cf6", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"rbd-provisioner", "app":"rbd-provisioner"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"rbd-provisioner-7484d49cf6-w6dzr", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e338/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.rbd-provisioner"}, InterfaceName:"cali13622e2691b", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:17:41.501 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.501 [INFO][113774] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e338/128] ContainerID="0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-w6dzr" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-eth0" 2019-11-04T19:17:41.501 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.501 [INFO][113774] network_linux.go 76: Setting the host side veth name to cali13622e2691b ContainerID="0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-w6dzr" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-eth0" 2019-11-04T19:17:41.504 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.503 [INFO][113774] network_linux.go 411: Disabling IPv6 forwarding ContainerID="0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-w6dzr" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-eth0" 2019-11-04T19:17:41.545 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.545 [INFO][113774] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-w6dzr" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-eth0", GenerateName:"rbd-provisioner-7484d49cf6-", Namespace:"kube-system", SelfLink:"", UID:"dc73653f-0080-41a3-83b7-78a1b5a282bb", ResourceVersion:"8164040", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708491824, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"rbd-provisioner", "pod-template-hash":"7484d49cf6", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"rbd-provisioner"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b", Pod:"rbd-provisioner-7484d49cf6-w6dzr", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e338/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.rbd-provisioner"}, InterfaceName:"cali13622e2691b", MAC:"5e:16:47:ad:6c:04", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:17:41.550 controller-1 kubelet[88595]: info 2019-11-04 19:17:41.549 
[INFO][113774] k8s.go 420: Wrote updated endpoint to datastore ContainerID="0df8ebc0136f3e47c0700685f9f5e5db035a2ddfc08a572cbd5d3f2e6bf15b2b" Namespace="kube-system" Pod="rbd-provisioner-7484d49cf6-w6dzr" WorkloadEndpoint="controller--1-k8s-rbd--provisioner--7484d49cf6--w6dzr-eth0" 2019-11-04T19:17:41.625 controller-1 containerd[12214]: info time="2019-11-04T19:17:41.625883393Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4e16ea6ea9de22ef5123387f346c6dd0a06757a3cf6699991867154477a01103/shim.sock" debug=false pid=113959 2019-11-04T19:17:41.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:41.902 113742 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:a0:17:58' with ip 'fd00:204::7672:30ea:4635:85c9' 2019-11-04T19:17:42.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:42.470 114116 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9f:71:08' with ip 'fd00:204::cfbc:a4c1:8864:e140' 2019-11-04T19:17:43.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:43.031 114207 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:a0:16:d0' with ip 'fd00:204::6a91:a499:f12f:3877' 2019-11-04T19:17:43.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:43.635 114280 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9f:74:c0' with ip 'fd00:204::56c0:18c9:f477:67f9' 2019-11-04T19:17:44.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:44.241 114351 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:a0:19:68' with ip 'fd00:204::f7e1:1a09:6ba7:92e2' 2019-11-04T19:17:44.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:44.835 114429 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:a0:11:00' with ip 'fd00:204::2966:7701:a798:3e3a' 2019-11-04T19:17:45.000 controller-1 ntpd[87625]: info Listen normally on 25 cali13622e2691b fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T19:17:45.000 controller-1 ntpd[87625]: debug new interface(s) found: waking up resolver 2019-11-04T19:17:45.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 27.0% (avg per cpu); cpus: 36, Platform: 24.2% (Base: 23.0, k8s-system: 1.2), k8s-addon: 2.7 2019-11-04T19:17:45.284 controller-1 collectd[12276]: info alarm notifier reading: 27.01 % usage - Platform CPU 2019-11-04T19:17:45.289 controller-1 collectd[12276]: info alarm notifier reading: 11.61 % usage - Platform Memory total 2019-11-04T19:17:45.289 controller-1 collectd[12276]: info platform memory usage: Usage: 6.1%; Reserved: 126147.7 MiB, Platform: 7641.0 MiB (Base: 7123.4, k8s-system: 517.6), k8s-addon: 6930.4 2019-11-04T19:17:45.289 controller-1 collectd[12276]: info 4K memory usage: Anon: 11.6%, Anon: 14644.9 MiB, cgroup-rss: 14575.4 MiB, Avail: 111502.8 MiB, Total: 126147.7 MiB 2019-11-04T19:17:45.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 6.65%, Anon: 4214.4 MiB, Avail: 59204.8 MiB, Total: 63419.2 MiB 2019-11-04T19:17:45.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 16.42%, Anon: 10430.5 MiB, Avail: 53091.3 MiB, Total: 63521.8 MiB 2019-11-04T19:17:45.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:45.410 114573 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:63:30' with ip 'fd00:204::5458:f2d4:abec:4f3e' 2019-11-04T19:17:45.000 controller-1 dnsmasq-script[111976]: debug sysinv 
2019-11-04 19:17:45.995 114584 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:a0:0b:58' with ip 'fd00:204::61c:24c2:557d:5aa7' 2019-11-04T19:17:46.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:46.574 114664 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9f:70:50' with ip 'fd00:204::37ad:cb:6285:8372' 2019-11-04T19:17:47.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:47.173 114787 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:65:28' with ip 'fd00:204::552b:2bbe:c8fa:502f' 2019-11-04T19:17:47.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:47.788 114824 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:66:b8' with ip 'fd00:204::78fe:8421:1cd1:51db' 2019-11-04T19:17:48.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:17:48.407 114854 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:65:b8' with ip 'fd00:204::59b8:d0e5:39e:baa3' 2019-11-04T19:17:55.289 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 18.3% (avg per cpu); cpus: 36, Platform: 12.0% (Base: 11.1, k8s-system: 0.9), k8s-addon: 6.2 2019-11-04T19:17:55.293 controller-1 collectd[12276]: info platform memory usage: Usage: 6.5%; Reserved: 126128.5 MiB, Platform: 8156.4 MiB (Base: 7617.4, k8s-system: 539.0), k8s-addon: 6985.7 2019-11-04T19:17:55.293 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.1%, Anon: 15211.9 MiB, cgroup-rss: 15142.6 MiB, Avail: 110916.7 MiB, Total: 126128.5 MiB 2019-11-04T19:17:55.293 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 7.31%, Anon: 4633.4 MiB, Avail: 58780.6 MiB, Total: 63414.0 MiB 2019-11-04T19:17:55.293 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 16.66%, Anon: 10578.4 MiB, Avail: 52933.2 MiB, Total: 63511.7 MiB 2019-11-04T19:18:03.548 controller-1 registry[109742]: info time="2019-11-04T19:18:03Z" level=warning msg="error authorizing context: authorization token required" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=a50e69cb-d60c-4695-954b-ef26b10b8162 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54451" http.request.uri="/v2/" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:03.548 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:03 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:03.556 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:03Z" level=info msg=getToken go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=388590d4-13b9-4edc-8162-f0a0a3cf53e3 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54288" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fcalico%2Fcni%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:03.956 
controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:03Z" level=info msg="authenticated client" acctSubject=admin go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=388590d4-13b9-4edc-8162-f0a0a3cf53e3 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54288" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fcalico%2Fcni%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:03.958 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:03Z" level=info msg="authorized client" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository quay.io/calico/cni} push} {{repository quay.io/calico/cni} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=388590d4-13b9-4edc-8162-f0a0a3cf53e3 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54288" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fcalico%2Fcni%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository quay.io/calico/cni} push} {{repository quay.io/calico/cni} pull}] 2019-11-04T19:18:03.959 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:03Z" level=info msg="get token complete" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository quay.io/calico/cni} push} {{repository quay.io/calico/cni} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=388590d4-13b9-4edc-8162-f0a0a3cf53e3 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54288" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fcalico%2Fcni%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/json" http.response.duration=404.622283ms http.response.status=200 http.response.written=1329 instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository quay.io/calico/cni} push} {{repository quay.io/calico/cni} pull}] 2019-11-04T19:18:03.971 controller-1 registry[109742]: info time="2019-11-04T19:18:03Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=25993d27-fce2-486f-97f4-45df0254ed79 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54457" http.request.uri="/v2/quay.io/calico/cni/blobs/sha256:c87736221ed0bcaa60b8e92a19bec2284899ef89226f2a07968677cf59e637a4" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=6.73167ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:03.971 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:03 +0000] "HEAD 
/v2/quay.io/calico/cni/blobs/sha256:c87736221ed0bcaa60b8e92a19bec2284899ef89226f2a07968677cf59e637a4 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:03.971 controller-1 registry[109742]: info time="2019-11-04T19:18:03Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=cf0882f8-bc56-40b5-9401-addb28e9d5cc http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54463" http.request.uri="/v2/quay.io/calico/cni/blobs/sha256:b1aab78761e07251ecb79b18144b2b28ec04d989aeaecaff8dff7838e31de5d9" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=7.014061ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:03.971 controller-1 registry[109742]: info time="2019-11-04T19:18:03Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=cf1feb93-e550-4060-b1e9-f53f3f002f07 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54461" http.request.uri="/v2/quay.io/calico/cni/blobs/sha256:c22d96b96dad622f1ff186f0b51895be958dd7b0fffc5d6bfac69a3968d85d26" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=7.136318ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:03.971 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:03 +0000] "HEAD /v2/quay.io/calico/cni/blobs/sha256:b1aab78761e07251ecb79b18144b2b28ec04d989aeaecaff8dff7838e31de5d9 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:03.971 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:03 +0000] "HEAD /v2/quay.io/calico/cni/blobs/sha256:c22d96b96dad622f1ff186f0b51895be958dd7b0fffc5d6bfac69a3968d85d26 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:03.971 controller-1 registry[109742]: info time="2019-11-04T19:18:03Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=8c03bc08-665a-4691-832b-5905a68078d4 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54459" http.request.uri="/v2/quay.io/calico/cni/blobs/sha256:b0d4f571272d2066553342c21691f6379e7811b4d14df83e3519250ea26a7e66" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=7.015015ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:03.971 controller-1 
registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:03 +0000] "HEAD /v2/quay.io/calico/cni/blobs/sha256:b0d4f571272d2066553342c21691f6379e7811b4d14df83e3519250ea26a7e66 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:03.978 controller-1 registry[109742]: info time="2019-11-04T19:18:03Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=c71078aa-3666-4668-83b0-4d403f4f70e7 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54467" http.request.uri="/v2/quay.io/calico/cni/blobs/sha256:14f1e7286a2dc5b6a93b14b9296c4031b789b71ef7902316a37d000e67263135" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=1.977376ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:03.978 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:03 +0000] "HEAD /v2/quay.io/calico/cni/blobs/sha256:14f1e7286a2dc5b6a93b14b9296c4031b789b71ef7902316a37d000e67263135 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:03.992 controller-1 registry[109742]: info time="2019-11-04T19:18:03Z" level=info msg="response completed" go.version=go1.11.2 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host="registry.local:9001" http.request.id=d562acaa-26d7-4a7e-9581-19cbe96082c8 http.request.method=PUT http.request.remoteaddr="[fd00:204::2]:54469" http.request.uri="/v2/quay.io/calico/cni/manifests/v3.6.2" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.duration=9.387982ms http.response.status=201 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:03.992 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:03 +0000] "PUT /v2/quay.io/calico/cni/manifests/v3.6.2 HTTP/1.1" 201 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:04.532 controller-1 registry[109742]: info time="2019-11-04T19:18:04Z" level=warning msg="error authorizing context: authorization token required" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=f5827a49-ddf1-43c8-94c0-5681872f3508 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54483" http.request.uri="/v2/" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:04.532 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:04 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 
UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:04.538 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:04Z" level=info msg=getToken go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=e02ff71c-db63-4527-b5e5-1dacdf22898f http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54320" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fcalico%2Fnode%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:04.902 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:04Z" level=info msg="authenticated client" acctSubject=admin go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=e02ff71c-db63-4527-b5e5-1dacdf22898f http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54320" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fcalico%2Fnode%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:04.904 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:04Z" level=info msg="authorized client" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository quay.io/calico/node} push} {{repository quay.io/calico/node} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=e02ff71c-db63-4527-b5e5-1dacdf22898f http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54320" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fcalico%2Fnode%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository quay.io/calico/node} push} {{repository quay.io/calico/node} pull}] 2019-11-04T19:18:04.904 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:04Z" level=info msg="get token complete" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository quay.io/calico/node} push} {{repository quay.io/calico/node} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=e02ff71c-db63-4527-b5e5-1dacdf22898f http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54320" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fcalico%2Fnode%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/json" http.response.duration=366.193649ms http.response.status=200 http.response.written=1330 instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository quay.io/calico/node} push} {{repository quay.io/calico/node} pull}] 2019-11-04T19:18:04.916 controller-1 registry[109742]: info time="2019-11-04T19:18:04Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" 
http.request.id=954b9429-a2e0-4e98-a6db-742a690ac8a8 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54491" http.request.uri="/v2/quay.io/calico/node/blobs/sha256:c1c6d881f90f7be13414c89ccb6ed596aa88b5b995b75e6fa2596dbb988e79b4" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=5.825534ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:04.916 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:04 +0000] "HEAD /v2/quay.io/calico/node/blobs/sha256:c1c6d881f90f7be13414c89ccb6ed596aa88b5b995b75e6fa2596dbb988e79b4 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:04.916 controller-1 registry[109742]: info time="2019-11-04T19:18:04Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=ec3e4ea1-754b-49c8-a2f6-d6bbad2b834d http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54497" http.request.uri="/v2/quay.io/calico/node/blobs/sha256:7073dad5fdac81a27bfd444ae086efc64875cead873d97c8759ec10fe3a92f63" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=6.057107ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:04.916 controller-1 registry[109742]: info time="2019-11-04T19:18:04Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=a8078d3c-d295-4c4b-b307-59f9e1151529 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54493" http.request.uri="/v2/quay.io/calico/node/blobs/sha256:7db1cf455b1e1ce162062e5d03fb7f11ac03b934e17ad209ce1fc5e7bd523233" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=5.869131ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:04.916 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:04 +0000] "HEAD /v2/quay.io/calico/node/blobs/sha256:7073dad5fdac81a27bfd444ae086efc64875cead873d97c8759ec10fe3a92f63 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:04.916 controller-1 registry[109742]: info time="2019-11-04T19:18:04Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=09c0b018-ee7e-4ce0-9f0b-0a3995f4d48d http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54489" http.request.uri="/v2/quay.io/calico/node/blobs/sha256:27ae94e903818d842e5ce09600564ecabee0baa9d9e8379ff052948a650e3022" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 
kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=6.178435ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:04.916 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:04 +0000] "HEAD /v2/quay.io/calico/node/blobs/sha256:7db1cf455b1e1ce162062e5d03fb7f11ac03b934e17ad209ce1fc5e7bd523233 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:04.916 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:04 +0000] "HEAD /v2/quay.io/calico/node/blobs/sha256:27ae94e903818d842e5ce09600564ecabee0baa9d9e8379ff052948a650e3022 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:04.916 controller-1 registry[109742]: info time="2019-11-04T19:18:04Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=a2f4a5df-e735-400f-bbb6-4fc16c154db3 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54495" http.request.uri="/v2/quay.io/calico/node/blobs/sha256:a2882f30bc7c52dea52a7b8b823a35fef8f3ac04f361473e6293d75d3d8e89be" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=5.776224ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:04.916 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:04 +0000] "HEAD /v2/quay.io/calico/node/blobs/sha256:a2882f30bc7c52dea52a7b8b823a35fef8f3ac04f361473e6293d75d3d8e89be HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:04.922 controller-1 registry[109742]: info time="2019-11-04T19:18:04Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=fe39c7a4-1493-4087-94bc-c32c0d8630dd http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54499" http.request.uri="/v2/quay.io/calico/node/blobs/sha256:c87736221ed0bcaa60b8e92a19bec2284899ef89226f2a07968677cf59e637a4" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=1.85353ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:04.922 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:04 +0000] "HEAD /v2/quay.io/calico/node/blobs/sha256:c87736221ed0bcaa60b8e92a19bec2284899ef89226f2a07968677cf59e637a4 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:04.929 
controller-1 registry[109742]: info time="2019-11-04T19:18:04Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=53810ceb-aa11-4124-8974-4eb6e73f07de http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54501" http.request.uri="/v2/quay.io/calico/node/blobs/sha256:707815f0ee0a716991399d917838dbcfbba2ac9bfba3cfd0d7652fe0ba3a04e6" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=1.690781ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:04.929 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:04 +0000] "HEAD /v2/quay.io/calico/node/blobs/sha256:707815f0ee0a716991399d917838dbcfbba2ac9bfba3cfd0d7652fe0ba3a04e6 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:04.939 controller-1 registry[109742]: info time="2019-11-04T19:18:04Z" level=info msg="response completed" go.version=go1.11.2 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host="registry.local:9001" http.request.id=a21ae01c-2fd5-428d-b2f5-947765beb93c http.request.method=PUT http.request.remoteaddr="[fd00:204::2]:54503" http.request.uri="/v2/quay.io/calico/node/manifests/v3.6.2" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.duration=6.011248ms http.response.status=201 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:04.939 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:04 +0000] "PUT /v2/quay.io/calico/node/manifests/v3.6.2 HTTP/1.1" 201 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:05.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 11.6% (avg per cpu); cpus: 36, Platform: 5.9% (Base: 4.9, k8s-system: 1.0), k8s-addon: 5.5 2019-11-04T19:18:05.284 controller-1 collectd[12276]: info alarm notifier reading: 11.56 % usage - Platform CPU 2019-11-04T19:18:05.289 controller-1 collectd[12276]: info platform memory usage: Usage: 6.5%; Reserved: 126122.9 MiB, Platform: 8197.6 MiB (Base: 7643.6, k8s-system: 554.0), k8s-addon: 6984.7 2019-11-04T19:18:05.289 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.1%, Anon: 15254.5 MiB, cgroup-rss: 15186.2 MiB, Avail: 110868.4 MiB, Total: 126122.9 MiB 2019-11-04T19:18:05.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 7.33%, Anon: 4651.0 MiB, Avail: 58759.4 MiB, Total: 63410.4 MiB 2019-11-04T19:18:05.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 16.70%, Anon: 10603.5 MiB, Avail: 52908.6 MiB, Total: 63512.1 MiB 2019-11-04T19:18:05.472 controller-1 registry[109742]: info time="2019-11-04T19:18:05Z" level=warning msg="error authorizing context: authorization token required" go.version=go1.11.2 http.request.host="registry.local:9001" 
http.request.id=d11bd4d3-52f9-43f0-93dd-2818bcc91a63 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54515" http.request.uri="/v2/" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:05.472 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:05 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:05.478 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:05Z" level=info msg=getToken go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=faf69780-5055-4052-8220-0130aebd9b4b http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54352" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fcalico%2Fkube-controllers%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:05.832 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:05Z" level=info msg="authenticated client" acctSubject=admin go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=faf69780-5055-4052-8220-0130aebd9b4b http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54352" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fcalico%2Fkube-controllers%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:05.835 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:05Z" level=info msg="authorized client" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository quay.io/calico/kube-controllers} push} {{repository quay.io/calico/kube-controllers} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=faf69780-5055-4052-8220-0130aebd9b4b http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54352" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fcalico%2Fkube-controllers%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository quay.io/calico/kube-controllers} push} {{repository quay.io/calico/kube-controllers} pull}] 2019-11-04T19:18:05.835 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:05Z" level=info msg="get token complete" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository quay.io/calico/kube-controllers} push} {{repository quay.io/calico/kube-controllers} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=faf69780-5055-4052-8220-0130aebd9b4b http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54352" 
http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fcalico%2Fkube-controllers%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/json" http.response.duration=356.796393ms http.response.status=200 http.response.written=1346 instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository quay.io/calico/kube-controllers} push} {{repository quay.io/calico/kube-controllers} pull}] 2019-11-04T19:18:05.843 controller-1 registry[109742]: info time="2019-11-04T19:18:05Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=285173c0-f6c0-4421-b49c-65bbc0f75845 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54527" http.request.uri="/v2/quay.io/calico/kube-controllers/blobs/sha256:c87736221ed0bcaa60b8e92a19bec2284899ef89226f2a07968677cf59e637a4" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=2.658138ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:05.843 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:05 +0000] "HEAD /v2/quay.io/calico/kube-controllers/blobs/sha256:c87736221ed0bcaa60b8e92a19bec2284899ef89226f2a07968677cf59e637a4 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:05.843 controller-1 registry[109742]: info time="2019-11-04T19:18:05Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=09f3d763-41ad-49cb-94de-316f087c10bf http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54531" http.request.uri="/v2/quay.io/calico/kube-controllers/blobs/sha256:ed21051cc7aa3b5716bda017063830730de0f0b18d4dd4053b5942b918b6e192" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=3.096688ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:05.843 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:05 +0000] "HEAD /v2/quay.io/calico/kube-controllers/blobs/sha256:ed21051cc7aa3b5716bda017063830730de0f0b18d4dd4053b5942b918b6e192 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:05.843 controller-1 registry[109742]: info time="2019-11-04T19:18:05Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=9ca45568-bbae-465a-a692-c2946aa83ebf http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54529" http.request.uri="/v2/quay.io/calico/kube-controllers/blobs/sha256:1b645f9cf1ea5219508892b73cd5bff40b401a18c358b30a9c60bf841c7b04d5" 
http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=2.993793ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:05.843 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:05 +0000] "HEAD /v2/quay.io/calico/kube-controllers/blobs/sha256:1b645f9cf1ea5219508892b73cd5bff40b401a18c358b30a9c60bf841c7b04d5 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:05.850 controller-1 registry[109742]: info time="2019-11-04T19:18:05Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=e517226d-d501-46e3-abb8-92e78e1ed521 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54533" http.request.uri="/v2/quay.io/calico/kube-controllers/blobs/sha256:c4b7095e1af25b56a5e652d63184ea9910e6397a49a0bcf50bada831a9baf52e" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=1.675423ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:05.850 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:05 +0000] "HEAD /v2/quay.io/calico/kube-controllers/blobs/sha256:c4b7095e1af25b56a5e652d63184ea9910e6397a49a0bcf50bada831a9baf52e HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:05.861 controller-1 registry[109742]: info time="2019-11-04T19:18:05Z" level=info msg="response completed" go.version=go1.11.2 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host="registry.local:9001" http.request.id=864091d3-e8fd-43d5-8441-190b0677da42 http.request.method=PUT http.request.remoteaddr="[fd00:204::2]:54535" http.request.uri="/v2/quay.io/calico/kube-controllers/manifests/v3.6.2" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.duration=6.525838ms http.response.status=201 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:05.861 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:05 +0000] "PUT /v2/quay.io/calico/kube-controllers/manifests/v3.6.2 HTTP/1.1" 201 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:07.240 controller-1 registry[109742]: info time="2019-11-04T19:18:07Z" level=warning msg="error authorizing context: authorization token required" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=565bdced-4567-4051-ac8b-ea6f59e01cea http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54643" http.request.uri="/v2/" 
http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:07.240 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:07 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:07.246 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:07Z" level=info msg=getToken go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=4241a007-e815-4cb8-8d94-2040029f8389 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54480" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fmultus%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:07.625 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:07Z" level=info msg="authenticated client" acctSubject=admin go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=4241a007-e815-4cb8-8d94-2040029f8389 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54480" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fmultus%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:07.627 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:07Z" level=info msg="authorized client" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository docker.io/starlingx/multus} push} {{repository docker.io/starlingx/multus} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=4241a007-e815-4cb8-8d94-2040029f8389 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54480" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fmultus%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository docker.io/starlingx/multus} push} {{repository docker.io/starlingx/multus} pull}] 2019-11-04T19:18:07.628 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:07Z" level=info msg="get token complete" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository docker.io/starlingx/multus} push} {{repository docker.io/starlingx/multus} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=4241a007-e815-4cb8-8d94-2040029f8389 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54480" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fmultus%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 
kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/json" http.response.duration=381.701458ms http.response.status=200 http.response.written=1339 instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository docker.io/starlingx/multus} push} {{repository docker.io/starlingx/multus} pull}] 2019-11-04T19:18:07.636 controller-1 registry[109742]: info time="2019-11-04T19:18:07Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=d3e846c2-ca68-4676-aa0e-530280555614 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54653" http.request.uri="/v2/docker.io/starlingx/multus/blobs/sha256:5f7b3b73e295a029eb600b852ebdaa7def447fa4ce6b0f3213a7883d3d0ceb01" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=3.523219ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:07.636 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:07 +0000] "HEAD /v2/docker.io/starlingx/multus/blobs/sha256:5f7b3b73e295a029eb600b852ebdaa7def447fa4ce6b0f3213a7883d3d0ceb01 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:07.637 controller-1 registry[109742]: info time="2019-11-04T19:18:07Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=ec6a57b2-3a49-4c0d-b28d-7d939ad7c490 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54655" http.request.uri="/v2/docker.io/starlingx/multus/blobs/sha256:b927cf00e63cd02c1791ed81770a616264a260a564f956d78e128f8bb83429c6" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=3.537907ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:07.637 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:07 +0000] "HEAD /v2/docker.io/starlingx/multus/blobs/sha256:b927cf00e63cd02c1791ed81770a616264a260a564f956d78e128f8bb83429c6 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:07.637 controller-1 registry[109742]: info time="2019-11-04T19:18:07Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=d0e2d295-3833-4bc2-857d-04efe50f9b41 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54651" http.request.uri="/v2/docker.io/starlingx/multus/blobs/sha256:d8d02d45731499028db01b6fa35475f91d230628b4e25fab8e3c015594dc3261" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" 
http.response.duration=3.853294ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:07.637 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:07 +0000] "HEAD /v2/docker.io/starlingx/multus/blobs/sha256:d8d02d45731499028db01b6fa35475f91d230628b4e25fab8e3c015594dc3261 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:07.644 controller-1 registry[109742]: info time="2019-11-04T19:18:07Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=48ce5913-1a14-49d6-b1d0-34cd95d1d378 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54657" http.request.uri="/v2/docker.io/starlingx/multus/blobs/sha256:4c3da64245c6e3b4ec855b727fdc5ad3a40df7b0957cd2605d7e30b7830203c8" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=2.509111ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:07.644 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:07 +0000] "HEAD /v2/docker.io/starlingx/multus/blobs/sha256:4c3da64245c6e3b4ec855b727fdc5ad3a40df7b0957cd2605d7e30b7830203c8 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:07.654 controller-1 registry[109742]: info time="2019-11-04T19:18:07Z" level=info msg="response completed" go.version=go1.11.2 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host="registry.local:9001" http.request.id=abe51114-a281-4294-9d8d-de0875fda9c8 http.request.method=PUT http.request.remoteaddr="[fd00:204::2]:54659" http.request.uri="/v2/docker.io/starlingx/multus/manifests/v3.2.16" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.duration=6.179855ms http.response.status=201 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:07.654 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:07 +0000] "PUT /v2/docker.io/starlingx/multus/manifests/v3.2.16 HTTP/1.1" 201 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:08.205 controller-1 registry[109742]: info time="2019-11-04T19:18:08Z" level=warning msg="error authorizing context: authorization token required" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=623cd23e-fad9-4bdd-8123-e42ea935bdfe http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54677" http.request.uri="/v2/" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 
2019-11-04T19:18:08.205 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:08 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:08.210 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:08Z" level=info msg=getToken go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=c07bd425-c310-4b59-a41b-7081b028e069 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54514" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fk8s-cni-sriov%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:08.568 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:08Z" level=info msg="authenticated client" acctSubject=admin go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=c07bd425-c310-4b59-a41b-7081b028e069 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54514" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fk8s-cni-sriov%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:08.571 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:08Z" level=info msg="authorized client" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository docker.io/starlingx/k8s-cni-sriov} push} {{repository docker.io/starlingx/k8s-cni-sriov} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=c07bd425-c310-4b59-a41b-7081b028e069 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54514" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fk8s-cni-sriov%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository docker.io/starlingx/k8s-cni-sriov} push} {{repository docker.io/starlingx/k8s-cni-sriov} pull}] 2019-11-04T19:18:08.571 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:08Z" level=info msg="get token complete" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository docker.io/starlingx/k8s-cni-sriov} push} {{repository docker.io/starlingx/k8s-cni-sriov} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=c07bd425-c310-4b59-a41b-7081b028e069 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54514" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fk8s-cni-sriov%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/json" http.response.duration=360.367724ms 
http.response.status=200 http.response.written=1349 instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository docker.io/starlingx/k8s-cni-sriov} push} {{repository docker.io/starlingx/k8s-cni-sriov} pull}] 2019-11-04T19:18:08.579 controller-1 registry[109742]: info time="2019-11-04T19:18:08Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=e5dcce25-165f-4ce5-b02c-f3cc1090f42a http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54685" http.request.uri="/v2/docker.io/starlingx/k8s-cni-sriov/blobs/sha256:e7c96db7181be991f19a9fb6975cdbbd73c65f4a2681348e63a141a2192a5f10" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=2.936009ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:08.579 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:08 +0000] "HEAD /v2/docker.io/starlingx/k8s-cni-sriov/blobs/sha256:e7c96db7181be991f19a9fb6975cdbbd73c65f4a2681348e63a141a2192a5f10 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:08.580 controller-1 registry[109742]: info time="2019-11-04T19:18:08Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=08f29564-7e0a-4b62-ae61-fe329aefb639 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54687" http.request.uri="/v2/docker.io/starlingx/k8s-cni-sriov/blobs/sha256:3878d573d8dd62d7addab8321f166fc22395978f62485e13fd3c0a18809f9083" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=3.997867ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:08.580 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:08 +0000] "HEAD /v2/docker.io/starlingx/k8s-cni-sriov/blobs/sha256:3878d573d8dd62d7addab8321f166fc22395978f62485e13fd3c0a18809f9083 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:08.580 controller-1 registry[109742]: info time="2019-11-04T19:18:08Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=5aa29127-8093-4a94-a5ab-37747d5a0155 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54689" http.request.uri="/v2/docker.io/starlingx/k8s-cni-sriov/blobs/sha256:749c16c6c394bc33d22f0af69e524824e99c8f8b8afa3e2887d35b70c9dbb788" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=4.393016ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 
2019-11-04T19:18:08.580 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:08 +0000] "HEAD /v2/docker.io/starlingx/k8s-cni-sriov/blobs/sha256:749c16c6c394bc33d22f0af69e524824e99c8f8b8afa3e2887d35b70c9dbb788 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:08.587 controller-1 registry[109742]: info time="2019-11-04T19:18:08Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=62feb667-4549-45c8-8b79-ded531cd6fb0 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54691" http.request.uri="/v2/docker.io/starlingx/k8s-cni-sriov/blobs/sha256:4523d5d3f6b3a40082eb778230ddecdaac004ee4ca532e654f39e9da397e6b1b" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=1.885964ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:08.587 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:08 +0000] "HEAD /v2/docker.io/starlingx/k8s-cni-sriov/blobs/sha256:4523d5d3f6b3a40082eb778230ddecdaac004ee4ca532e654f39e9da397e6b1b HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:08.598 controller-1 registry[109742]: info time="2019-11-04T19:18:08Z" level=info msg="response completed" go.version=go1.11.2 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host="registry.local:9001" http.request.id=209a79bf-4251-4418-92c2-04c9ae519c02 http.request.method=PUT http.request.remoteaddr="[fd00:204::2]:54693" http.request.uri="/v2/docker.io/starlingx/k8s-cni-sriov/manifests/master-centos-stable-latest" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.duration=6.30426ms http.response.status=201 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:08.598 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:08 +0000] "PUT /v2/docker.io/starlingx/k8s-cni-sriov/manifests/master-centos-stable-latest HTTP/1.1" 201 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:09.147 controller-1 registry[109742]: info time="2019-11-04T19:18:09Z" level=warning msg="error authorizing context: authorization token required" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=99b18b73-3fa7-4057-bd94-3746d915d65c http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54707" http.request.uri="/v2/" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:09.147 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:09 +0000] 
"GET /v2/ HTTP/1.1" 401 87 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:09.153 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:09Z" level=info msg=getToken go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=afa0f870-bd0d-4d1a-986e-2cea9f7054f6 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54544" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fk8s-plugins-sriov-network-device%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:09.521 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:09Z" level=info msg="authenticated client" acctSubject=admin go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=afa0f870-bd0d-4d1a-986e-2cea9f7054f6 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54544" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fk8s-plugins-sriov-network-device%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:09.524 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:09Z" level=info msg="authorized client" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository docker.io/starlingx/k8s-plugins-sriov-network-device} push} {{repository docker.io/starlingx/k8s-plugins-sriov-network-device} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=afa0f870-bd0d-4d1a-986e-2cea9f7054f6 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54544" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fk8s-plugins-sriov-network-device%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository docker.io/starlingx/k8s-plugins-sriov-network-device} push} {{repository docker.io/starlingx/k8s-plugins-sriov-network-device} pull}] 2019-11-04T19:18:09.524 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:09Z" level=info msg="get token complete" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository docker.io/starlingx/k8s-plugins-sriov-network-device} push} {{repository docker.io/starlingx/k8s-plugins-sriov-network-device} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=afa0f870-bd0d-4d1a-986e-2cea9f7054f6 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54544" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fk8s-plugins-sriov-network-device%3Apush%2Cpull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 
http.response.contenttype="application/json" http.response.duration=371.24436ms http.response.status=200 http.response.written=1374 instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository docker.io/starlingx/k8s-plugins-sriov-network-device} push} {{repository docker.io/starlingx/k8s-plugins-sriov-network-device} pull}] 2019-11-04T19:18:09.533 controller-1 registry[109742]: info time="2019-11-04T19:18:09Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=8dcb7d5d-575c-4a76-9a89-8adc701dd496 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54723" http.request.uri="/v2/docker.io/starlingx/k8s-plugins-sriov-network-device/blobs/sha256:2b23cb105e9c09a8c2d8ab0b48e7c00944fac17c6fbf81732bb852d7191f725e" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=2.745841ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:09.533 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:09 +0000] "HEAD /v2/docker.io/starlingx/k8s-plugins-sriov-network-device/blobs/sha256:2b23cb105e9c09a8c2d8ab0b48e7c00944fac17c6fbf81732bb852d7191f725e HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:09.533 controller-1 registry[109742]: info time="2019-11-04T19:18:09Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=52b28ebc-73c7-4f37-92e1-8a18161be816 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54721" http.request.uri="/v2/docker.io/starlingx/k8s-plugins-sriov-network-device/blobs/sha256:1e3770ef2907b864fe9a56c02ffdb7af02b43d402b54d76db1e4abb571cb9b55" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=3.682606ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:09.533 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:09 +0000] "HEAD /v2/docker.io/starlingx/k8s-plugins-sriov-network-device/blobs/sha256:1e3770ef2907b864fe9a56c02ffdb7af02b43d402b54d76db1e4abb571cb9b55 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:09.533 controller-1 registry[109742]: info time="2019-11-04T19:18:09Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=ea80ca81-ff35-437e-be9c-0af0c5553b9b http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54727" http.request.uri="/v2/docker.io/starlingx/k8s-plugins-sriov-network-device/blobs/sha256:f193525f34bfe0f8163e483ad082edf01dd325d6f7b5aae5c47c1de91a5c2875" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 
http.response.contenttype="application/octet-stream" http.response.duration=3.879034ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:09.533 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:09 +0000] "HEAD /v2/docker.io/starlingx/k8s-plugins-sriov-network-device/blobs/sha256:f193525f34bfe0f8163e483ad082edf01dd325d6f7b5aae5c47c1de91a5c2875 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:09.534 controller-1 registry[109742]: info time="2019-11-04T19:18:09Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=0482dfe2-d882-48da-86b2-a106b1e7933b http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54725" http.request.uri="/v2/docker.io/starlingx/k8s-plugins-sriov-network-device/blobs/sha256:e7c96db7181be991f19a9fb6975cdbbd73c65f4a2681348e63a141a2192a5f10" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=1.476934ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:09.534 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:09 +0000] "HEAD /v2/docker.io/starlingx/k8s-plugins-sriov-network-device/blobs/sha256:e7c96db7181be991f19a9fb6975cdbbd73c65f4a2681348e63a141a2192a5f10 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:09.541 controller-1 registry[109742]: info time="2019-11-04T19:18:09Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=fb4c0c20-2f9e-4db8-8b50-a9c659f00610 http.request.method=HEAD http.request.remoteaddr="[fd00:204::2]:54729" http.request.uri="/v2/docker.io/starlingx/k8s-plugins-sriov-network-device/blobs/sha256:6b0626b922ca30efa582c104d49a0654de86ac7ec621481f0b672ddcd69220a3" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/octet-stream" http.response.duration=2.724408ms http.response.status=200 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:09.541 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:09 +0000] "HEAD /v2/docker.io/starlingx/k8s-plugins-sriov-network-device/blobs/sha256:6b0626b922ca30efa582c104d49a0654de86ac7ec621481f0b672ddcd69220a3 HTTP/1.1" 200 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:09.552 controller-1 registry[109742]: info time="2019-11-04T19:18:09Z" level=info msg="response completed" go.version=go1.11.2 http.request.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.request.host="registry.local:9001" http.request.id=2bbe7331-3b08-41df-ab36-3c9ea0e72f61 http.request.method=PUT 
http.request.remoteaddr="[fd00:204::2]:54731" http.request.uri="/v2/docker.io/starlingx/k8s-plugins-sriov-network-device/manifests/master-centos-stable-latest" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.duration=6.543091ms http.response.status=201 http.response.written=0 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:09.552 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:09 +0000] "PUT /v2/docker.io/starlingx/k8s-plugins-sriov-network-device/manifests/master-centos-stable-latest HTTP/1.1" 201 0 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:11.944 controller-1 containerd[12214]: info time="2019-11-04T19:18:11.944126270Z" level=info msg="shim reaped" id=4e16ea6ea9de22ef5123387f346c6dd0a06757a3cf6699991867154477a01103 2019-11-04T19:18:11.954 controller-1 dockerd[12332]: info time="2019-11-04T19:18:11.954193359Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:18:12.765 controller-1 containerd[12214]: info time="2019-11-04T19:18:12.765388747Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e2279c7394a0e64eaa3a0f326c8c6754c5b580f56f25e0abeb5576ac50859e88/shim.sock" debug=false pid=118004 2019-11-04T19:18:14.526 controller-1 kubelet[88595]: info W1104 19:18:14.526468 88595 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease 2019-11-04T19:18:14.529 controller-1 kubelet[88595]: info W1104 19:18:14.529115 88595 conversion.go:110] Could not get instant cpu stats: cumulative stats decrease 2019-11-04T19:18:15.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 12.8% (avg per cpu); cpus: 36, Platform: 6.8% (Base: 6.0, k8s-system: 0.7), k8s-addon: 5.8 2019-11-04T19:18:15.289 controller-1 collectd[12276]: info platform memory usage: Usage: 6.6%; Reserved: 126111.9 MiB, Platform: 8275.0 MiB (Base: 7719.4, k8s-system: 555.7), k8s-addon: 6995.1 2019-11-04T19:18:15.289 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.2%, Anon: 15346.2 MiB, cgroup-rss: 15274.3 MiB, Avail: 110765.6 MiB, Total: 126111.9 MiB 2019-11-04T19:18:15.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 7.39%, Anon: 4687.7 MiB, Avail: 58723.4 MiB, Total: 63411.1 MiB 2019-11-04T19:18:15.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 16.78%, Anon: 10658.5 MiB, Avail: 52850.3 MiB, Total: 63508.8 MiB 2019-11-04T19:18:19.653 controller-1 registry[109742]: info time="2019-11-04T19:18:19Z" level=warning msg="error authorizing context: authorization token required" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=144f953b-4187-49bc-8f2c-1ad4fdb71646 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:55037" http.request.uri="/v2/" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:19.653 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:19 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/18.09.6 go/go1.10.8 
git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:19.658 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:19Z" level=info msg=getToken go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=972801db-37b1-4b94-8e7a-d33a241bd6a9 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54874" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fexternal_storage%2Frbd-provisioner%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:19.672 controller-1 registry[109742]: info time="2019-11-04T19:18:19Z" level=warning msg="error authorizing context: authorization token required" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=b5f81795-7ac1-4c3e-a6b6-1e769de32a76 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:55043" http.request.uri="/v2/" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:19.672 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:19 +0000] "GET /v2/ HTTP/1.1" 401 87 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:19.677 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:19Z" level=info msg=getToken go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=22dcf747-fe93-4d80-a8ec-a0c4cf22ec61 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54880" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fceph-config-helper%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:20.032 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:20Z" level=info msg="authenticated client" acctSubject=admin go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=972801db-37b1-4b94-8e7a-d33a241bd6a9 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54874" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fexternal_storage%2Frbd-provisioner%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:20.036 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:20Z" level=info msg="authorized client" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository quay.io/external_storage/rbd-provisioner} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=972801db-37b1-4b94-8e7a-d33a241bd6a9 http.request.method=GET 
http.request.remoteaddr="[fd00:204::2]:54874" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fexternal_storage%2Frbd-provisioner%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository quay.io/external_storage/rbd-provisioner} pull}] 2019-11-04T19:18:20.036 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:20Z" level=info msg="get token complete" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository quay.io/external_storage/rbd-provisioner} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=972801db-37b1-4b94-8e7a-d33a241bd6a9 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54874" http.request.uri="/token/?account=admin&scope=repository%3Aquay.io%2Fexternal_storage%2Frbd-provisioner%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/json" http.response.duration=377.858669ms http.response.status=200 http.response.written=1349 instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository quay.io/external_storage/rbd-provisioner} pull}] 2019-11-04T19:18:20.044 controller-1 registry[109742]: info time="2019-11-04T19:18:20Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=696a0285-044c-46c0-b3b6-69028f89bfa9 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:55051" http.request.uri="/v2/quay.io/external_storage/rbd-provisioner/manifests/v2.1.1-k8s1.11" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.response.duration=3.380174ms http.response.status=200 http.response.written=953 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:20.044 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:20 +0000] "GET /v2/quay.io/external_storage/rbd-provisioner/manifests/v2.1.1-k8s1.11 HTTP/1.1" 200 953 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:20.158 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:20Z" level=info msg="authenticated client" acctSubject=admin go.version=go1.11.2 http.request.host="[fd00:204::2]:9002" http.request.id=22dcf747-fe93-4d80-a8ec-a0c4cf22ec61 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54880" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fceph-config-helper%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d 2019-11-04T19:18:20.161 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:20Z" level=info msg="authorized client" 
acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository docker.io/starlingx/ceph-config-helper} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=22dcf747-fe93-4d80-a8ec-a0c4cf22ec61 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54880" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fceph-config-helper%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository docker.io/starlingx/ceph-config-helper} pull}] 2019-11-04T19:18:20.161 controller-1 registry-token-server[108926]: info time="2019-11-04T19:18:20Z" level=info msg="get token complete" acctSubject=admin go.version=go1.11.2 grantedAccess=[{{repository docker.io/starlingx/ceph-config-helper} pull}] http.request.host="[fd00:204::2]:9002" http.request.id=22dcf747-fe93-4d80-a8ec-a0c4cf22ec61 http.request.method=GET http.request.remoteaddr="[fd00:204::2]:54880" http.request.uri="/token/?account=admin&scope=repository%3Adocker.io%2Fstarlingx%2Fceph-config-helper%3Apull&service=%5Bfd00%3A204%3A%3A2%5D%3A9001" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/json" http.response.duration=483.349877ms http.response.status=200 http.response.written=1346 instance.id=7e010d60-d9a2-4677-8233-1eeaad1cdd2d requestedAccess=[{{repository docker.io/starlingx/ceph-config-helper} pull}] 2019-11-04T19:18:20.169 controller-1 registry[109742]: info time="2019-11-04T19:18:20Z" level=info msg="response completed" go.version=go1.11.2 http.request.host="registry.local:9001" http.request.id=f318707b-3a6a-4f82-a1d9-023c2febac2b http.request.method=GET http.request.remoteaddr="[fd00:204::2]:55053" http.request.uri="/v2/docker.io/starlingx/ceph-config-helper/manifests/v1.15.0" http.request.useragent="docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" http.response.contenttype="application/vnd.docker.distribution.manifest.v2+json" http.response.duration=3.143486ms http.response.status=200 http.response.written=1571 instance.id=966ccd3b-ed50-4d9d-b04d-ec49a1195ac5 version="v2.6.2+unknown" 2019-11-04T19:18:20.169 controller-1 registry[109742]: info fd00:204::2 - - [04/Nov/2019:19:18:20 +0000] "GET /v2/docker.io/starlingx/ceph-config-helper/manifests/v1.15.0 HTTP/1.1" 200 1571 "" "docker/18.09.6 go/go1.10.8 git-commit/481bc77 kernel/3.10.0-957.21.3.el7.2.tis.x86_64 os/linux arch/amd64 UpstreamClient(Docker-Client/18.09.6 \\(linux\\))" 2019-11-04T19:18:20.233 controller-1 containerd[12214]: info time="2019-11-04T19:18:20.233613696Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/59453fc5e23be21458232b3d07d8364dab7e833d02de596aa83a54d526694422/shim.sock" debug=false pid=119946 2019-11-04T19:18:21.000 controller-1 dnsmasq[111976]: warning nameserver fd00:207::a refused to do a recursive query 2019-11-04T19:18:25.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 12.6% (avg per cpu); cpus: 36, Platform: 7.5% (Base: 6.5, k8s-system: 1.0), k8s-addon: 5.0 2019-11-04T19:18:25.289 controller-1 collectd[12276]: info platform memory usage: 
Usage: 6.8%; Reserved: 126082.0 MiB, Platform: 8553.2 MiB (Base: 7980.7, k8s-system: 572.5), k8s-addon: 7002.0 2019-11-04T19:18:25.289 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.4%, Anon: 15635.0 MiB, cgroup-rss: 15558.6 MiB, Avail: 110447.0 MiB, Total: 126082.0 MiB 2019-11-04T19:18:25.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 7.75%, Anon: 4910.8 MiB, Avail: 58490.0 MiB, Total: 63400.9 MiB 2019-11-04T19:18:25.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 16.89%, Anon: 10724.2 MiB, Avail: 52779.5 MiB, Total: 63503.7 MiB 2019-11-04T19:18:35.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 11.7% (avg per cpu); cpus: 36, Platform: 5.9% (Base: 5.1, k8s-system: 0.9), k8s-addon: 5.6 2019-11-04T19:18:35.288 controller-1 collectd[12276]: info platform memory usage: Usage: 6.9%; Reserved: 126076.7 MiB, Platform: 8644.3 MiB (Base: 8060.5, k8s-system: 583.8), k8s-addon: 7011.8 2019-11-04T19:18:35.289 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.5%, Anon: 15728.7 MiB, cgroup-rss: 15659.8 MiB, Avail: 110348.0 MiB, Total: 126076.7 MiB 2019-11-04T19:18:35.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 7.84%, Anon: 4969.1 MiB, Avail: 58430.2 MiB, Total: 63399.4 MiB 2019-11-04T19:18:35.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 16.94%, Anon: 10759.6 MiB, Avail: 52741.4 MiB, Total: 63500.9 MiB 2019-11-04T19:18:45.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 10.3% (avg per cpu); cpus: 36, Platform: 7.4% (Base: 6.5, k8s-system: 0.9), k8s-addon: 2.8 2019-11-04T19:18:45.288 controller-1 collectd[12276]: info platform memory usage: Usage: 6.9%; Reserved: 126081.7 MiB, Platform: 8645.9 MiB (Base: 8060.6, k8s-system: 585.4), k8s-addon: 7007.6 2019-11-04T19:18:45.288 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.5%, Anon: 15732.3 MiB, cgroup-rss: 15657.4 MiB, Avail: 110349.4 MiB, Total: 126081.7 MiB 2019-11-04T19:18:45.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 7.84%, Anon: 4968.6 MiB, Avail: 58434.3 MiB, Total: 63402.9 MiB 2019-11-04T19:18:45.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 16.95%, Anon: 10763.7 MiB, Avail: 52737.9 MiB, Total: 63501.6 MiB 2019-11-04T19:18:55.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 12.9% (avg per cpu); cpus: 36, Platform: 10.0% (Base: 9.2, k8s-system: 0.8), k8s-addon: 2.6 2019-11-04T19:18:55.288 controller-1 collectd[12276]: info platform memory usage: Usage: 7.0%; Reserved: 126075.4 MiB, Platform: 8769.8 MiB (Base: 8179.1, k8s-system: 590.7), k8s-addon: 7011.1 2019-11-04T19:18:55.288 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.6%, Anon: 15859.9 MiB, cgroup-rss: 15785.0 MiB, Avail: 110215.5 MiB, Total: 126075.4 MiB 2019-11-04T19:18:55.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 7.91%, Anon: 5012.5 MiB, Avail: 58389.0 MiB, Total: 63401.5 MiB 2019-11-04T19:18:55.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.08%, Anon: 10847.5 MiB, Avail: 52652.2 MiB, Total: 63499.7 MiB 2019-11-04T19:19:05.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 10.3% (avg per cpu); cpus: 36, Platform: 7.7% (Base: 6.6, k8s-system: 1.0), k8s-addon: 2.5 2019-11-04T19:19:05.288 controller-1 collectd[12276]: info platform memory usage: Usage: 6.9%; Reserved: 126073.1 MiB, Platform: 8741.0 MiB (Base: 8148.7, 
k8s-system: 592.3), k8s-addon: 7012.0 2019-11-04T19:19:05.288 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.6%, Anon: 15832.7 MiB, cgroup-rss: 15757.1 MiB, Avail: 110240.4 MiB, Total: 126073.1 MiB 2019-11-04T19:19:05.288 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 7.92%, Anon: 5019.2 MiB, Avail: 58377.6 MiB, Total: 63396.8 MiB 2019-11-04T19:19:05.288 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.03%, Anon: 10813.5 MiB, Avail: 52688.7 MiB, Total: 63502.2 MiB 2019-11-04T19:19:15.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 10.4% (avg per cpu); cpus: 36, Platform: 7.6% (Base: 6.9, k8s-system: 0.7), k8s-addon: 2.6 2019-11-04T19:19:15.288 controller-1 collectd[12276]: info platform memory usage: Usage: 7.0%; Reserved: 126072.7 MiB, Platform: 8791.9 MiB (Base: 8199.2, k8s-system: 592.7), k8s-addon: 7013.9 2019-11-04T19:19:15.288 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.6%, Anon: 15884.9 MiB, cgroup-rss: 15809.9 MiB, Avail: 110187.8 MiB, Total: 126072.7 MiB 2019-11-04T19:19:15.288 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 7.91%, Anon: 5015.0 MiB, Avail: 58384.3 MiB, Total: 63399.3 MiB 2019-11-04T19:19:15.288 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.12%, Anon: 10869.9 MiB, Avail: 52631.8 MiB, Total: 63501.6 MiB 2019-11-04T19:19:25.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 13.1% (avg per cpu); cpus: 36, Platform: 8.8% (Base: 7.8, k8s-system: 1.0), k8s-addon: 4.0 2019-11-04T19:19:25.288 controller-1 collectd[12276]: info platform memory usage: Usage: 7.0%; Reserved: 126062.5 MiB, Platform: 8792.2 MiB (Base: 8197.9, k8s-system: 594.3), k8s-addon: 7028.0 2019-11-04T19:19:25.289 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.6%, Anon: 15899.8 MiB, cgroup-rss: 15824.4 MiB, Avail: 110162.7 MiB, Total: 126062.5 MiB 2019-11-04T19:19:25.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 7.92%, Anon: 5020.7 MiB, Avail: 58375.2 MiB, Total: 63395.9 MiB 2019-11-04T19:19:25.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.13%, Anon: 10879.1 MiB, Avail: 52621.8 MiB, Total: 63500.9 MiB 2019-11-04T19:19:35.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 11.5% (avg per cpu); cpus: 36, Platform: 8.7% (Base: 7.8, k8s-system: 0.8), k8s-addon: 2.5 2019-11-04T19:19:35.288 controller-1 collectd[12276]: info platform memory usage: Usage: 7.0%; Reserved: 126065.5 MiB, Platform: 8866.0 MiB (Base: 8271.5, k8s-system: 594.5), k8s-addon: 7030.1 2019-11-04T19:19:35.288 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.7%, Anon: 15972.9 MiB, cgroup-rss: 15900.2 MiB, Avail: 110092.6 MiB, Total: 126065.5 MiB 2019-11-04T19:19:35.288 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 7.92%, Anon: 5023.7 MiB, Avail: 58371.8 MiB, Total: 63395.5 MiB 2019-11-04T19:19:35.288 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.24%, Anon: 10949.2 MiB, Avail: 52552.4 MiB, Total: 63501.7 MiB 2019-11-04T19:19:45.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 12.2% (avg per cpu); cpus: 36, Platform: 9.9% (Base: 9.0, k8s-system: 0.9), k8s-addon: 2.1 2019-11-04T19:19:45.288 controller-1 collectd[12276]: info platform memory usage: Usage: 7.0%; Reserved: 126061.1 MiB, Platform: 8872.3 MiB (Base: 8277.4, k8s-system: 594.9), k8s-addon: 7040.9 2019-11-04T19:19:45.288 controller-1 
collectd[12276]: info 4K memory usage: Anon: 12.7%, Anon: 15989.1 MiB, cgroup-rss: 15916.5 MiB, Avail: 110072.0 MiB, Total: 126061.1 MiB 2019-11-04T19:19:45.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 7.95%, Anon: 5038.7 MiB, Avail: 58357.5 MiB, Total: 63396.2 MiB 2019-11-04T19:19:45.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.25%, Anon: 10950.8 MiB, Avail: 52546.4 MiB, Total: 63497.2 MiB 2019-11-04T19:19:55.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 10.8% (avg per cpu); cpus: 36, Platform: 8.5% (Base: 7.7, k8s-system: 0.8), k8s-addon: 2.1 2019-11-04T19:19:55.289 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 126063.5 MiB, Platform: 8891.9 MiB (Base: 8295.7, k8s-system: 596.2), k8s-addon: 7036.2 2019-11-04T19:19:55.289 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.7%, Anon: 16008.5 MiB, cgroup-rss: 15932.3 MiB, Avail: 110055.0 MiB, Total: 126063.5 MiB 2019-11-04T19:19:55.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.02%, Anon: 5087.2 MiB, Avail: 58307.5 MiB, Total: 63394.7 MiB 2019-11-04T19:19:55.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.20%, Anon: 10921.3 MiB, Avail: 52577.2 MiB, Total: 63498.5 MiB 2019-11-04T19:20:01.352 controller-1 systemd[1]: info Created slice User Slice of root. 2019-11-04T19:20:01.390 controller-1 systemd[1]: info Started Session 4 of user root. 2019-11-04T19:20:01.423 controller-1 kubelet[88595]: info I1104 19:20:01.423375 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tiller-token-c6p8n" (UniqueName: "kubernetes.io/secret/cda72498-f852-40c4-99af-4c321eec8d7e-tiller-token-c6p8n") pod "tiller-deploy-d6b59fcb-qptps" (UID: "cda72498-f852-40c4-99af-4c321eec8d7e") 2019-11-04T19:20:01.423 controller-1 kubelet[88595]: info I1104 19:20:01.423412 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "metricbeat-config" (UniqueName: "kubernetes.io/secret/c1dacd07-7924-45c2-9b2f-f5bcf5226ed6-metricbeat-config") pod "mon-metricbeat-7948cd594c-pz6pb" (UID: "c1dacd07-7924-45c2-9b2f-f5bcf5226ed6") 2019-11-04T19:20:01.423 controller-1 kubelet[88595]: info I1104 19:20:01.423438 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "modules" (UniqueName: "kubernetes.io/secret/c1dacd07-7924-45c2-9b2f-f5bcf5226ed6-modules") pod "mon-metricbeat-7948cd594c-pz6pb" (UID: "c1dacd07-7924-45c2-9b2f-f5bcf5226ed6") 2019-11-04T19:20:01.423 controller-1 kubelet[88595]: info I1104 19:20:01.423539 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kibana" (UniqueName: "kubernetes.io/configmap/b52283f2-29ea-4bae-aeb6-d7fe28283e5f-kibana") pod "mon-kibana-6cf57cfd5b-7zrrf" (UID: "b52283f2-29ea-4bae-aeb6-d7fe28283e5f") 2019-11-04T19:20:01.423 controller-1 kubelet[88595]: info I1104 19:20:01.423632 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "mon-kube-state-metrics-token-qj6tw" (UniqueName: "kubernetes.io/secret/68fc5aa6-9d1d-4724-88ea-379ab2e66d82-mon-kube-state-metrics-token-qj6tw") pod "mon-kube-state-metrics-59947d74fb-qww9j" (UID: "68fc5aa6-9d1d-4724-88ea-379ab2e66d82") 2019-11-04T19:20:01.423 controller-1 kubelet[88595]: info I1104 19:20:01.423669 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "mon-metricbeat-token-5vdfc" (UniqueName: 
"kubernetes.io/secret/c1dacd07-7924-45c2-9b2f-f5bcf5226ed6-mon-metricbeat-token-5vdfc") pod "mon-metricbeat-7948cd594c-pz6pb" (UID: "c1dacd07-7924-45c2-9b2f-f5bcf5226ed6") 2019-11-04T19:20:01.423 controller-1 kubelet[88595]: info I1104 19:20:01.423694 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "root" (UniqueName: "kubernetes.io/host-path/c1dacd07-7924-45c2-9b2f-f5bcf5226ed6-root") pod "mon-metricbeat-7948cd594c-pz6pb" (UID: "c1dacd07-7924-45c2-9b2f-f5bcf5226ed6") 2019-11-04T19:20:01.423 controller-1 kubelet[88595]: info I1104 19:20:01.423761 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/b52283f2-29ea-4bae-aeb6-d7fe28283e5f-default-token-88gsr") pod "mon-kibana-6cf57cfd5b-7zrrf" (UID: "b52283f2-29ea-4bae-aeb6-d7fe28283e5f") 2019-11-04T19:20:01.423 controller-1 kubelet[88595]: info I1104 19:20:01.423808 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-88gsr" (UniqueName: "kubernetes.io/secret/493dbaa5-c410-4daf-993b-8f90e2e3f526-default-token-88gsr") pod "mon-nginx-ingress-default-backend-5997cfc99f-g2rbd" (UID: "493dbaa5-c410-4daf-993b-8f90e2e3f526") 2019-11-04T19:20:01.440 controller-1 systemd[1]: info Removed slice User Slice of root. 2019-11-04T19:20:01.545 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/c1dacd07-7924-45c2-9b2f-f5bcf5226ed6/volumes/kubernetes.io~secret/mon-metricbeat-token-5vdfc. 2019-11-04T19:20:01.560 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/493dbaa5-c410-4daf-993b-8f90e2e3f526/volumes/kubernetes.io~secret/default-token-88gsr. 2019-11-04T19:20:01.574 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b52283f2-29ea-4bae-aeb6-d7fe28283e5f/volumes/kubernetes.io~secret/default-token-88gsr. 2019-11-04T19:20:01.590 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/cda72498-f852-40c4-99af-4c321eec8d7e/volumes/kubernetes.io~secret/tiller-token-c6p8n. 2019-11-04T19:20:01.606 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/c1dacd07-7924-45c2-9b2f-f5bcf5226ed6/volumes/kubernetes.io~secret/modules. 2019-11-04T19:20:01.624 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/68fc5aa6-9d1d-4724-88ea-379ab2e66d82/volumes/kubernetes.io~secret/mon-kube-state-metrics-token-qj6tw. 2019-11-04T19:20:01.640 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/c1dacd07-7924-45c2-9b2f-f5bcf5226ed6/volumes/kubernetes.io~secret/metricbeat-config. 2019-11-04T19:20:01.705 controller-1 dockerd[12332]: info time="2019-11-04T19:20:01.705302099Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T19:20:01.711 controller-1 containerd[12214]: info time="2019-11-04T19:20:01.711234864Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32/shim.sock" debug=false pid=152535 2019-11-04T19:20:01.736 controller-1 dockerd[12332]: info time="2019-11-04T19:20:01.736033600Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. 
Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T19:20:01.741 controller-1 containerd[12214]: info time="2019-11-04T19:20:01.741122432Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc/shim.sock" debug=false pid=152551 2019-11-04T19:20:01.759 controller-1 dockerd[12332]: info time="2019-11-04T19:20:01.759237302Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T19:20:01.762 controller-1 dockerd[12332]: info time="2019-11-04T19:20:01.762747684Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T19:20:01.764 controller-1 containerd[12214]: info time="2019-11-04T19:20:01.764908355Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c/shim.sock" debug=false pid=152571 2019-11-04T19:20:01.767 controller-1 containerd[12214]: info time="2019-11-04T19:20:01.767837716Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03/shim.sock" debug=false pid=152581 2019-11-04T19:20:01.796 controller-1 containerd[12214]: info time="2019-11-04T19:20:01.796846830Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/475ada3213df7d6318cae85e6d82b9ca9951e35fff25753729425892623bb61a/shim.sock" debug=false pid=152609 2019-11-04T19:20:02.082 controller-1 kubelet[88595]: info W1104 19:20:02.082481 88595 pod_container_deletor.go:75] Container "475ada3213df7d6318cae85e6d82b9ca9951e35fff25753729425892623bb61a" not found in pod's containers 2019-11-04T19:20:02.238 controller-1 containerd[12214]: info time="2019-11-04T19:20:02.238507733Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dd3f347601f624e5a92fda57c5141156e3d98bb47e87ab387b0df77eec6263a3/shim.sock" debug=false pid=152820 2019-11-04T19:20:02.329 controller-1 kubelet[88595]: info W1104 19:20:02.329522 88595 pod_container_deletor.go:75] Container "a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" not found in pod's containers 2019-11-04T19:20:02.332 controller-1 kubelet[88595]: info W1104 19:20:02.332181 88595 pod_container_deletor.go:75] Container "e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" not found in pod's containers 2019-11-04T19:20:02.334 controller-1 kubelet[88595]: info W1104 19:20:02.334737 88595 pod_container_deletor.go:75] Container "6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" not found in pod's containers 2019-11-04T19:20:02.427 controller-1 kubelet[88595]: info W1104 19:20:02.427888 88595 pod_container_deletor.go:75] Container "b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" not found in pod's containers 2019-11-04T19:20:02.727 controller-1 kubelet[88595]: info I1104 19:20:02.727415 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ceph-etc" (UniqueName: "kubernetes.io/configmap/63339254-120f-499b-a172-54f3a09ee4ad-ceph-etc") pod "ceph-pools-audit-1572895200-h82f6" (UID: "63339254-120f-499b-a172-54f3a09ee4ad") 2019-11-04T19:20:02.727 controller-1 kubelet[88595]: info I1104 19:20:02.727448 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume 
started for volume "etcceph" (UniqueName: "kubernetes.io/empty-dir/63339254-120f-499b-a172-54f3a09ee4ad-etcceph") pod "ceph-pools-audit-1572895200-h82f6" (UID: "63339254-120f-499b-a172-54f3a09ee4ad") 2019-11-04T19:20:02.727 controller-1 kubelet[88595]: info I1104 19:20:02.727471 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ceph-pools-audit-token-bsfbw" (UniqueName: "kubernetes.io/secret/63339254-120f-499b-a172-54f3a09ee4ad-ceph-pools-audit-token-bsfbw") pod "ceph-pools-audit-1572895200-h82f6" (UID: "63339254-120f-499b-a172-54f3a09ee4ad") 2019-11-04T19:20:02.727 controller-1 kubelet[88595]: info I1104 19:20:02.727544 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ceph-pools-bin" (UniqueName: "kubernetes.io/configmap/63339254-120f-499b-a172-54f3a09ee4ad-ceph-pools-bin") pod "ceph-pools-audit-1572895200-h82f6" (UID: "63339254-120f-499b-a172-54f3a09ee4ad") 2019-11-04T19:20:02.837 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/63339254-120f-499b-a172-54f3a09ee4ad/volumes/kubernetes.io~secret/ceph-pools-audit-token-bsfbw. 2019-11-04T19:20:03.016 controller-1 dockerd[12332]: info time="2019-11-04T19:20:03.016220749Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T19:20:03.022 controller-1 containerd[12214]: info time="2019-11-04T19:20:03.022008458Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93/shim.sock" debug=false pid=153208 2019-11-04T19:20:05.278 controller-1 collectd[12276]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-1","degrade":"clear","resource":""} 2019-11-04T19:20:05.285 controller-1 collectd[12276]: info 2019-11-04 19:20:05,285 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', error(104, 'Connection reset by peer'))': /api/v1/pods?fieldSelector=spec.nodeName%3Dcontroller-1&watch=False 2019-11-04T19:20:05.285 controller-1 collectd[12276]: info 2019-11-04 19:20:05,285 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', error(104, 'Connection reset by peer'))': /api/v1/pods?fieldSelector=spec.nodeName%3Dcontroller-1&watch=False 2019-11-04T19:20:05.285 controller-1 collectd[12276]: info WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', error(104, 'Connection reset by peer'))': /api/v1/pods?fieldSelector=spec.nodeName%3Dcontroller-1&watch=False 2019-11-04T19:20:05.292 controller-1 collectd[12276]: info 2019-11-04 19:20:05,292 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', error(104, 'Connection reset by peer'))': /api/v1/pods?fieldSelector=spec.nodeName%3Dcontroller-1&watch=False 2019-11-04T19:20:05.292 controller-1 collectd[12276]: info 2019-11-04 19:20:05,292 WARNING Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', error(104, 'Connection reset by peer'))': 
/api/v1/pods?fieldSelector=spec.nodeName%3Dcontroller-1&watch=False 2019-11-04T19:20:05.292 controller-1 collectd[12276]: info WARNING:urllib3.connectionpool:Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', error(104, 'Connection reset by peer'))': /api/v1/pods?fieldSelector=spec.nodeName%3Dcontroller-1&watch=False 2019-11-04T19:20:05.534 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 10.5% (avg per cpu); cpus: 36, Platform: 8.6% (Base: 7.3, k8s-system: 1.4), k8s-addon: 1.5 2019-11-04T19:20:05.540 controller-1 collectd[12276]: info platform memory usage: Usage: 7.0%; Reserved: 125970.6 MiB, Platform: 8843.0 MiB (Base: 8231.4, k8s-system: 611.5), k8s-addon: 7061.8 2019-11-04T19:20:05.540 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.7%, Anon: 15986.2 MiB, cgroup-rss: 15908.9 MiB, Avail: 109984.3 MiB, Total: 125970.6 MiB 2019-11-04T19:20:05.540 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 7.99%, Anon: 5062.0 MiB, Avail: 58294.7 MiB, Total: 63356.7 MiB 2019-11-04T19:20:05.540 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.22%, Anon: 10924.3 MiB, Avail: 52524.6 MiB, Total: 63448.8 MiB 2019-11-04T19:20:07.995 controller-1 kubelet[88595]: info 2019-11-04 19:20:07.995 [INFO][154331] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"monitor", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"mon-nginx-ingress-default-backend-5997cfc99f-g2rbd", ContainerID:"e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03"}} 2019-11-04T19:20:07.995 controller-1 kubelet[88595]: info 2019-11-04 19:20:07.995 [INFO][154330] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"monitor", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"mon-kibana-6cf57cfd5b-7zrrf", ContainerID:"6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c"}} 2019-11-04T19:20:07.995 controller-1 kubelet[88595]: info 2019-11-04 19:20:07.995 [INFO][154336] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"monitor", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"mon-metricbeat-7948cd594c-pz6pb", ContainerID:"a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32"}} 2019-11-04T19:20:08.009 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.009 [INFO][154330] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-eth0 mon-kibana-6cf57cfd5b- monitor b52283f2-29ea-4bae-aeb6-d7fe28283e5f 8165697 0 2019-11-04 19:20:01 +0000 UTC map[projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default app:kibana pod-template-hash:6cf57cfd5b release:mon-kibana projectcalico.org/namespace:monitor] map[] [] nil [] } {k8s controller-1 mon-kibana-6cf57cfd5b-7zrrf eth0 [] [] [kns.monitor ksa.monitor.default] cali0dee4d738cd [{kibana TCP 5601}]}} ContainerID="6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" Namespace="monitor" Pod="mon-kibana-6cf57cfd5b-7zrrf" WorkloadEndpoint="controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-" 
2019-11-04T19:20:08.009 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.009 [INFO][154330] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" Namespace="monitor" Pod="mon-kibana-6cf57cfd5b-7zrrf" WorkloadEndpoint="controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-eth0" 2019-11-04T19:20:08.009 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.009 [INFO][154331] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-eth0 mon-nginx-ingress-default-backend-5997cfc99f- monitor 493dbaa5-c410-4daf-993b-8f90e2e3f526 8165758 0 2019-11-04 19:20:01 +0000 UTC map[projectcalico.org/serviceaccount:default app:nginx-ingress component:default-backend pod-template-hash:5997cfc99f release:mon-nginx-ingress projectcalico.org/namespace:monitor projectcalico.org/orchestrator:k8s] map[] [] nil [] } {k8s controller-1 mon-nginx-ingress-default-backend-5997cfc99f-g2rbd eth0 [] [] [kns.monitor ksa.monitor.default] cali7b987b3bd2d [{http TCP 8080}]}} ContainerID="e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" Namespace="monitor" Pod="mon-nginx-ingress-default-backend-5997cfc99f-g2rbd" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-" 2019-11-04T19:20:08.009 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.009 [INFO][154331] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" Namespace="monitor" Pod="mon-nginx-ingress-default-backend-5997cfc99f-g2rbd" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-eth0" 2019-11-04T19:20:08.010 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.010 [INFO][154336] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-eth0 mon-metricbeat-7948cd594c- monitor c1dacd07-7924-45c2-9b2f-f5bcf5226ed6 8165640 0 2019-11-04 19:20:01 +0000 UTC map[release:mon-metricbeat projectcalico.org/namespace:monitor projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:mon-metricbeat app:metricbeat pod-template-hash:7948cd594c] map[] [] nil [] } {k8s controller-1 mon-metricbeat-7948cd594c-pz6pb eth0 [] [] [kns.monitor ksa.monitor.mon-metricbeat] calia983dc8a7fc []}} ContainerID="a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" Namespace="monitor" Pod="mon-metricbeat-7948cd594c-pz6pb" WorkloadEndpoint="controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-" 2019-11-04T19:20:08.010 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.010 [INFO][154336] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" Namespace="monitor" Pod="mon-metricbeat-7948cd594c-pz6pb" WorkloadEndpoint="controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-eth0" 2019-11-04T19:20:08.011 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.011 [INFO][154330] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:monitor,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/monitor,UID:85834187-55ff-4686-b97c-c3f524d37f83,ResourceVersion:46120,Generation:0,CreationTimestamp:2019-10-25 19:07:17 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T19:20:08.012 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.012 [INFO][154331] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:monitor,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/monitor,UID:85834187-55ff-4686-b97c-c3f524d37f83,ResourceVersion:46120,Generation:0,CreationTimestamp:2019-10-25 19:07:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T19:20:08.012 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.012 [INFO][154336] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:monitor,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/monitor,UID:85834187-55ff-4686-b97c-c3f524d37f83,ResourceVersion:46120,Generation:0,CreationTimestamp:2019-10-25 19:07:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T19:20:08.013 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.012 [INFO][154330] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:mon-kibana-6cf57cfd5b-7zrrf,GenerateName:mon-kibana-6cf57cfd5b-,Namespace:monitor,SelfLink:/api/v1/namespaces/monitor/pods/mon-kibana-6cf57cfd5b-7zrrf,UID:b52283f2-29ea-4bae-aeb6-d7fe28283e5f,ResourceVersion:8165697,Generation:0,CreationTimestamp:2019-11-04 19:20:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: kibana,pod-template-hash: 6cf57cfd5b,release: mon-kibana,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet mon-kibana-6cf57cfd5b e65ebd26-0976-4ad0-be91-ff60e7736168 0xc00055e65a 0xc00055e65b}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{kibana {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:mon-kibana,},Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil}} {default-token-88gsr {nil nil nil nil nil &SecretVolumeSource{SecretName:default-token-88gsr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{kibana docker.elastic.co/kibana/kibana-oss:7.4.0 [] [] [{kibana 0 5601 TCP }] [] [] {map[cpu:{{1 0} {} 1 DecimalSI} memory:{{536870912 0} {} BinarySI}] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{268435456 0} {} BinarySI}]} [{kibana false /usr/share/kibana/config/kibana.yml kibana.yml } {default-token-88gsr true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{elastic-controller: 
enabled,},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00055e830} {node.kubernetes.io/unreachable Exists NoExecute 0xc00055e850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC ContainersNotReady containers with unready status: [kibana]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC ContainersNotReady containers with unready status: [kibana]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 19:20:01 +0000 UTC,ContainerStatuses:[{kibana {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.elastic.co/kibana/kibana-oss:7.4.0 }],QOSClass:Burstable,InitContainerStatuses:[],NominatedNodeName:,},} 2019-11-04T19:20:08.013 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.013 [INFO][154331] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:mon-nginx-ingress-default-backend-5997cfc99f-g2rbd,GenerateName:mon-nginx-ingress-default-backend-5997cfc99f-,Namespace:monitor,SelfLink:/api/v1/namespaces/monitor/pods/mon-nginx-ingress-default-backend-5997cfc99f-g2rbd,UID:493dbaa5-c410-4daf-993b-8f90e2e3f526,ResourceVersion:8165758,Generation:0,CreationTimestamp:2019-11-04 19:20:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: nginx-ingress,component: default-backend,pod-template-hash: 5997cfc99f,release: mon-nginx-ingress,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet mon-nginx-ingress-default-backend-5997cfc99f c29984b7-0d79-4745-9e0f-a936bc31f4c2 0xc0005639d7 0xc0005639d8}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{default-token-88gsr {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-88gsr,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx-ingress-default-backend k8s.gcr.io/defaultbackend:1.4 [] [] [{http 0 8080 TCP }] [] [] {map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{134217728 0} {} BinarySI}] map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{134217728 0} {} BinarySI}]} [{default-token-88gsr true /var/run/secrets/kubernetes.io/serviceaccount }] [] Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*60,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{elastic-controller: 
enabled,},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000563bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000563bd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC ContainersNotReady containers with unready status: [nginx-ingress-default-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC ContainersNotReady containers with unready status: [nginx-ingress-default-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 19:20:01 +0000 UTC,ContainerStatuses:[{nginx-ingress-default-backend {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 k8s.gcr.io/defaultbackend:1.4 }],QOSClass:Guaranteed,InitContainerStatuses:[],NominatedNodeName:,},} 2019-11-04T19:20:08.014 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.014 [INFO][154336] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:mon-metricbeat-7948cd594c-pz6pb,GenerateName:mon-metricbeat-7948cd594c-,Namespace:monitor,SelfLink:/api/v1/namespaces/monitor/pods/mon-metricbeat-7948cd594c-pz6pb,UID:c1dacd07-7924-45c2-9b2f-f5bcf5226ed6,ResourceVersion:8165640,Generation:0,CreationTimestamp:2019-11-04 19:20:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: metricbeat,pod-template-hash: 7948cd594c,release: mon-metricbeat,},Annotations:map[string]string{checksum/config: b879833c8e6ac1b5f7ced53229955294db69ca4897a6972a5306cc716a96d4b7,checksum/modules: f52784e24ce965a53273ddd27f34c7d1b651152b7f70754d871dc1f86decf183,},OwnerReferences:[{apps/v1 ReplicaSet mon-metricbeat-7948cd594c f1bfb642-6a6f-4885-b408-ff1f75387848 0xc000470a0a 0xc000470a0b}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{metricbeat-config {nil nil nil nil nil SecretVolumeSource{SecretName:mon-metricbeat-deployment-config,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {modules {nil nil nil nil nil &SecretVolumeSource{SecretName:mon-metricbeat-deployment-modules,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {root {&HostPathVolumeSource{Path:/,Type:*,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {mon-metricbeat-token-5vdfc {nil nil nil nil nil &SecretVolumeSource{SecretName:mon-metricbeat-token-5vdfc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{metricbeat docker.elastic.co/beats/metricbeat-oss:7.4.0 [] [-e] [] [] [{POD_NAMESPACE 
EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {NODE_NAME &EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}} {SYSTEM_NAME_FOR_INDEX -yow-cgcs-wildcat-35-60 nil} {INDEX_PATTERN metricbeat-%{[agent.version]}-yow-cgcs-wildcat-35-60-* nil} {INDEX_NAME metricbeat-%{[agent.version]}-yow-cgcs-wildcat-35-60 nil} {KUBE_STATE_METRICS_HOST mon-kube-state-metrics nil}] {map[cpu:{{180 -3} {} 180m DecimalSI} memory:{{536870912 0} {} BinarySI}] map[memory:{{536870912 0} {} BinarySI} cpu:{{50 -3} {} 50m DecimalSI}]} [{metricbeat-config true /usr/share/metricbeat/metricbeat.yml metricbeat.yml } {modules true /usr/share/metricbeat/modules.d } {root true /hostfs } {mon-metricbeat-token-5vdfc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent &SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{elastic-controller: enabled,},ServiceAccountName:mon-metricbeat,DeprecatedServiceAccount:mon-metricbeat,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000471980} {node.kubernetes.io/unreachable Exists NoExecute 0xc0004719f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC ContainersNotReady containers with unready status: [metricbeat]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC ContainersNotReady containers with unready status: [metricbeat]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 19:20:01 +0000 UTC,ContainerStatuses:[{metricbeat {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.elastic.co/beats/metricbeat-oss:7.4.0 }],QOSClass:Burstable,InitContainerStatuses:[],NominatedNodeName:,},} 2019-11-04T19:20:08.033 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.033 [INFO][154408] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" HandleID="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" Workload="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-eth0" 2019-11-04T19:20:08.033 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.033 [INFO][154404] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" 
HandleID="chain.6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" Workload="controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-eth0" 2019-11-04T19:20:08.033 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.033 [INFO][154409] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" HandleID="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" Workload="controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-eth0" 2019-11-04T19:20:08.041 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.041 [INFO][154408] ipam_plugin.go 220: Calico CNI IPAM handle=chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03 ContainerID="e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" HandleID="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" Workload="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-eth0" 2019-11-04T19:20:08.041 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.041 [INFO][154404] ipam_plugin.go 220: Calico CNI IPAM handle=chain.6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c ContainerID="6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" HandleID="chain.6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" Workload="controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-eth0" 2019-11-04T19:20:08.041 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.041 [INFO][154408] ipam_plugin.go 230: Auto assigning IP ContainerID="e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" HandleID="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" Workload="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc0001e37f0), Attrs:map[string]string{"node":"controller-1", "pod":"mon-nginx-ingress-default-backend-5997cfc99f-g2rbd", "namespace":"monitor"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T19:20:08.041 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.041 [INFO][154404] ipam_plugin.go 230: Auto assigning IP ContainerID="6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" HandleID="chain.6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" Workload="controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc00000fb60), Attrs:map[string]string{"pod":"mon-kibana-6cf57cfd5b-7zrrf", "namespace":"monitor", "node":"controller-1"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T19:20:08.041 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.041 [INFO][154408] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T19:20:08.041 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.041 [INFO][154404] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T19:20:08.043 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.043 [INFO][154409] ipam_plugin.go 220: Calico CNI IPAM handle=chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32 ContainerID="a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" HandleID="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" Workload="controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-eth0" 
2019-11-04T19:20:08.043 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.043 [INFO][154409] ipam_plugin.go 230: Auto assigning IP ContainerID="a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" HandleID="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" Workload="controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc0002d5b60), Attrs:map[string]string{"pod":"mon-metricbeat-7948cd594c-pz6pb", "namespace":"monitor", "node":"controller-1"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T19:20:08.043 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.043 [INFO][154409] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T19:20:08.045 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.045 [INFO][154408] ipam.go 309: Looking up existing affinities for host handle="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" host="controller-1" 2019-11-04T19:20:08.045 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.045 [INFO][154404] ipam.go 309: Looking up existing affinities for host handle="chain.6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" host="controller-1" 2019-11-04T19:20:08.047 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.047 [INFO][154409] ipam.go 309: Looking up existing affinities for host handle="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" host="controller-1" 2019-11-04T19:20:08.049 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.048 [INFO][154404] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" host="controller-1" 2019-11-04T19:20:08.049 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.048 [INFO][154408] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" host="controller-1" 2019-11-04T19:20:08.050 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.050 [INFO][154408] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:08.050 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.050 [INFO][154404] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:08.050 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.050 [INFO][154409] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" host="controller-1" 2019-11-04T19:20:08.052 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.052 [INFO][154409] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:08.052 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.052 [INFO][154404] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:08.052 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.052 [INFO][154408] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:08.052 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.052 [INFO][154404] ipam.go 789: Attempting to assign 1 addresses from block 
block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" host="controller-1" 2019-11-04T19:20:08.052 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.052 [INFO][154408] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" host="controller-1" 2019-11-04T19:20:08.053 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.053 [INFO][154404] ipam.go 1244: Creating new handle: chain.6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c 2019-11-04T19:20:08.053 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.053 [INFO][154408] ipam.go 1244: Creating new handle: chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03 2019-11-04T19:20:08.054 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.053 [INFO][154409] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:08.054 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.054 [INFO][154409] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" host="controller-1" 2019-11-04T19:20:08.055 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.054 [INFO][154409] ipam.go 1244: Creating new handle: chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32 2019-11-04T19:20:08.055 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.055 [INFO][154404] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" host="controller-1" 2019-11-04T19:20:08.056 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.056 [INFO][154408] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" host="controller-1" 2019-11-04T19:20:08.058 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.058 [INFO][154404] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e336/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" host="controller-1" 2019-11-04T19:20:08.058 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.058 [INFO][154404] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e336/122] handle="chain.6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" host="controller-1" 2019-11-04T19:20:08.058 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.058 [INFO][154409] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" host="controller-1" 2019-11-04T19:20:08.059 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.059 [INFO][154404] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e336/122] handle="chain.6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" host="controller-1" 2019-11-04T19:20:08.059 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.059 [INFO][154404] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e336/122] 
ContainerID="6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" HandleID="chain.6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" Workload="controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-eth0" 2019-11-04T19:20:08.059 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.059 [INFO][154404] ipam_plugin.go 258: IPAM Result ContainerID="6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" HandleID="chain.6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" Workload="controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc0000c8420)} 2019-11-04T19:20:08.059 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.059 [ERROR][154408] customresource.go 136: Error updating resource Key=IPAMBlock(fd00-206--a4ce-fec1-5423-e300-122) Name="fd00-206--a4ce-fec1-5423-e300-122" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"fd00-206--a4ce-fec1-5423-e300-122", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"8164194", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.IPAMBlockSpec{CIDR:"fd00:206::a4ce:fec1:5423:e300/122", Affinity:(*string)(0xc0001d3560), StrictAffinity:false, Allocations:[]*int{(*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(0xc00046e108), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(0xc00046e140), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(0xc00046e148), (*int)(nil), (*int)(0xc00046e150), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(0xc00046e158), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(0xc00046e3f8), (*int)(nil), (*int)(0xc00046e160), (*int)(0xc00046e168), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{2, 5, 15, 21, 16, 1, 48, 50, 38, 60, 3, 24, 17, 45, 9, 31, 49, 58, 8, 34, 12, 23, 59, 42, 14, 11, 61, 25, 4, 28, 36, 22, 62, 39, 40, 52, 30, 51, 55, 27, 20, 26, 53, 13, 6, 19, 43, 47, 63, 7, 32, 10, 46, 44, 0, 37}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc0001d35b0), AttrSecondary:map[string]string{"namespace":"monitor", "node":"controller-1", "pod":"mon-filebeat-bppwv"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0001d3660), AttrSecondary:map[string]string{"namespace":"kube-system", "node":"controller-1", "pod":"coredns-6bc668cd76-6dtt6"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0001d36d0), AttrSecondary:map[string]string{"node":"controller-1", "pod":"mon-elasticsearch-client-1", "namespace":"monitor"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0001d3740), AttrSecondary:map[string]string{"namespace":"monitor", "node":"controller-1", "pod":"mon-nginx-ingress-controller-b8l4k"}}, 
v3.AllocationAttribute{AttrPrimary:(*string)(0xc0001d37b0), AttrSecondary:map[string]string{"namespace":"monitor", "node":"controller-1", "pod":"mon-elasticsearch-master-1"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0001d3820), AttrSecondary:map[string]string{"namespace":"monitor", "node":"controller-1", "pod":"mon-elasticsearch-data-1"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0001d3890), AttrSecondary:map[string]string{"namespace":"kube-system", "node":"controller-1", "pod":"rbd-provisioner-7484d49cf6-w6dzr"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0001e37f0), AttrSecondary:map[string]string{"node":"controller-1", "pod":"mon-nginx-ingress-default-backend-5997cfc99f-g2rbd", "namespace":"monitor"}}}, Deleted:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "fd00-206--a4ce-fec1-5423-e300-122": the object has been modified; please apply your changes to the latest version and try again 2019-11-04T19:20:08.059 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.059 [INFO][154408] ipam.go 816: Failed to update block block=fd00:206::a4ce:fec1:5423:e300/122 error=update conflict: IPAMBlock(fd00-206--a4ce-fec1-5423-e300-122) handle="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" host="controller-1" 2019-11-04T19:20:08.060 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.060 [ERROR][154409] customresource.go 136: Error updating resource Key=IPAMBlock(fd00-206--a4ce-fec1-5423-e300-122) Name="fd00-206--a4ce-fec1-5423-e300-122" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"fd00-206--a4ce-fec1-5423-e300-122", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"8164194", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.IPAMBlockSpec{CIDR:"fd00:206::a4ce:fec1:5423:e300/122", Affinity:(*string)(0xc0003a2660), StrictAffinity:false, Allocations:[]*int{(*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(0xc000495c28), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(0xc000495c30), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(0xc000495c38), (*int)(nil), (*int)(0xc000495c40), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(0xc000495c48), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(0xc000495e28), (*int)(nil), (*int)(0xc000495c50), (*int)(0xc000495c58), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{2, 5, 15, 21, 16, 1, 48, 50, 38, 60, 3, 24, 17, 45, 9, 31, 49, 58, 8, 34, 12, 23, 59, 42, 14, 11, 61, 25, 4, 28, 36, 22, 62, 39, 40, 52, 30, 51, 55, 27, 20, 26, 53, 13, 6, 19, 43, 47, 63, 7, 32, 10, 46, 44, 0, 37}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc0003a2690), 
AttrSecondary:map[string]string{"node":"controller-1", "pod":"mon-filebeat-bppwv", "namespace":"monitor"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0003a2700), AttrSecondary:map[string]string{"node":"controller-1", "pod":"coredns-6bc668cd76-6dtt6", "namespace":"kube-system"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0003a2770), AttrSecondary:map[string]string{"namespace":"monitor", "node":"controller-1", "pod":"mon-elasticsearch-client-1"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0003a27e0), AttrSecondary:map[string]string{"namespace":"monitor", "node":"controller-1", "pod":"mon-nginx-ingress-controller-b8l4k"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0003a2850), AttrSecondary:map[string]string{"pod":"mon-elasticsearch-master-1", "namespace":"monitor", "node":"controller-1"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0003a28c0), AttrSecondary:map[string]string{"node":"controller-1", "pod":"mon-elasticsearch-data-1", "namespace":"monitor"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0003a2930), AttrSecondary:map[string]string{"pod":"rbd-provisioner-7484d49cf6-w6dzr", "namespace":"kube-system", "node":"controller-1"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0002d5b60), AttrSecondary:map[string]string{"node":"controller-1", "pod":"mon-metricbeat-7948cd594c-pz6pb", "namespace":"monitor"}}}, Deleted:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "fd00-206--a4ce-fec1-5423-e300-122": the object has been modified; please apply your changes to the latest version and try again 2019-11-04T19:20:08.060 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.060 [INFO][154409] ipam.go 816: Failed to update block block=fd00:206::a4ce:fec1:5423:e300/122 error=update conflict: IPAMBlock(fd00-206--a4ce-fec1-5423-e300-122) handle="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" host="controller-1" 2019-11-04T19:20:08.061 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.061 [INFO][154330] k8s.go 361: Populated endpoint ContainerID="6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" Namespace="monitor" Pod="mon-kibana-6cf57cfd5b-7zrrf" WorkloadEndpoint="controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-eth0", GenerateName:"mon-kibana-6cf57cfd5b-", Namespace:"monitor", SelfLink:"", UID:"b52283f2-29ea-4bae-aeb6-d7fe28283e5f", ResourceVersion:"8165697", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492001, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"release":"mon-kibana", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default", "app":"kibana", "pod-template-hash":"6cf57cfd5b"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-kibana-6cf57cfd5b-7zrrf", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e336/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"cali0dee4d738cd", MAC:"", 
Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"kibana", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x15e1}}}} 2019-11-04T19:20:08.061 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.061 [INFO][154330] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e336/128] ContainerID="6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" Namespace="monitor" Pod="mon-kibana-6cf57cfd5b-7zrrf" WorkloadEndpoint="controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-eth0" 2019-11-04T19:20:08.061 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.061 [INFO][154330] network_linux.go 76: Setting the host side veth name to cali0dee4d738cd ContainerID="6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" Namespace="monitor" Pod="mon-kibana-6cf57cfd5b-7zrrf" WorkloadEndpoint="controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-eth0" 2019-11-04T19:20:08.064 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.064 [INFO][154330] network_linux.go 411: Disabling IPv6 forwarding ContainerID="6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" Namespace="monitor" Pod="mon-kibana-6cf57cfd5b-7zrrf" WorkloadEndpoint="controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-eth0" 2019-11-04T19:20:08.065 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.065 [INFO][154408] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" host="controller-1" 2019-11-04T19:20:08.065 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.065 [INFO][154409] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" host="controller-1" 2019-11-04T19:20:08.066 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.066 [INFO][154408] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:08.066 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.066 [INFO][154409] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:08.067 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.067 [INFO][154408] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:08.067 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.067 [INFO][154408] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" host="controller-1" 2019-11-04T19:20:08.067 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.067 [INFO][154409] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:08.067 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.067 [INFO][154409] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" host="controller-1" 2019-11-04T19:20:08.067 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.067 [INFO][154408] ipam.go 1244: Creating new handle: chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03 2019-11-04T19:20:08.068 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.068 [INFO][154409] ipam.go 1244: Creating new handle: 
chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32 2019-11-04T19:20:08.069 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.069 [INFO][154408] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" host="controller-1" 2019-11-04T19:20:08.070 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.070 [INFO][154409] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" host="controller-1" 2019-11-04T19:20:08.071 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.071 [INFO][154408] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e302/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" host="controller-1" 2019-11-04T19:20:08.071 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.071 [INFO][154408] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e302/122] handle="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" host="controller-1" 2019-11-04T19:20:08.072 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.071 [INFO][154408] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e302/122] handle="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" host="controller-1" 2019-11-04T19:20:08.072 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.072 [INFO][154408] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e302/122] ContainerID="e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" HandleID="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" Workload="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-eth0" 2019-11-04T19:20:08.072 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.071 [ERROR][154409] customresource.go 136: Error updating resource Key=IPAMBlock(fd00-206--a4ce-fec1-5423-e300-122) Name="fd00-206--a4ce-fec1-5423-e300-122" Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"fd00-206--a4ce-fec1-5423-e300-122", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"8165834", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.IPAMBlockSpec{CIDR:"fd00:206::a4ce:fec1:5423:e300/122", Affinity:(*string)(0xc0002d43c0), StrictAffinity:false, Allocations:[]*int{(*int)(nil), (*int)(nil), (*int)(0xc0000f59c0), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(0xc0000f50f8), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(0xc0000f5100), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(0xc0000f5108), (*int)(nil), (*int)(0xc0000f5110), (*int)(nil), (*int)(nil), (*int)(nil), 
(*int)(nil), (*int)(nil), (*int)(0xc0000f5118), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(0xc0000f5120), (*int)(nil), (*int)(0xc0000f5128), (*int)(0xc0000f5130), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{5, 15, 21, 16, 1, 48, 50, 38, 60, 3, 24, 17, 45, 9, 31, 49, 58, 8, 34, 12, 23, 59, 42, 14, 11, 61, 25, 4, 28, 36, 22, 62, 39, 40, 52, 30, 51, 55, 27, 20, 26, 53, 13, 6, 19, 43, 47, 63, 7, 32, 10, 46, 44, 0, 37}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc0002d43f0), AttrSecondary:map[string]string{"namespace":"monitor", "node":"controller-1", "pod":"mon-filebeat-bppwv"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0002d4460), AttrSecondary:map[string]string{"namespace":"kube-system", "node":"controller-1", "pod":"coredns-6bc668cd76-6dtt6"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0002d44d0), AttrSecondary:map[string]string{"namespace":"monitor", "node":"controller-1", "pod":"mon-elasticsearch-client-1"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0002d4540), AttrSecondary:map[string]string{"namespace":"monitor", "node":"controller-1", "pod":"mon-nginx-ingress-controller-b8l4k"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0002d45b0), AttrSecondary:map[string]string{"pod":"mon-elasticsearch-master-1", "namespace":"monitor", "node":"controller-1"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0002d4620), AttrSecondary:map[string]string{"namespace":"monitor", "node":"controller-1", "pod":"mon-elasticsearch-data-1"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0002d4690), AttrSecondary:map[string]string{"namespace":"kube-system", "node":"controller-1", "pod":"rbd-provisioner-7484d49cf6-w6dzr"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0002d4700), AttrSecondary:map[string]string{"namespace":"monitor", "node":"controller-1", "pod":"mon-kibana-6cf57cfd5b-7zrrf"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc0002d5b60), AttrSecondary:map[string]string{"node":"controller-1", "pod":"mon-metricbeat-7948cd594c-pz6pb", "namespace":"monitor"}}}, Deleted:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "fd00-206--a4ce-fec1-5423-e300-122": the object has been modified; please apply your changes to the latest version and try again 2019-11-04T19:20:08.072 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.072 [INFO][154408] ipam_plugin.go 258: IPAM Result ContainerID="e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" HandleID="chain.e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" Workload="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc000428300)} 2019-11-04T19:20:08.072 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.072 [INFO][154409] ipam.go 816: Failed to update block block=fd00:206::a4ce:fec1:5423:e300/122 error=update conflict: IPAMBlock(fd00-206--a4ce-fec1-5423-e300-122) handle="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" host="controller-1" 2019-11-04T19:20:08.076 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.076 [INFO][154331] k8s.go 361: Populated endpoint ContainerID="e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" Namespace="monitor" Pod="mon-nginx-ingress-default-backend-5997cfc99f-g2rbd" 
WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-eth0", GenerateName:"mon-nginx-ingress-default-backend-5997cfc99f-", Namespace:"monitor", SelfLink:"", UID:"493dbaa5-c410-4daf-993b-8f90e2e3f526", ResourceVersion:"8165758", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492001, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"pod-template-hash":"5997cfc99f", "release":"mon-nginx-ingress", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default", "app":"nginx-ingress", "component":"default-backend"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-nginx-ingress-default-backend-5997cfc99f-g2rbd", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e302/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"cali7b987b3bd2d", MAC:"", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90}}}} 2019-11-04T19:20:08.076 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.076 [INFO][154331] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e302/128] ContainerID="e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" Namespace="monitor" Pod="mon-nginx-ingress-default-backend-5997cfc99f-g2rbd" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-eth0" 2019-11-04T19:20:08.076 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.076 [INFO][154331] network_linux.go 76: Setting the host side veth name to cali7b987b3bd2d ContainerID="e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" Namespace="monitor" Pod="mon-nginx-ingress-default-backend-5997cfc99f-g2rbd" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-eth0" 2019-11-04T19:20:08.077 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.077 [INFO][154409] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" host="controller-1" 2019-11-04T19:20:08.078 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.078 [INFO][154409] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:08.079 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.079 [INFO][154409] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:08.079 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.079 [INFO][154409] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" host="controller-1" 2019-11-04T19:20:08.080 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.080 [INFO][154409] 
ipam.go 1244: Creating new handle: chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32 2019-11-04T19:20:08.081 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.081 [INFO][154409] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" host="controller-1" 2019-11-04T19:20:08.083 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.083 [INFO][154409] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e305/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" host="controller-1" 2019-11-04T19:20:08.083 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.083 [INFO][154409] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e305/122] handle="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" host="controller-1" 2019-11-04T19:20:08.084 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.084 [INFO][154409] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e305/122] handle="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" host="controller-1" 2019-11-04T19:20:08.084 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.084 [INFO][154409] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e305/122] ContainerID="a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" HandleID="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" Workload="controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-eth0" 2019-11-04T19:20:08.084 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.084 [INFO][154409] ipam_plugin.go 258: IPAM Result ContainerID="a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" HandleID="chain.a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" Workload="controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc00015e960)} 2019-11-04T19:20:08.086 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.086 [INFO][154336] k8s.go 361: Populated endpoint ContainerID="a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" Namespace="monitor" Pod="mon-metricbeat-7948cd594c-pz6pb" WorkloadEndpoint="controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-eth0", GenerateName:"mon-metricbeat-7948cd594c-", Namespace:"monitor", SelfLink:"", UID:"c1dacd07-7924-45c2-9b2f-f5bcf5226ed6", ResourceVersion:"8165640", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492001, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"release":"mon-metricbeat", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"mon-metricbeat", "app":"metricbeat", "pod-template-hash":"7948cd594c"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", 
Pod:"mon-metricbeat-7948cd594c-pz6pb", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e305/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-metricbeat"}, InterfaceName:"calia983dc8a7fc", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:20:08.086 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.086 [INFO][154336] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e305/128] ContainerID="a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" Namespace="monitor" Pod="mon-metricbeat-7948cd594c-pz6pb" WorkloadEndpoint="controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-eth0" 2019-11-04T19:20:08.086 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.086 [INFO][154336] network_linux.go 76: Setting the host side veth name to calia983dc8a7fc ContainerID="a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" Namespace="monitor" Pod="mon-metricbeat-7948cd594c-pz6pb" WorkloadEndpoint="controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-eth0" 2019-11-04T19:20:08.106 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.106 [INFO][154330] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" Namespace="monitor" Pod="mon-kibana-6cf57cfd5b-7zrrf" WorkloadEndpoint="controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-eth0", GenerateName:"mon-kibana-6cf57cfd5b-", Namespace:"monitor", SelfLink:"", UID:"b52283f2-29ea-4bae-aeb6-d7fe28283e5f", ResourceVersion:"8165697", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492001, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default", "app":"kibana", "pod-template-hash":"6cf57cfd5b", "release":"mon-kibana"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c", Pod:"mon-kibana-6cf57cfd5b-7zrrf", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e336/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"cali0dee4d738cd", MAC:"6e:9b:b8:e2:1e:52", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"kibana", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x15e1}}}} 2019-11-04T19:20:08.109 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.109 [INFO][154330] k8s.go 420: Wrote updated endpoint to datastore ContainerID="6e69a2aaa19600960e813eb67ef9189e0462c5526bf43e2eb6ed227cd7bcf71c" Namespace="monitor" Pod="mon-kibana-6cf57cfd5b-7zrrf" WorkloadEndpoint="controller--1-k8s-mon--kibana--6cf57cfd5b--7zrrf-eth0" 2019-11-04T19:20:08.110 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.110 [INFO][154336] network_linux.go 411: Disabling IPv6 forwarding ContainerID="a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" Namespace="monitor" 
Pod="mon-metricbeat-7948cd594c-pz6pb" WorkloadEndpoint="controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-eth0" 2019-11-04T19:20:08.125 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.124 [INFO][154472] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"monitor", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"mon-kube-state-metrics-59947d74fb-qww9j", ContainerID:"b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc"}} 2019-11-04T19:20:08.142 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.142 [INFO][154472] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-eth0 mon-kube-state-metrics-59947d74fb- monitor 68fc5aa6-9d1d-4724-88ea-379ab2e66d82 8165680 0 2019-11-04 19:20:01 +0000 UTC map[projectcalico.org/serviceaccount:mon-kube-state-metrics app.kubernetes.io/instance:mon-kube-state-metrics app.kubernetes.io/name:kube-state-metrics pod-template-hash:59947d74fb release:mon-kube-state-metrics projectcalico.org/namespace:monitor projectcalico.org/orchestrator:k8s] map[] [] nil [] } {k8s controller-1 mon-kube-state-metrics-59947d74fb-qww9j eth0 [] [] [kns.monitor ksa.monitor.mon-kube-state-metrics] calic159a87595a []}} ContainerID="b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" Namespace="monitor" Pod="mon-kube-state-metrics-59947d74fb-qww9j" WorkloadEndpoint="controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-" 2019-11-04T19:20:08.142 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.142 [INFO][154472] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" Namespace="monitor" Pod="mon-kube-state-metrics-59947d74fb-qww9j" WorkloadEndpoint="controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-eth0" 2019-11-04T19:20:08.145 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.145 [INFO][154472] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:monitor,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/monitor,UID:85834187-55ff-4686-b97c-c3f524d37f83,ResourceVersion:46120,Generation:0,CreationTimestamp:2019-10-25 19:07:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T19:20:08.146 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.146 [INFO][154472] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:mon-kube-state-metrics-59947d74fb-qww9j,GenerateName:mon-kube-state-metrics-59947d74fb-,Namespace:monitor,SelfLink:/api/v1/namespaces/monitor/pods/mon-kube-state-metrics-59947d74fb-qww9j,UID:68fc5aa6-9d1d-4724-88ea-379ab2e66d82,ResourceVersion:8165680,Generation:0,CreationTimestamp:2019-11-04 19:20:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app.kubernetes.io/instance: mon-kube-state-metrics,app.kubernetes.io/name: kube-state-metrics,pod-template-hash: 59947d74fb,release: mon-kube-state-metrics,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet mon-kube-state-metrics-59947d74fb f52b2960-b9a4-4345-b260-f940f8a8cad3 0xc0003d9fea 
0xc0003d9feb}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{mon-kube-state-metrics-token-qj6tw {nil nil nil nil nil SecretVolumeSource{SecretName:mon-kube-state-metrics-token-qj6tw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{kube-state-metrics quay.io/coreos/kube-state-metrics:v1.8.0 [] [--collectors=certificatesigningrequests --collectors=configmaps --collectors=cronjobs --collectors=daemonsets --collectors=deployments --collectors=endpoints --collectors=horizontalpodautoscalers --collectors=ingresses --collectors=jobs --collectors=limitranges --collectors=namespaces --collectors=nodes --collectors=persistentvolumeclaims --collectors=persistentvolumes --collectors=poddisruptionbudgets --collectors=pods --collectors=replicasets --collectors=replicationcontrollers --collectors=resourcequotas --collectors=secrets --collectors=services --collectors=statefulsets --collectors=storageclasses] [{ 0 8080 TCP }] [] [] {map[cpu:{{100 -3} {} 100m DecimalSI} memory:{{268435456 0} {} BinarySI}] map[cpu:{{50 -3} {} 50m DecimalSI} memory:{{268435456 0} {} BinarySI}]} [{mon-kube-state-metrics-token-qj6tw true /var/run/secrets/kubernetes.io/serviceaccount }] [] Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} &Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{elastic-controller: enabled,},ServiceAccountName:mon-kube-state-metrics,DeprecatedServiceAccount:mon-kube-state-metrics,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:*65534,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:*65534,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00040e7a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00040e7c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC ContainersNotReady containers with unready status: [kube-state-metrics]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC ContainersNotReady containers with unready status: [kube-state-metrics]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:01 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 19:20:01 +0000 UTC,ContainerStatuses:[{kube-state-metrics {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 quay.io/coreos/kube-state-metrics:v1.8.0 }],QOSClass:Burstable,InitContainerStatuses:[],NominatedNodeName:,},} 2019-11-04T19:20:08.151 controller-1 kubelet[88595]: info 2019-11-04 
19:20:08.151 [INFO][154331] network_linux.go 411: Disabling IPv6 forwarding ContainerID="e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" Namespace="monitor" Pod="mon-nginx-ingress-default-backend-5997cfc99f-g2rbd" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-eth0" 2019-11-04T19:20:08.166 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.165 [INFO][154529] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" HandleID="chain.b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" Workload="controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-eth0" 2019-11-04T19:20:08.174 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.173 [INFO][154529] ipam_plugin.go 220: Calico CNI IPAM handle=chain.b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc ContainerID="b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" HandleID="chain.b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" Workload="controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-eth0" 2019-11-04T19:20:08.174 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.173 [INFO][154529] ipam_plugin.go 230: Auto assigning IP ContainerID="b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" HandleID="chain.b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" Workload="controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc0002de560), Attrs:map[string]string{"node":"controller-1", "pod":"mon-kube-state-metrics-59947d74fb-qww9j", "namespace":"monitor"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T19:20:08.174 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.174 [INFO][154529] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T19:20:08.178 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.177 [INFO][154529] ipam.go 309: Looking up existing affinities for host handle="chain.b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" host="controller-1" 2019-11-04T19:20:08.181 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.181 [INFO][154529] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" host="controller-1" 2019-11-04T19:20:08.183 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.183 [INFO][154529] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:08.185 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.185 [INFO][154529] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:08.186 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.185 [INFO][154529] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" host="controller-1" 2019-11-04T19:20:08.187 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.187 [INFO][154529] ipam.go 1244: Creating new handle: chain.b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc 2019-11-04T19:20:08.189 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.188 [INFO][154331] k8s.go 388: 
Added Mac, interface name, and active container ID to endpoint ContainerID="e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" Namespace="monitor" Pod="mon-nginx-ingress-default-backend-5997cfc99f-g2rbd" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-eth0", GenerateName:"mon-nginx-ingress-default-backend-5997cfc99f-", Namespace:"monitor", SelfLink:"", UID:"493dbaa5-c410-4daf-993b-8f90e2e3f526", ResourceVersion:"8165758", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492001, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default", "app":"nginx-ingress", "component":"default-backend", "pod-template-hash":"5997cfc99f", "release":"mon-nginx-ingress", "projectcalico.org/namespace":"monitor"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03", Pod:"mon-nginx-ingress-default-backend-5997cfc99f-g2rbd", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e302/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.default"}, InterfaceName:"cali7b987b3bd2d", MAC:"02:c9:eb:a0:87:b2", Ports:[]v3.EndpointPort{v3.EndpointPort{Name:"http", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90}}}} 2019-11-04T19:20:08.189 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.189 [INFO][154529] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" host="controller-1" 2019-11-04T19:20:08.190 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.189 [INFO][154336] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" Namespace="monitor" Pod="mon-metricbeat-7948cd594c-pz6pb" WorkloadEndpoint="controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-eth0", GenerateName:"mon-metricbeat-7948cd594c-", Namespace:"monitor", SelfLink:"", UID:"c1dacd07-7924-45c2-9b2f-f5bcf5226ed6", ResourceVersion:"8165640", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492001, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"metricbeat", "pod-template-hash":"7948cd594c", "release":"mon-metricbeat", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"mon-metricbeat"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32", Pod:"mon-metricbeat-7948cd594c-pz6pb", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e305/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-metricbeat"}, InterfaceName:"calia983dc8a7fc", MAC:"26:4a:49:13:ff:a3", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:20:08.192 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.192 [INFO][154331] k8s.go 420: Wrote updated endpoint to datastore ContainerID="e71305dedc8a9df15d0114d82aeb57c7ff459f8ff22ee1e3d5622bc419cdfb03" Namespace="monitor" Pod="mon-nginx-ingress-default-backend-5997cfc99f-g2rbd" WorkloadEndpoint="controller--1-k8s-mon--nginx--ingress--default--backend--5997cfc99f--g2rbd-eth0" 2019-11-04T19:20:08.192 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.192 [INFO][154529] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e30f/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" host="controller-1" 2019-11-04T19:20:08.192 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.192 [INFO][154529] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e30f/122] handle="chain.b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" host="controller-1" 2019-11-04T19:20:08.193 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.193 [INFO][154336] k8s.go 420: Wrote updated endpoint to datastore ContainerID="a9e89b5a94645685cd86dd8aeabc3688162b63991b466444268f974ccc439a32" Namespace="monitor" Pod="mon-metricbeat-7948cd594c-pz6pb" WorkloadEndpoint="controller--1-k8s-mon--metricbeat--7948cd594c--pz6pb-eth0" 2019-11-04T19:20:08.194 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.194 [INFO][154529] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e30f/122] handle="chain.b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" host="controller-1" 2019-11-04T19:20:08.194 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.194 [INFO][154529] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e30f/122] ContainerID="b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" HandleID="chain.b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" Workload="controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-eth0" 2019-11-04T19:20:08.194 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.194 [INFO][154529] ipam_plugin.go 258: IPAM Result ContainerID="b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" HandleID="chain.b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" Workload="controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc000142a20)} 2019-11-04T19:20:08.196 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.196 [INFO][154472] k8s.go 361: Populated endpoint ContainerID="b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" Namespace="monitor" Pod="mon-kube-state-metrics-59947d74fb-qww9j" WorkloadEndpoint="controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-eth0", GenerateName:"mon-kube-state-metrics-59947d74fb-", Namespace:"monitor", SelfLink:"", UID:"68fc5aa6-9d1d-4724-88ea-379ab2e66d82", ResourceVersion:"8165680", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492001, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/instance":"mon-kube-state-metrics", "app.kubernetes.io/name":"kube-state-metrics", "pod-template-hash":"59947d74fb", "release":"mon-kube-state-metrics", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"mon-kube-state-metrics"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"mon-kube-state-metrics-59947d74fb-qww9j", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e30f/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-kube-state-metrics"}, InterfaceName:"calic159a87595a", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:20:08.196 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.196 [INFO][154472] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e30f/128] ContainerID="b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" Namespace="monitor" Pod="mon-kube-state-metrics-59947d74fb-qww9j" WorkloadEndpoint="controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-eth0" 2019-11-04T19:20:08.196 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.196 [INFO][154472] network_linux.go 76: Setting the host side veth name to calic159a87595a ContainerID="b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" Namespace="monitor" Pod="mon-kube-state-metrics-59947d74fb-qww9j" WorkloadEndpoint="controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-eth0" 2019-11-04T19:20:08.199 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.199 [INFO][154472] network_linux.go 411: Disabling IPv6 forwarding ContainerID="b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" Namespace="monitor" Pod="mon-kube-state-metrics-59947d74fb-qww9j" WorkloadEndpoint="controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-eth0" 2019-11-04T19:20:08.210 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b52283f2-29ea-4bae-aeb6-d7fe28283e5f/volume-subpaths/kibana/kibana/0. 
2019-11-04T19:20:08.244 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.244 [INFO][154472] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" Namespace="monitor" Pod="mon-kube-state-metrics-59947d74fb-qww9j" WorkloadEndpoint="controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-eth0", GenerateName:"mon-kube-state-metrics-59947d74fb-", Namespace:"monitor", SelfLink:"", UID:"68fc5aa6-9d1d-4724-88ea-379ab2e66d82", ResourceVersion:"8165680", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492001, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/instance":"mon-kube-state-metrics", "app.kubernetes.io/name":"kube-state-metrics", "pod-template-hash":"59947d74fb", "release":"mon-kube-state-metrics", "projectcalico.org/namespace":"monitor", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"mon-kube-state-metrics"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc", Pod:"mon-kube-state-metrics-59947d74fb-qww9j", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e30f/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.monitor", "ksa.monitor.mon-kube-state-metrics"}, InterfaceName:"calic159a87595a", MAC:"82:bb:8f:56:cb:13", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:20:08.247 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.247 [INFO][154472] k8s.go 420: Wrote updated endpoint to datastore ContainerID="b8ac285f648759e6f929c4c70c112f682480fe20b4815a13d2dd5be2f8b3efbc" Namespace="monitor" Pod="mon-kube-state-metrics-59947d74fb-qww9j" WorkloadEndpoint="controller--1-k8s-mon--kube--state--metrics--59947d74fb--qww9j-eth0" 2019-11-04T19:20:08.295 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/b52283f2-29ea-4bae-aeb6-d7fe28283e5f/volume-subpaths/kibana/kibana/0. 2019-11-04T19:20:08.300 controller-1 containerd[12214]: info time="2019-11-04T19:20:08.300571954Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/05d977d4afb2620a45a54dbf0b3428c3c2c7d258180c5605b2bffe07bb1bd622/shim.sock" debug=false pid=154705 2019-11-04T19:20:08.317 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/c1dacd07-7924-45c2-9b2f-f5bcf5226ed6/volume-subpaths/metricbeat-config/metricbeat/0. 2019-11-04T19:20:08.383 controller-1 containerd[12214]: info time="2019-11-04T19:20:08.383340047Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/3e133580eefef2a7118e960836051056f08e7de43ed355c4fcab1fb5933d344c/shim.sock" debug=false pid=154737 2019-11-04T19:20:08.397 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/c1dacd07-7924-45c2-9b2f-f5bcf5226ed6/volume-subpaths/metricbeat-config/metricbeat/0. 
2019-11-04T19:20:08.403 controller-1 containerd[12214]: info time="2019-11-04T19:20:08.403016189Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ae758c215c025f0ba7d45e0ca0baea9ae52822cdb0a2e36501db70871e0579c7/shim.sock" debug=false pid=154755 2019-11-04T19:20:08.466 controller-1 containerd[12214]: info time="2019-11-04T19:20:08.466365050Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d8e00479a33827909bca6fc3efafc5b48876f09f2040ae55589743e5c81389f5/shim.sock" debug=false pid=154812 2019-11-04T19:20:08.977 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.977 [INFO][155455] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"kube-system", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"ceph-pools-audit-1572895200-h82f6", ContainerID:"1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93"}} 2019-11-04T19:20:08.993 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.992 [INFO][155455] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0 ceph-pools-audit-1572895200- kube-system 63339254-120f-499b-a172-54f3a09ee4ad 8165778 0 2019-11-04 19:20:02 +0000 UTC map[app:ceph-pools-audit controller-uid:40ac2bbc-6835-4d02-a8f5-6516799e4f6a job-name:ceph-pools-audit-1572895200 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:ceph-pools-audit] map[] [] nil [] } {k8s controller-1 ceph-pools-audit-1572895200-h82f6 eth0 [] [] [kns.kube-system ksa.kube-system.ceph-pools-audit] cali9ad4992664b []}} ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Namespace="kube-system" Pod="ceph-pools-audit-1572895200-h82f6" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-" 2019-11-04T19:20:08.993 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.993 [INFO][155455] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Namespace="kube-system" Pod="ceph-pools-audit-1572895200-h82f6" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:08.996 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.996 [INFO][155455] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:kube-system,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/kube-system,UID:5d016a6c-19e8-4b97-88a9-b6113a3cb736,ResourceVersion:5,Generation:0,CreationTimestamp:2019-10-25 15:09:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T19:20:08.998 controller-1 kubelet[88595]: info 2019-11-04 19:20:08.998 [INFO][155455] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ceph-pools-audit-1572895200-h82f6,GenerateName:ceph-pools-audit-1572895200-,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/ceph-pools-audit-1572895200-h82f6,UID:63339254-120f-499b-a172-54f3a09ee4ad,ResourceVersion:8165778,Generation:0,CreationTimestamp:2019-11-04 19:20:02 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: ceph-pools-audit,controller-uid: 40ac2bbc-6835-4d02-a8f5-6516799e4f6a,job-name: ceph-pools-audit-1572895200,},Annotations:map[string]string{},OwnerReferences:[{batch/v1 Job ceph-pools-audit-1572895200 40ac2bbc-6835-4d02-a8f5-6516799e4f6a 0xc00056b05b 0xc00056b05c}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{ceph-pools-bin {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:ceph-pools-bin,},Items:[],DefaultMode:*365,Optional:nil,} nil nil nil nil nil nil nil nil}} {etcceph {nil &EmptyDirVolumeSource{Medium:,SizeLimit:,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {ceph-etc {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:ceph-etc,},Items:[],DefaultMode:*292,Optional:nil,} nil nil nil nil nil nil nil nil}} {ceph-pools-audit-token-bsfbw {nil nil nil nil nil &SecretVolumeSource{SecretName:ceph-pools-audit-token-bsfbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{ceph-pools-audit-ceph-store registry.local:9001/docker.io/starlingx/ceph-config-helper:v1.15.0 [/tmp/ceph-pools-audit.sh] [] [] [] [{RBD_POOL_REPLICATION 2 nil} {RBD_POOL_MIN_REPLICATION 1 nil} {RBD_POOL_CRUSH_RULE_NAME storage_tier_ruleset nil}] {map[] map[]} [{ceph-pools-bin true /tmp/ceph-pools-audit.sh ceph-pools-audit.sh } {etcceph false /etc/ceph } {ceph-etc true /etc/ceph/ceph.conf ceph.conf } {ceph-pools-audit-token-bsfbw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:OnFailure,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: ,},ServiceAccountName:ceph-pools-audit,DeprecatedServiceAccount:ceph-pools-audit,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[{default-registry-key}],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00056b280} {node.kubernetes.io/unreachable Exists NoExecute 0xc00056b2a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:02 +0000 UTC ContainersNotReady containers with unready status: [ceph-pools-audit-ceph-store]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:02 +0000 UTC ContainersNotReady containers with unready status: [ceph-pools-audit-ceph-store]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:20:02 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 19:20:02 +0000 UTC,ContainerStatuses:[{ceph-pools-audit-ceph-store {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 
registry.local:9001/docker.io/starlingx/ceph-config-helper:v1.15.0 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} 2019-11-04T19:20:09.017 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.017 [INFO][155504] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" HandleID="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Workload="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:09.026 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.026 [INFO][155504] ipam_plugin.go 220: Calico CNI IPAM handle=chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93 ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" HandleID="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Workload="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:09.026 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.026 [INFO][155504] ipam_plugin.go 230: Auto assigning IP ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" HandleID="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Workload="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc0002c1b60), Attrs:map[string]string{"node":"controller-1", "pod":"ceph-pools-audit-1572895200-h82f6", "namespace":"kube-system"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T19:20:09.026 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.026 [INFO][155504] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T19:20:09.030 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.030 [INFO][155504] ipam.go 309: Looking up existing affinities for host handle="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" host="controller-1" 2019-11-04T19:20:09.043 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.043 [INFO][155504] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" host="controller-1" 2019-11-04T19:20:09.046 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.046 [INFO][155504] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:09.049 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.049 [INFO][155504] ipam.go 208: Affinity is confirmed and block has been loaded cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:20:09.049 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.049 [INFO][155504] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" host="controller-1" 2019-11-04T19:20:09.051 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.051 [INFO][155504] ipam.go 1244: Creating new handle: chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93 2019-11-04T19:20:09.056 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.056 [INFO][155504] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" host="controller-1" 2019-11-04T19:20:09.059 controller-1 
kubelet[88595]: info 2019-11-04 19:20:09.059 [INFO][155504] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e315/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" host="controller-1" 2019-11-04T19:20:09.059 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.059 [INFO][155504] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e315/122] handle="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" host="controller-1" 2019-11-04T19:20:09.061 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.061 [INFO][155504] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e315/122] handle="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" host="controller-1" 2019-11-04T19:20:09.061 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.061 [INFO][155504] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e315/122] ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" HandleID="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Workload="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:09.061 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.061 [INFO][155504] ipam_plugin.go 258: IPAM Result ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" HandleID="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Workload="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc000440120)} 2019-11-04T19:20:09.063 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.063 [INFO][155455] k8s.go 361: Populated endpoint ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Namespace="kube-system" Pod="ceph-pools-audit-1572895200-h82f6" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0", GenerateName:"ceph-pools-audit-1572895200-", Namespace:"kube-system", SelfLink:"", UID:"63339254-120f-499b-a172-54f3a09ee4ad", ResourceVersion:"8165778", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492002, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ceph-pools-audit", "controller-uid":"40ac2bbc-6835-4d02-a8f5-6516799e4f6a", "job-name":"ceph-pools-audit-1572895200", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ceph-pools-audit"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"ceph-pools-audit-1572895200-h82f6", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e315/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"cali9ad4992664b", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:20:09.063 controller-1 kubelet[88595]: info 
2019-11-04 19:20:09.063 [INFO][155455] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e315/128] ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Namespace="kube-system" Pod="ceph-pools-audit-1572895200-h82f6" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:09.063 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.063 [INFO][155455] network_linux.go 76: Setting the host side veth name to cali9ad4992664b ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Namespace="kube-system" Pod="ceph-pools-audit-1572895200-h82f6" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:09.066 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.066 [INFO][155455] network_linux.go 411: Disabling IPv6 forwarding ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Namespace="kube-system" Pod="ceph-pools-audit-1572895200-h82f6" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:09.113 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.113 [INFO][155455] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Namespace="kube-system" Pod="ceph-pools-audit-1572895200-h82f6" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0", GenerateName:"ceph-pools-audit-1572895200-", Namespace:"kube-system", SelfLink:"", UID:"63339254-120f-499b-a172-54f3a09ee4ad", ResourceVersion:"8165778", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492002, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ceph-pools-audit", "controller-uid":"40ac2bbc-6835-4d02-a8f5-6516799e4f6a", "job-name":"ceph-pools-audit-1572895200", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ceph-pools-audit"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93", Pod:"ceph-pools-audit-1572895200-h82f6", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e315/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"cali9ad4992664b", MAC:"a6:2f:dd:4d:f6:dd", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:20:09.115 controller-1 kubelet[88595]: info 2019-11-04 19:20:09.115 [INFO][155455] k8s.go 420: Wrote updated endpoint to datastore ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Namespace="kube-system" Pod="ceph-pools-audit-1572895200-h82f6" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:09.172 controller-1 systemd[1]: info Started Kubernetes transient mount for 
/var/lib/kubelet/pods/63339254-120f-499b-a172-54f3a09ee4ad/volume-subpaths/ceph-pools-bin/ceph-pools-audit-ceph-store/0. 2019-11-04T19:20:09.225 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/63339254-120f-499b-a172-54f3a09ee4ad/volume-subpaths/ceph-pools-bin/ceph-pools-audit-ceph-store/0. 2019-11-04T19:20:09.272 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/63339254-120f-499b-a172-54f3a09ee4ad/volume-subpaths/ceph-etc/ceph-pools-audit-ceph-store/2. 2019-11-04T19:20:09.299 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/63339254-120f-499b-a172-54f3a09ee4ad/volume-subpaths/ceph-etc/ceph-pools-audit-ceph-store/2. 2019-11-04T19:20:09.351 controller-1 containerd[12214]: info time="2019-11-04T19:20:09.351504616Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b2e89d14cfa884aa3d91a22ef0fe7b7b1508c1157a401ed7cab9e31af1fc357e/shim.sock" debug=false pid=155570 2019-11-04T19:20:12.000 controller-1 ntpd[87625]: info Listen normally on 26 cali9ad4992664b fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T19:20:12.000 controller-1 ntpd[87625]: info Listen normally on 27 calic159a87595a fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T19:20:12.000 controller-1 ntpd[87625]: info Listen normally on 28 calia983dc8a7fc fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T19:20:12.000 controller-1 ntpd[87625]: info Listen normally on 29 cali7b987b3bd2d fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T19:20:12.000 controller-1 ntpd[87625]: info Listen normally on 30 cali0dee4d738cd fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T19:20:12.000 controller-1 ntpd[87625]: debug new interface(s) found: waking up resolver 2019-11-04T19:20:15.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 11.6% (avg per cpu); cpus: 36, Platform: 6.2% (Base: 4.6, k8s-system: 1.6), k8s-addon: 5.1 2019-11-04T19:20:15.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.0%; Reserved: 125927.2 MiB, Platform: 8812.8 MiB (Base: 8175.9, k8s-system: 636.9), k8s-addon: 7235.7 2019-11-04T19:20:15.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.8%, Anon: 16127.6 MiB, cgroup-rss: 16052.7 MiB, Avail: 109799.6 MiB, Total: 125927.2 MiB 2019-11-04T19:20:15.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 7.98%, Anon: 5052.5 MiB, Avail: 58290.5 MiB, Total: 63343.0 MiB 2019-11-04T19:20:15.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.46%, Anon: 11075.1 MiB, Avail: 52356.7 MiB, Total: 63431.8 MiB 2019-11-04T19:20:20.723 controller-1 containerd[12214]: info time="2019-11-04T19:20:20.723075115Z" level=info msg="shim reaped" id=b2e89d14cfa884aa3d91a22ef0fe7b7b1508c1157a401ed7cab9e31af1fc357e 2019-11-04T19:20:20.733 controller-1 dockerd[12332]: info time="2019-11-04T19:20:20.733089214Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:20:21.072 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.072 [INFO][157923] plugin.go 442: Extracted identifiers ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:21.078 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.078 [WARNING][157923] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 
2019-11-04T19:20:21.078 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.078 [INFO][157923] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0", GenerateName:"ceph-pools-audit-1572895200-", Namespace:"kube-system", SelfLink:"", UID:"63339254-120f-499b-a172-54f3a09ee4ad", ResourceVersion:"8166071", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492002, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ceph-pools-audit", "controller-uid":"40ac2bbc-6835-4d02-a8f5-6516799e4f6a", "job-name":"ceph-pools-audit-1572895200", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ceph-pools-audit"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"ceph-pools-audit-1572895200-h82f6", Endpoint:"eth0", IPNetworks:[]string(nil), IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"cali9ad4992664b", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:20:21.078 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.078 [INFO][157923] k8s.go 477: Releasing IP address(es) ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" 2019-11-04T19:20:21.078 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.078 [INFO][157923] utils.go 171: Calico CNI releasing IP address ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" 2019-11-04T19:20:21.079 controller-1 kubelet[88595]: info I1104 19:20:21.079125 88595 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ceph-pools-audit-token-bsfbw" (UniqueName: "kubernetes.io/secret/63339254-120f-499b-a172-54f3a09ee4ad-ceph-pools-audit-token-bsfbw") pod "63339254-120f-499b-a172-54f3a09ee4ad" (UID: "63339254-120f-499b-a172-54f3a09ee4ad") 2019-11-04T19:20:21.079 controller-1 kubelet[88595]: info I1104 19:20:21.079166 88595 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ceph-etc" (UniqueName: "kubernetes.io/configmap/63339254-120f-499b-a172-54f3a09ee4ad-ceph-etc") pod "63339254-120f-499b-a172-54f3a09ee4ad" (UID: "63339254-120f-499b-a172-54f3a09ee4ad") 2019-11-04T19:20:21.079 controller-1 kubelet[88595]: info I1104 19:20:21.079215 88595 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ceph-pools-bin" (UniqueName: "kubernetes.io/configmap/63339254-120f-499b-a172-54f3a09ee4ad-ceph-pools-bin") pod "63339254-120f-499b-a172-54f3a09ee4ad" (UID: "63339254-120f-499b-a172-54f3a09ee4ad") 2019-11-04T19:20:21.079 controller-1 kubelet[88595]: info I1104 19:20:21.079258 88595 reconciler.go:181] operationExecutor.UnmountVolume started for volume "etcceph" (UniqueName: "kubernetes.io/empty-dir/63339254-120f-499b-a172-54f3a09ee4ad-etcceph") pod "63339254-120f-499b-a172-54f3a09ee4ad" (UID: "63339254-120f-499b-a172-54f3a09ee4ad") 2019-11-04T19:20:21.079 controller-1 kubelet[88595]: info 
W1104 19:20:21.079379 88595 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/63339254-120f-499b-a172-54f3a09ee4ad/volumes/kubernetes.io~empty-dir/etcceph: ClearQuota called, but quotas disabled 2019-11-04T19:20:21.079 controller-1 kubelet[88595]: info I1104 19:20:21.079499 88595 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63339254-120f-499b-a172-54f3a09ee4ad-etcceph" (OuterVolumeSpecName: "etcceph") pod "63339254-120f-499b-a172-54f3a09ee4ad" (UID: "63339254-120f-499b-a172-54f3a09ee4ad"). InnerVolumeSpecName "etcceph". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" 2019-11-04T19:20:21.092 controller-1 kubelet[88595]: info I1104 19:20:21.092678 88595 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63339254-120f-499b-a172-54f3a09ee4ad-ceph-pools-audit-token-bsfbw" (OuterVolumeSpecName: "ceph-pools-audit-token-bsfbw") pod "63339254-120f-499b-a172-54f3a09ee4ad" (UID: "63339254-120f-499b-a172-54f3a09ee4ad"). InnerVolumeSpecName "ceph-pools-audit-token-bsfbw". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T19:20:21.092 controller-1 kubelet[88595]: info W1104 19:20:21.092746 88595 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/63339254-120f-499b-a172-54f3a09ee4ad/volumes/kubernetes.io~configmap/ceph-pools-bin: ClearQuota called, but quotas disabled 2019-11-04T19:20:21.092 controller-1 kubelet[88595]: info W1104 19:20:21.092819 88595 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/63339254-120f-499b-a172-54f3a09ee4ad/volumes/kubernetes.io~configmap/ceph-etc: ClearQuota called, but quotas disabled 2019-11-04T19:20:21.092 controller-1 kubelet[88595]: info I1104 19:20:21.092937 88595 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63339254-120f-499b-a172-54f3a09ee4ad-ceph-pools-bin" (OuterVolumeSpecName: "ceph-pools-bin") pod "63339254-120f-499b-a172-54f3a09ee4ad" (UID: "63339254-120f-499b-a172-54f3a09ee4ad"). InnerVolumeSpecName "ceph-pools-bin". PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T19:20:21.093 controller-1 kubelet[88595]: info I1104 19:20:21.093012 88595 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63339254-120f-499b-a172-54f3a09ee4ad-ceph-etc" (OuterVolumeSpecName: "ceph-etc") pod "63339254-120f-499b-a172-54f3a09ee4ad" (UID: "63339254-120f-499b-a172-54f3a09ee4ad"). InnerVolumeSpecName "ceph-etc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T19:20:21.098 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.098 [INFO][157941] ipam_plugin.go 299: Releasing address using handleID ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" HandleID="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Workload="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:21.099 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.099 [INFO][157941] ipam.go 1145: Releasing all IPs with handle 'chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93' 2019-11-04T19:20:21.122 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.122 [INFO][157941] ipam_plugin.go 308: Released address using handleID ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" HandleID="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Workload="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:21.122 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.122 [INFO][157941] ipam_plugin.go 317: Releasing address using workloadID ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" HandleID="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Workload="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:21.122 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.122 [INFO][157941] ipam.go 1145: Releasing all IPs with handle 'kube-system.ceph-pools-audit-1572895200-h82f6' 2019-11-04T19:20:21.125 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.124 [INFO][157923] k8s.go 481: Cleaning up netns ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" 2019-11-04T19:20:21.125 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.125 [INFO][157923] network_linux.go 473: veth does not exist, no need to clean up. ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" ifName="eth0" 2019-11-04T19:20:21.125 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.125 [INFO][157923] k8s.go 493: Teardown processing complete. 
ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" 2019-11-04T19:20:21.179 controller-1 kubelet[88595]: info I1104 19:20:21.179538 88595 reconciler.go:301] Volume detached for volume "ceph-pools-audit-token-bsfbw" (UniqueName: "kubernetes.io/secret/63339254-120f-499b-a172-54f3a09ee4ad-ceph-pools-audit-token-bsfbw") on node "controller-1" DevicePath "" 2019-11-04T19:20:21.179 controller-1 kubelet[88595]: info I1104 19:20:21.179563 88595 reconciler.go:301] Volume detached for volume "ceph-etc" (UniqueName: "kubernetes.io/configmap/63339254-120f-499b-a172-54f3a09ee4ad-ceph-etc") on node "controller-1" DevicePath "" 2019-11-04T19:20:21.179 controller-1 kubelet[88595]: info I1104 19:20:21.179571 88595 reconciler.go:301] Volume detached for volume "ceph-pools-bin" (UniqueName: "kubernetes.io/configmap/63339254-120f-499b-a172-54f3a09ee4ad-ceph-pools-bin") on node "controller-1" DevicePath "" 2019-11-04T19:20:21.179 controller-1 kubelet[88595]: info I1104 19:20:21.179579 88595 reconciler.go:301] Volume detached for volume "etcceph" (UniqueName: "kubernetes.io/empty-dir/63339254-120f-499b-a172-54f3a09ee4ad-etcceph") on node "controller-1" DevicePath "" 2019-11-04T19:20:21.223 controller-1 containerd[12214]: info time="2019-11-04T19:20:21.223198803Z" level=info msg="shim reaped" id=1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93 2019-11-04T19:20:21.223 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.223 [INFO][158045] plugin.go 442: Extracted identifiers ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:21.230 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.230 [WARNING][158045] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T19:20:21.230 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.230 [INFO][158045] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0", GenerateName:"ceph-pools-audit-1572895200-", Namespace:"kube-system", SelfLink:"", UID:"63339254-120f-499b-a172-54f3a09ee4ad", ResourceVersion:"8166071", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492002, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ceph-pools-audit", "controller-uid":"40ac2bbc-6835-4d02-a8f5-6516799e4f6a", "job-name":"ceph-pools-audit-1572895200", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ceph-pools-audit"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"ceph-pools-audit-1572895200-h82f6", Endpoint:"eth0", IPNetworks:[]string(nil), IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"cali9ad4992664b", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:20:21.230 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.230 [INFO][158045] k8s.go 477: Releasing IP address(es) ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" 2019-11-04T19:20:21.230 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.230 [INFO][158045] utils.go 171: Calico CNI releasing IP address ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" 2019-11-04T19:20:21.233 controller-1 dockerd[12332]: info time="2019-11-04T19:20:21.233269845Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:20:21.250 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.250 [INFO][158081] ipam_plugin.go 299: Releasing address using handleID ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" HandleID="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Workload="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:21.250 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.250 [INFO][158081] ipam.go 1145: Releasing all IPs with handle 'chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93' 2019-11-04T19:20:21.256 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.256 [WARNING][158081] ipam_plugin.go 306: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" HandleID="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Workload="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:21.256 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.256 [INFO][158081] ipam_plugin.go 317: Releasing address using workloadID ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" HandleID="chain.1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" Workload="controller--1-k8s-ceph--pools--audit--1572895200--h82f6-eth0" 2019-11-04T19:20:21.256 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.256 [INFO][158081] ipam.go 1145: Releasing all IPs with handle 'kube-system.ceph-pools-audit-1572895200-h82f6' 2019-11-04T19:20:21.258 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.258 [INFO][158045] k8s.go 481: Cleaning up netns ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" 2019-11-04T19:20:21.258 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.258 [INFO][158045] network_linux.go 473: veth does not exist, no need to clean up. ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" ifName="eth0" 2019-11-04T19:20:21.258 controller-1 kubelet[88595]: info 2019-11-04 19:20:21.258 [INFO][158045] k8s.go 493: Teardown processing complete. ContainerID="1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" 2019-11-04T19:20:21.000 controller-1 lldpd[12281]: warning removal request for address of fe80::ecee:eeff:feee:eeee%27, but no knowledge of it 2019-11-04T19:20:21.993 controller-1 kubelet[88595]: info W1104 19:20:21.993807 88595 pod_container_deletor.go:75] Container "1496efe09e106f0f706af916e274b2cfc7bcc0522fa972dab6fe6e559c6d4d93" not found in pod's containers 2019-11-04T19:20:23.000 controller-1 ntpd[87625]: info Deleting interface #26 cali9ad4992664b, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=11 secs 2019-11-04T19:20:25.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 10.0% (avg per cpu); cpus: 36, Platform: 5.9% (Base: 4.8, k8s-system: 1.2), k8s-addon: 3.6 2019-11-04T19:20:25.289 controller-1 collectd[12276]: info platform memory usage: Usage: 7.0%; Reserved: 125935.7 MiB, Platform: 8793.9 MiB (Base: 8167.7, k8s-system: 626.2), k8s-addon: 7251.3 2019-11-04T19:20:25.289 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.8%, Anon: 16124.1 MiB, cgroup-rss: 16049.3 MiB, Avail: 109811.6 MiB, Total: 125935.7 MiB 2019-11-04T19:20:25.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 7.96%, Anon: 5042.4 MiB, Avail: 58303.0 MiB, Total: 63345.5 MiB 2019-11-04T19:20:25.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.47%, Anon: 11081.7 MiB, Avail: 52356.6 MiB, Total: 63438.2 MiB 2019-11-04T19:20:35.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 8.5% (avg per cpu); cpus: 36, Platform: 5.4% (Base: 4.4, k8s-system: 1.0), k8s-addon: 3.0 2019-11-04T19:20:35.289 controller-1 collectd[12276]: info platform memory usage: Usage: 7.0%; Reserved: 125933.0 MiB, Platform: 8807.8 MiB (Base: 8181.5, k8s-system: 626.3), k8s-addon: 7239.3 2019-11-04T19:20:35.289 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.8%, Anon: 16124.0 MiB, cgroup-rss: 16051.2 MiB, Avail: 109809.0 MiB, Total: 125933.0 MiB 2019-11-04T19:20:35.289 controller-1 collectd[12276]: info 4K numa memory usage: 
node0, Anon: 7.95%, Anon: 5033.5 MiB, Avail: 58311.6 MiB, Total: 63345.1 MiB 2019-11-04T19:20:35.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.48%, Anon: 11090.6 MiB, Avail: 52345.3 MiB, Total: 63435.9 MiB 2019-11-04T19:20:36.639 controller-1 containerd[12214]: info time="2019-11-04T19:20:36.639403103Z" level=info msg="shim reaped" id=3e133580eefef2a7118e960836051056f08e7de43ed355c4fcab1fb5933d344c 2019-11-04T19:20:36.649 controller-1 dockerd[12332]: info time="2019-11-04T19:20:36.649292868Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:20:36.703 controller-1 containerd[12214]: info time="2019-11-04T19:20:36.703130029Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/149af8709708ba37b6de1a635f3aa8d29d55146ad871b2781b0c539fdcdf508f/shim.sock" debug=false pid=160954 2019-11-04T19:20:45.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 7.7% (avg per cpu); cpus: 36, Platform: 5.0% (Base: 3.8, k8s-system: 1.2), k8s-addon: 2.7 2019-11-04T19:20:45.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.0%; Reserved: 125936.0 MiB, Platform: 8820.8 MiB (Base: 8190.1, k8s-system: 630.7), k8s-addon: 7279.1 2019-11-04T19:20:45.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.8%, Anon: 16177.9 MiB, cgroup-rss: 16103.6 MiB, Avail: 109758.0 MiB, Total: 125936.0 MiB 2019-11-04T19:20:45.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.02%, Anon: 5078.4 MiB, Avail: 58268.3 MiB, Total: 63346.7 MiB 2019-11-04T19:20:45.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.50%, Anon: 11100.0 MiB, Avail: 52338.3 MiB, Total: 63438.4 MiB 2019-11-04T19:20:55.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.2% (avg per cpu); cpus: 36, Platform: 4.4% (Base: 3.3, k8s-system: 1.0), k8s-addon: 1.7 2019-11-04T19:20:55.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.0%; Reserved: 125930.3 MiB, Platform: 8817.0 MiB (Base: 8186.6, k8s-system: 630.4), k8s-addon: 7253.3 2019-11-04T19:20:55.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.8%, Anon: 16149.4 MiB, cgroup-rss: 16074.4 MiB, Avail: 109780.9 MiB, Total: 125930.3 MiB 2019-11-04T19:20:55.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.00%, Anon: 5066.3 MiB, Avail: 58271.9 MiB, Total: 63338.3 MiB 2019-11-04T19:20:55.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.47%, Anon: 11083.1 MiB, Avail: 52357.6 MiB, Total: 63440.7 MiB 2019-11-04T19:20:59.000 controller-1 ntpd[87625]: info 0.0.0.0 0615 05 clock_sync 2019-11-04T19:21:05.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 7.1% (avg per cpu); cpus: 36, Platform: 5.0% (Base: 4.1, k8s-system: 0.9), k8s-addon: 1.9 2019-11-04T19:21:05.289 controller-1 collectd[12276]: info platform memory usage: Usage: 7.0%; Reserved: 125928.2 MiB, Platform: 8837.6 MiB (Base: 8191.6, k8s-system: 646.0), k8s-addon: 7378.4 2019-11-04T19:21:05.289 controller-1 collectd[12276]: info 4K memory usage: Anon: 12.9%, Anon: 16292.2 MiB, cgroup-rss: 16220.1 MiB, Avail: 109636.0 MiB, Total: 125928.2 MiB 2019-11-04T19:21:05.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.08%, Anon: 5117.0 MiB, Avail: 58223.5 MiB, Total: 63340.5 MiB 2019-11-04T19:21:05.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.62%, Anon: 
11175.2 MiB, Avail: 52262.0 MiB, Total: 63437.2 MiB 2019-11-04T19:21:15.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.1% (avg per cpu); cpus: 36, Platform: 4.3% (Base: 3.3, k8s-system: 1.0), k8s-addon: 1.7 2019-11-04T19:21:15.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.0%; Reserved: 125930.9 MiB, Platform: 8865.0 MiB (Base: 8205.5, k8s-system: 659.5), k8s-addon: 7384.0 2019-11-04T19:21:15.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16327.7 MiB, cgroup-rss: 16253.1 MiB, Avail: 109603.1 MiB, Total: 125930.9 MiB 2019-11-04T19:21:15.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.10%, Anon: 5128.4 MiB, Avail: 58213.2 MiB, Total: 63341.6 MiB 2019-11-04T19:21:15.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.65%, Anon: 11199.3 MiB, Avail: 52239.1 MiB, Total: 63438.4 MiB 2019-11-04T19:21:25.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.7% (avg per cpu); cpus: 36, Platform: 4.4% (Base: 3.3, k8s-system: 1.0), k8s-addon: 1.3 2019-11-04T19:21:25.289 controller-1 collectd[12276]: info platform memory usage: Usage: 7.0%; Reserved: 125931.6 MiB, Platform: 8876.4 MiB (Base: 8215.7, k8s-system: 660.7), k8s-addon: 7386.1 2019-11-04T19:21:25.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16342.8 MiB, cgroup-rss: 16266.7 MiB, Avail: 109588.8 MiB, Total: 125931.6 MiB 2019-11-04T19:21:25.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.11%, Anon: 5140.0 MiB, Avail: 58203.3 MiB, Total: 63343.3 MiB 2019-11-04T19:21:25.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.66%, Anon: 11202.8 MiB, Avail: 52235.2 MiB, Total: 63438.0 MiB 2019-11-04T19:21:35.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 7.2% (avg per cpu); cpus: 36, Platform: 5.5% (Base: 4.7, k8s-system: 0.8), k8s-addon: 1.6 2019-11-04T19:21:35.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125925.4 MiB, Platform: 8899.1 MiB (Base: 8238.2, k8s-system: 660.8), k8s-addon: 7389.5 2019-11-04T19:21:35.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16367.2 MiB, cgroup-rss: 16292.7 MiB, Avail: 109558.2 MiB, Total: 125925.4 MiB 2019-11-04T19:21:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.12%, Anon: 5142.6 MiB, Avail: 58197.8 MiB, Total: 63340.4 MiB 2019-11-04T19:21:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.69%, Anon: 11224.6 MiB, Avail: 52212.3 MiB, Total: 63436.9 MiB 2019-11-04T19:21:45.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 7.8% (avg per cpu); cpus: 36, Platform: 6.2% (Base: 5.2, k8s-system: 1.0), k8s-addon: 1.6 2019-11-04T19:21:45.289 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125926.5 MiB, Platform: 8890.7 MiB (Base: 8229.5, k8s-system: 661.2), k8s-addon: 7392.2 2019-11-04T19:21:45.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16360.9 MiB, cgroup-rss: 16286.8 MiB, Avail: 109565.6 MiB, Total: 125926.5 MiB 2019-11-04T19:21:45.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.25%, Anon: 5224.8 MiB, Avail: 58113.3 MiB, Total: 63338.1 MiB 2019-11-04T19:21:45.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.55%, Anon: 11136.0 MiB, Avail: 52302.3 MiB, Total: 63438.3 MiB 2019-11-04T19:21:55.284 
controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.1% (avg per cpu); cpus: 36, Platform: 4.1% (Base: 3.1, k8s-system: 1.0), k8s-addon: 0.9 2019-11-04T19:21:55.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125928.5 MiB, Platform: 8887.6 MiB (Base: 8225.4, k8s-system: 662.1), k8s-addon: 7392.6 2019-11-04T19:21:55.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16361.6 MiB, cgroup-rss: 16284.3 MiB, Avail: 109567.0 MiB, Total: 125928.5 MiB 2019-11-04T19:21:55.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.25%, Anon: 5223.7 MiB, Avail: 58116.5 MiB, Total: 63340.2 MiB 2019-11-04T19:21:55.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.56%, Anon: 11137.9 MiB, Avail: 52300.7 MiB, Total: 63438.5 MiB 2019-11-04T19:22:05.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.0% (avg per cpu); cpus: 36, Platform: 4.9% (Base: 4.0, k8s-system: 0.9), k8s-addon: 1.0 2019-11-04T19:22:05.289 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125925.7 MiB, Platform: 8886.9 MiB (Base: 8224.6, k8s-system: 662.4), k8s-addon: 7393.4 2019-11-04T19:22:05.289 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16362.0 MiB, cgroup-rss: 16284.5 MiB, Avail: 109563.7 MiB, Total: 125925.7 MiB 2019-11-04T19:22:05.289 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.24%, Anon: 5222.1 MiB, Avail: 58115.8 MiB, Total: 63337.9 MiB 2019-11-04T19:22:05.289 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.56%, Anon: 11139.9 MiB, Avail: 52298.4 MiB, Total: 63438.3 MiB 2019-11-04T19:22:15.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 7.1% (avg per cpu); cpus: 36, Platform: 6.0% (Base: 5.1, k8s-system: 0.9), k8s-addon: 1.0 2019-11-04T19:22:15.289 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125925.8 MiB, Platform: 8896.1 MiB (Base: 8233.6, k8s-system: 662.5), k8s-addon: 7393.5 2019-11-04T19:22:15.289 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16370.1 MiB, cgroup-rss: 16293.7 MiB, Avail: 109555.7 MiB, Total: 125925.8 MiB 2019-11-04T19:22:15.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.24%, Anon: 5221.9 MiB, Avail: 58118.1 MiB, Total: 63339.9 MiB 2019-11-04T19:22:15.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.57%, Anon: 11148.2 MiB, Avail: 52290.1 MiB, Total: 63438.4 MiB 2019-11-04T19:22:25.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.5% (avg per cpu); cpus: 36, Platform: 4.3% (Base: 3.3, k8s-system: 1.0), k8s-addon: 2.0 2019-11-04T19:22:25.291 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125924.7 MiB, Platform: 8893.9 MiB (Base: 8230.5, k8s-system: 663.4), k8s-addon: 7395.1 2019-11-04T19:22:25.291 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16367.4 MiB, cgroup-rss: 16293.2 MiB, Avail: 109557.3 MiB, Total: 125924.7 MiB 2019-11-04T19:22:25.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.24%, Anon: 5219.4 MiB, Avail: 58121.6 MiB, Total: 63341.0 MiB 2019-11-04T19:22:25.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.57%, Anon: 11148.0 MiB, Avail: 52286.9 MiB, Total: 63434.9 MiB 2019-11-04T19:22:35.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.0% (avg per 
cpu); cpus: 36, Platform: 4.8% (Base: 3.9, k8s-system: 0.8), k8s-addon: 1.1 2019-11-04T19:22:35.291 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125925.3 MiB, Platform: 8903.2 MiB (Base: 8239.7, k8s-system: 663.4), k8s-addon: 7395.1 2019-11-04T19:22:35.291 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16379.2 MiB, cgroup-rss: 16302.4 MiB, Avail: 109546.1 MiB, Total: 125925.3 MiB 2019-11-04T19:22:35.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.25%, Anon: 5228.3 MiB, Avail: 58107.7 MiB, Total: 63336.0 MiB 2019-11-04T19:22:35.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.58%, Anon: 11150.9 MiB, Avail: 52289.5 MiB, Total: 63440.4 MiB 2019-11-04T19:22:45.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.8% (avg per cpu); cpus: 36, Platform: 5.4% (Base: 4.4, k8s-system: 1.0), k8s-addon: 1.3 2019-11-04T19:22:45.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125923.6 MiB, Platform: 8911.6 MiB (Base: 8248.0, k8s-system: 663.6), k8s-addon: 7396.4 2019-11-04T19:22:45.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16387.1 MiB, cgroup-rss: 16311.4 MiB, Avail: 109536.5 MiB, Total: 125923.6 MiB 2019-11-04T19:22:45.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.27%, Anon: 5234.9 MiB, Avail: 58101.0 MiB, Total: 63336.0 MiB 2019-11-04T19:22:45.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.58%, Anon: 11152.2 MiB, Avail: 52287.3 MiB, Total: 63439.5 MiB 2019-11-04T19:22:55.281 controller-1 collectd[12276]: info NTP query plugin re-running init 2019-11-04T19:22:55.281 controller-1 collectd[12276]: info NTP query plugin server list: ['0.pool.ntp.org', '1.pool.ntp.org', '3.pool.ntp.org'] 2019-11-04T19:22:55.283 controller-1 collectd[12276]: info WARNING:root:fm_python_extension: Failed to connect to FM manager 2019-11-04T19:22:55.283 controller-1 collectd[12276]: info NTP query plugin 'get_faults_by_id' exception ; 100.114 ; Failed to execute get_faults_by_id. 
2019-11-04T19:22:55.292 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.5% (avg per cpu); cpus: 36, Platform: 4.3% (Base: 3.4, k8s-system: 0.9), k8s-addon: 1.2 2019-11-04T19:22:55.297 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125923.8 MiB, Platform: 8907.6 MiB (Base: 8246.8, k8s-system: 660.8), k8s-addon: 7396.4 2019-11-04T19:22:55.297 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16385.4 MiB, cgroup-rss: 16309.8 MiB, Avail: 109538.5 MiB, Total: 125923.8 MiB 2019-11-04T19:22:55.298 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.26%, Anon: 5228.6 MiB, Avail: 58108.3 MiB, Total: 63336.9 MiB 2019-11-04T19:22:55.298 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.59%, Anon: 11156.8 MiB, Avail: 52281.3 MiB, Total: 63438.1 MiB 2019-11-04T19:23:05.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.2% (avg per cpu); cpus: 36, Platform: 4.6% (Base: 3.7, k8s-system: 1.0), k8s-addon: 1.4 2019-11-04T19:23:05.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125925.2 MiB, Platform: 8912.6 MiB (Base: 8251.8, k8s-system: 660.9), k8s-addon: 7396.9 2019-11-04T19:23:05.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16391.4 MiB, cgroup-rss: 16313.7 MiB, Avail: 109533.8 MiB, Total: 125925.2 MiB 2019-11-04T19:23:05.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.26%, Anon: 5232.1 MiB, Avail: 58105.2 MiB, Total: 63337.3 MiB 2019-11-04T19:23:05.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.59%, Anon: 11159.3 MiB, Avail: 52279.8 MiB, Total: 63439.1 MiB 2019-11-04T19:23:15.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.9% (avg per cpu); cpus: 36, Platform: 4.7% (Base: 3.7, k8s-system: 1.0), k8s-addon: 1.1 2019-11-04T19:23:15.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125923.7 MiB, Platform: 8918.5 MiB (Base: 8257.5, k8s-system: 660.9), k8s-addon: 7397.4 2019-11-04T19:23:15.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16396.2 MiB, cgroup-rss: 16320.0 MiB, Avail: 109527.4 MiB, Total: 125923.7 MiB 2019-11-04T19:23:15.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.26%, Anon: 5233.6 MiB, Avail: 58103.2 MiB, Total: 63336.7 MiB 2019-11-04T19:23:15.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.60%, Anon: 11162.6 MiB, Avail: 52275.5 MiB, Total: 63438.1 MiB 2019-11-04T19:23:25.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.5% (avg per cpu); cpus: 36, Platform: 4.3% (Base: 3.4, k8s-system: 0.9), k8s-addon: 1.1 2019-11-04T19:23:25.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125925.0 MiB, Platform: 8920.6 MiB (Base: 8259.4, k8s-system: 661.2), k8s-addon: 7398.1 2019-11-04T19:23:25.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16397.9 MiB, cgroup-rss: 16322.9 MiB, Avail: 109527.1 MiB, Total: 125925.0 MiB 2019-11-04T19:23:25.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.27%, Anon: 5238.1 MiB, Avail: 58100.0 MiB, Total: 63338.1 MiB 2019-11-04T19:23:25.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.59%, Anon: 11159.8 MiB, Avail: 52278.3 MiB, Total: 63438.2 MiB 2019-11-04T19:23:35.278 controller-1 collectd[12276]: info degrade notifier: 
{"service":"collectd_notifier","hostname":"controller-1","degrade":"clear","resource":""} 2019-11-04T19:23:35.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.0% (avg per cpu); cpus: 36, Platform: 4.8% (Base: 3.9, k8s-system: 0.9), k8s-addon: 1.1 2019-11-04T19:23:35.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125923.2 MiB, Platform: 8934.8 MiB (Base: 8273.7, k8s-system: 661.2), k8s-addon: 7397.8 2019-11-04T19:23:35.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16413.8 MiB, cgroup-rss: 16336.8 MiB, Avail: 109509.5 MiB, Total: 125923.2 MiB 2019-11-04T19:23:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.28%, Anon: 5245.7 MiB, Avail: 58091.7 MiB, Total: 63337.5 MiB 2019-11-04T19:23:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.60%, Anon: 11168.0 MiB, Avail: 52269.0 MiB, Total: 63437.0 MiB 2019-11-04T19:23:45.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.2% (avg per cpu); cpus: 36, Platform: 4.9% (Base: 3.9, k8s-system: 1.0), k8s-addon: 1.2 2019-11-04T19:23:45.289 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125926.1 MiB, Platform: 8940.7 MiB (Base: 8279.0, k8s-system: 661.7), k8s-addon: 7398.8 2019-11-04T19:23:45.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16417.5 MiB, cgroup-rss: 16343.6 MiB, Avail: 109508.7 MiB, Total: 125926.1 MiB 2019-11-04T19:23:45.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.29%, Anon: 5248.9 MiB, Avail: 58090.5 MiB, Total: 63339.4 MiB 2019-11-04T19:23:45.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.61%, Anon: 11168.6 MiB, Avail: 52269.3 MiB, Total: 63437.9 MiB 2019-11-04T19:23:55.285 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.7% (avg per cpu); cpus: 36, Platform: 4.4% (Base: 3.5, k8s-system: 0.9), k8s-addon: 1.2 2019-11-04T19:23:55.291 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125924.4 MiB, Platform: 8934.5 MiB (Base: 8272.8, k8s-system: 661.7), k8s-addon: 7399.4 2019-11-04T19:23:55.291 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16416.8 MiB, cgroup-rss: 16338.0 MiB, Avail: 109507.6 MiB, Total: 125924.4 MiB 2019-11-04T19:23:55.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.29%, Anon: 5250.2 MiB, Avail: 58087.0 MiB, Total: 63337.2 MiB 2019-11-04T19:23:55.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.60%, Anon: 11166.7 MiB, Avail: 52271.8 MiB, Total: 63438.4 MiB 2019-11-04T19:24:05.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.2% (avg per cpu); cpus: 36, Platform: 4.4% (Base: 3.5, k8s-system: 1.0), k8s-addon: 1.7 2019-11-04T19:24:05.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125922.6 MiB, Platform: 8932.2 MiB (Base: 8270.3, k8s-system: 661.9), k8s-addon: 7400.9 2019-11-04T19:24:05.291 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16413.4 MiB, cgroup-rss: 16339.0 MiB, Avail: 109509.2 MiB, Total: 125922.6 MiB 2019-11-04T19:24:05.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.29%, Anon: 5248.9 MiB, Avail: 58085.9 MiB, Total: 63334.8 MiB 2019-11-04T19:24:05.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.60%, Anon: 11164.5 MiB, Avail: 52274.4 MiB, Total: 63438.9 
MiB 2019-11-04T19:24:15.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 8.0% (avg per cpu); cpus: 36, Platform: 6.7% (Base: 5.7, k8s-system: 1.0), k8s-addon: 1.2 2019-11-04T19:24:15.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125925.8 MiB, Platform: 8969.9 MiB (Base: 8307.9, k8s-system: 662.0), k8s-addon: 7400.8 2019-11-04T19:24:15.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16450.0 MiB, cgroup-rss: 16374.4 MiB, Avail: 109475.8 MiB, Total: 125925.8 MiB 2019-11-04T19:24:15.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.32%, Anon: 5269.1 MiB, Avail: 58070.8 MiB, Total: 63339.9 MiB 2019-11-04T19:24:15.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.63%, Anon: 11181.0 MiB, Avail: 52256.1 MiB, Total: 63437.1 MiB 2019-11-04T19:24:25.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.9% (avg per cpu); cpus: 36, Platform: 4.8% (Base: 3.9, k8s-system: 0.9), k8s-addon: 1.0 2019-11-04T19:24:25.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125924.0 MiB, Platform: 8965.3 MiB (Base: 8303.3, k8s-system: 662.0), k8s-addon: 7401.7 2019-11-04T19:24:25.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16441.9 MiB, cgroup-rss: 16371.1 MiB, Avail: 109482.1 MiB, Total: 125924.0 MiB 2019-11-04T19:24:25.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.31%, Anon: 5261.0 MiB, Avail: 58077.2 MiB, Total: 63338.2 MiB 2019-11-04T19:24:25.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.63%, Anon: 11180.8 MiB, Avail: 52256.1 MiB, Total: 63437.0 MiB 2019-11-04T19:24:35.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.2% (avg per cpu); cpus: 36, Platform: 4.8% (Base: 3.8, k8s-system: 1.0), k8s-addon: 1.3 2019-11-04T19:24:35.289 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125925.1 MiB, Platform: 8974.7 MiB (Base: 8312.6, k8s-system: 662.1), k8s-addon: 7401.5 2019-11-04T19:24:35.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16451.7 MiB, cgroup-rss: 16380.3 MiB, Avail: 109473.4 MiB, Total: 125925.1 MiB 2019-11-04T19:24:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.31%, Anon: 5263.9 MiB, Avail: 58075.7 MiB, Total: 63339.6 MiB 2019-11-04T19:24:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.64%, Anon: 11187.8 MiB, Avail: 52248.9 MiB, Total: 63436.7 MiB 2019-11-04T19:24:45.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.4% (avg per cpu); cpus: 36, Platform: 5.3% (Base: 4.3, k8s-system: 1.0), k8s-addon: 1.0 2019-11-04T19:24:45.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125927.4 MiB, Platform: 8981.4 MiB (Base: 8319.2, k8s-system: 662.2), k8s-addon: 7402.2 2019-11-04T19:24:45.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16458.8 MiB, cgroup-rss: 16387.3 MiB, Avail: 109468.6 MiB, Total: 125927.4 MiB 2019-11-04T19:24:45.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.31%, Anon: 5262.8 MiB, Avail: 58078.1 MiB, Total: 63340.8 MiB 2019-11-04T19:24:45.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.65%, Anon: 11196.1 MiB, Avail: 52241.7 MiB, Total: 63437.8 MiB 2019-11-04T19:24:55.284 controller-1 collectd[12276]: info platform cpu usage 
plugin Usage: 5.5% (avg per cpu); cpus: 36, Platform: 4.2% (Base: 3.3, k8s-system: 0.9), k8s-addon: 1.2 2019-11-04T19:24:55.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125924.5 MiB, Platform: 8990.5 MiB (Base: 8328.3, k8s-system: 662.2), k8s-addon: 7402.0 2019-11-04T19:24:55.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16475.1 MiB, cgroup-rss: 16398.6 MiB, Avail: 109449.4 MiB, Total: 125924.5 MiB 2019-11-04T19:24:55.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.31%, Anon: 5266.4 MiB, Avail: 58072.9 MiB, Total: 63339.4 MiB 2019-11-04T19:24:55.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.67%, Anon: 11208.7 MiB, Avail: 52227.2 MiB, Total: 63435.9 MiB 2019-11-04T19:25:03.076 controller-1 kubelet[88595]: info I1104 19:25:03.076527 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcceph" (UniqueName: "kubernetes.io/empty-dir/7a9d8187-07a3-43af-9808-df079a26ee40-etcceph") pod "ceph-pools-audit-1572895500-85zdb" (UID: "7a9d8187-07a3-43af-9808-df079a26ee40") 2019-11-04T19:25:03.076 controller-1 kubelet[88595]: info I1104 19:25:03.076616 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ceph-etc" (UniqueName: "kubernetes.io/configmap/7a9d8187-07a3-43af-9808-df079a26ee40-ceph-etc") pod "ceph-pools-audit-1572895500-85zdb" (UID: "7a9d8187-07a3-43af-9808-df079a26ee40") 2019-11-04T19:25:03.076 controller-1 kubelet[88595]: info I1104 19:25:03.076660 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ceph-pools-bin" (UniqueName: "kubernetes.io/configmap/7a9d8187-07a3-43af-9808-df079a26ee40-ceph-pools-bin") pod "ceph-pools-audit-1572895500-85zdb" (UID: "7a9d8187-07a3-43af-9808-df079a26ee40") 2019-11-04T19:25:03.076 controller-1 kubelet[88595]: info I1104 19:25:03.076689 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ceph-pools-audit-token-bsfbw" (UniqueName: "kubernetes.io/secret/7a9d8187-07a3-43af-9808-df079a26ee40-ceph-pools-audit-token-bsfbw") pod "ceph-pools-audit-1572895500-85zdb" (UID: "7a9d8187-07a3-43af-9808-df079a26ee40") 2019-11-04T19:25:03.188 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/7a9d8187-07a3-43af-9808-df079a26ee40/volumes/kubernetes.io~secret/ceph-pools-audit-token-bsfbw. 2019-11-04T19:25:03.371 controller-1 dockerd[12332]: info time="2019-11-04T19:25:03.371746023Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. 
Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T19:25:03.377 controller-1 containerd[12214]: info time="2019-11-04T19:25:03.377182758Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a/shim.sock" debug=false pid=204756 2019-11-04T19:25:05.515 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.1% (avg per cpu); cpus: 36, Platform: 4.8% (Base: 3.9, k8s-system: 1.0), k8s-addon: 1.2 2019-11-04T19:25:05.544 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125914.1 MiB, Platform: 8972.6 MiB (Base: 8310.2, k8s-system: 662.5), k8s-addon: 7402.7 2019-11-04T19:25:05.544 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16453.8 MiB, cgroup-rss: 16379.5 MiB, Avail: 109460.3 MiB, Total: 125914.1 MiB 2019-11-04T19:25:05.544 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.31%, Anon: 5266.1 MiB, Avail: 58071.6 MiB, Total: 63337.7 MiB 2019-11-04T19:25:05.544 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.64%, Anon: 11187.7 MiB, Avail: 52240.1 MiB, Total: 63427.8 MiB 2019-11-04T19:25:09.313 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.313 [INFO][205614] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"kube-system", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"ceph-pools-audit-1572895500-85zdb", ContainerID:"c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a"}} 2019-11-04T19:25:09.329 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.329 [INFO][205614] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0 ceph-pools-audit-1572895500- kube-system 7a9d8187-07a3-43af-9808-df079a26ee40 8168840 0 2019-11-04 19:25:02 +0000 UTC map[projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:ceph-pools-audit app:ceph-pools-audit controller-uid:6a10a226-0765-490a-b7a9-6d82dac1aedd job-name:ceph-pools-audit-1572895500] map[] [] nil [] } {k8s controller-1 ceph-pools-audit-1572895500-85zdb eth0 [] [] [kns.kube-system ksa.kube-system.ceph-pools-audit] calia2d3537ae67 []}} ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Namespace="kube-system" Pod="ceph-pools-audit-1572895500-85zdb" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-" 2019-11-04T19:25:09.329 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.329 [INFO][205614] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Namespace="kube-system" Pod="ceph-pools-audit-1572895500-85zdb" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" 2019-11-04T19:25:09.332 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.332 [INFO][205614] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:kube-system,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/kube-system,UID:5d016a6c-19e8-4b97-88a9-b6113a3cb736,ResourceVersion:5,Generation:0,CreationTimestamp:2019-10-25 15:09:05 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T19:25:09.334 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.334 [INFO][205614] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ceph-pools-audit-1572895500-85zdb,GenerateName:ceph-pools-audit-1572895500-,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/ceph-pools-audit-1572895500-85zdb,UID:7a9d8187-07a3-43af-9808-df079a26ee40,ResourceVersion:8168840,Generation:0,CreationTimestamp:2019-11-04 19:25:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: ceph-pools-audit,controller-uid: 6a10a226-0765-490a-b7a9-6d82dac1aedd,job-name: ceph-pools-audit-1572895500,},Annotations:map[string]string{},OwnerReferences:[{batch/v1 Job ceph-pools-audit-1572895500 6a10a226-0765-490a-b7a9-6d82dac1aedd 0xc000814f0b 0xc000814f0c}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{ceph-pools-bin {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:ceph-pools-bin,},Items:[],DefaultMode:*365,Optional:nil,} nil nil nil nil nil nil nil nil}} {etcceph {nil &EmptyDirVolumeSource{Medium:,SizeLimit:,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {ceph-etc {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:ceph-etc,},Items:[],DefaultMode:*292,Optional:nil,} nil nil nil nil nil nil nil nil}} {ceph-pools-audit-token-bsfbw {nil nil nil nil nil &SecretVolumeSource{SecretName:ceph-pools-audit-token-bsfbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{ceph-pools-audit-ceph-store registry.local:9001/docker.io/starlingx/ceph-config-helper:v1.15.0 [/tmp/ceph-pools-audit.sh] [] [] [] [{RBD_POOL_REPLICATION 2 nil} {RBD_POOL_MIN_REPLICATION 1 nil} {RBD_POOL_CRUSH_RULE_NAME storage_tier_ruleset nil}] {map[] map[]} [{ceph-pools-bin true /tmp/ceph-pools-audit.sh ceph-pools-audit.sh } {etcceph false /etc/ceph } {ceph-etc true /etc/ceph/ceph.conf ceph.conf } {ceph-pools-audit-token-bsfbw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:OnFailure,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: ,},ServiceAccountName:ceph-pools-audit,DeprecatedServiceAccount:ceph-pools-audit,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[{default-registry-key}],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000815130} {node.kubernetes.io/unreachable Exists NoExecute 
0xc000815150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:25:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:25:02 +0000 UTC ContainersNotReady containers with unready status: [ceph-pools-audit-ceph-store]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:25:02 +0000 UTC ContainersNotReady containers with unready status: [ceph-pools-audit-ceph-store]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:25:02 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 19:25:02 +0000 UTC,ContainerStatuses:[{ceph-pools-audit-ceph-store {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 registry.local:9001/docker.io/starlingx/ceph-config-helper:v1.15.0 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} 2019-11-04T19:25:09.353 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.353 [INFO][205644] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" HandleID="chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Workload="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" 2019-11-04T19:25:09.360 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.360 [INFO][205644] ipam_plugin.go 220: Calico CNI IPAM handle=chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" HandleID="chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Workload="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" 2019-11-04T19:25:09.360 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.360 [INFO][205644] ipam_plugin.go 230: Auto assigning IP ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" HandleID="chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Workload="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc000354320), Attrs:map[string]string{"node":"controller-1", "pod":"ceph-pools-audit-1572895500-85zdb", "namespace":"kube-system"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T19:25:09.360 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.360 [INFO][205644] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T19:25:09.364 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.364 [INFO][205644] ipam.go 309: Looking up existing affinities for host handle="chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" host="controller-1" 2019-11-04T19:25:09.368 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.368 [INFO][205644] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" host="controller-1" 2019-11-04T19:25:09.369 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.369 [INFO][205644] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:25:09.373 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.373 [INFO][205644] ipam.go 208: Affinity is confirmed and block has been loaded 
cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:25:09.373 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.373 [INFO][205644] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" host="controller-1" 2019-11-04T19:25:09.374 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.374 [INFO][205644] ipam.go 1244: Creating new handle: chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a 2019-11-04T19:25:09.377 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.376 [INFO][205644] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" host="controller-1" 2019-11-04T19:25:09.379 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.379 [INFO][205644] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e310/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" host="controller-1" 2019-11-04T19:25:09.379 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.379 [INFO][205644] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e310/122] handle="chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" host="controller-1" 2019-11-04T19:25:09.380 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.380 [INFO][205644] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e310/122] handle="chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" host="controller-1" 2019-11-04T19:25:09.380 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.380 [INFO][205644] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e310/122] ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" HandleID="chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Workload="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" 2019-11-04T19:25:09.380 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.380 [INFO][205644] ipam_plugin.go 258: IPAM Result ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" HandleID="chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Workload="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc000428300)} 2019-11-04T19:25:09.382 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.382 [INFO][205614] k8s.go 361: Populated endpoint ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Namespace="kube-system" Pod="ceph-pools-audit-1572895500-85zdb" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0", GenerateName:"ceph-pools-audit-1572895500-", Namespace:"kube-system", SelfLink:"", UID:"7a9d8187-07a3-43af-9808-df079a26ee40", ResourceVersion:"8168840", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492302, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ceph-pools-audit", "app":"ceph-pools-audit", "controller-uid":"6a10a226-0765-490a-b7a9-6d82dac1aedd", "job-name":"ceph-pools-audit-1572895500", "projectcalico.org/namespace":"kube-system"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"ceph-pools-audit-1572895500-85zdb", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e310/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"calia2d3537ae67", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:25:09.382 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.382 [INFO][205614] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e310/128] ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Namespace="kube-system" Pod="ceph-pools-audit-1572895500-85zdb" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" 2019-11-04T19:25:09.382 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.382 [INFO][205614] network_linux.go 76: Setting the host side veth name to calia2d3537ae67 ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Namespace="kube-system" Pod="ceph-pools-audit-1572895500-85zdb" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" 2019-11-04T19:25:09.385 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.385 [INFO][205614] network_linux.go 411: Disabling IPv6 forwarding ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Namespace="kube-system" Pod="ceph-pools-audit-1572895500-85zdb" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" 2019-11-04T19:25:09.434 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.434 [INFO][205614] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Namespace="kube-system" Pod="ceph-pools-audit-1572895500-85zdb" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0", GenerateName:"ceph-pools-audit-1572895500-", Namespace:"kube-system", SelfLink:"", UID:"7a9d8187-07a3-43af-9808-df079a26ee40", ResourceVersion:"8168840", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492302, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ceph-pools-audit", "controller-uid":"6a10a226-0765-490a-b7a9-6d82dac1aedd", "job-name":"ceph-pools-audit-1572895500", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ceph-pools-audit"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", 
ContainerID:"c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a", Pod:"ceph-pools-audit-1572895500-85zdb", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e310/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"calia2d3537ae67", MAC:"5e:1f:42:43:73:80", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:25:09.437 controller-1 kubelet[88595]: info 2019-11-04 19:25:09.437 [INFO][205614] k8s.go 420: Wrote updated endpoint to datastore ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Namespace="kube-system" Pod="ceph-pools-audit-1572895500-85zdb" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" 2019-11-04T19:25:09.486 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/7a9d8187-07a3-43af-9808-df079a26ee40/volume-subpaths/ceph-pools-bin/ceph-pools-audit-ceph-store/0. 2019-11-04T19:25:09.555 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/7a9d8187-07a3-43af-9808-df079a26ee40/volume-subpaths/ceph-pools-bin/ceph-pools-audit-ceph-store/0. 2019-11-04T19:25:09.598 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/7a9d8187-07a3-43af-9808-df079a26ee40/volume-subpaths/ceph-etc/ceph-pools-audit-ceph-store/2. 2019-11-04T19:25:09.623 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/7a9d8187-07a3-43af-9808-df079a26ee40/volume-subpaths/ceph-etc/ceph-pools-audit-ceph-store/2. 2019-11-04T19:25:09.667 controller-1 containerd[12214]: info time="2019-11-04T19:25:09.667642122Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c79f4e9d5af893f7772ce4272e6e81457dc00d9ebd760332218a30c93d28c100/shim.sock" debug=false pid=205708 2019-11-04T19:25:12.000 controller-1 ntpd[87625]: info Listen normally on 31 calia2d3537ae67 fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T19:25:12.000 controller-1 ntpd[87625]: debug new interface(s) found: waking up resolver 2019-11-04T19:25:15.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.8% (avg per cpu); cpus: 36, Platform: 4.6% (Base: 3.4, k8s-system: 1.1), k8s-addon: 1.2 2019-11-04T19:25:15.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125914.1 MiB, Platform: 8986.0 MiB (Base: 8311.7, k8s-system: 674.3), k8s-addon: 7403.6 2019-11-04T19:25:15.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16468.6 MiB, cgroup-rss: 16393.7 MiB, Avail: 109445.5 MiB, Total: 125914.1 MiB 2019-11-04T19:25:15.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.34%, Anon: 5285.3 MiB, Avail: 58054.1 MiB, Total: 63339.4 MiB 2019-11-04T19:25:15.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.63%, Anon: 11183.3 MiB, Avail: 52243.3 MiB, Total: 63426.6 MiB 2019-11-04T19:25:25.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.5% (avg per cpu); cpus: 36, Platform: 4.4% (Base: 3.4, k8s-system: 1.0), k8s-addon: 1.0 2019-11-04T19:25:25.291 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125915.5 MiB, Platform: 8976.4 MiB (Base: 8302.2, k8s-system: 674.2), k8s-addon: 7403.9 2019-11-04T19:25:25.291 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16455.8 MiB, cgroup-rss: 16384.4 MiB, Avail: 109459.8 MiB, Total: 125915.5 MiB 
2019-11-04T19:25:25.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.33%, Anon: 5277.9 MiB, Avail: 58062.1 MiB, Total: 63340.0 MiB 2019-11-04T19:25:25.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.62%, Anon: 11177.8 MiB, Avail: 52249.2 MiB, Total: 63427.1 MiB 2019-11-04T19:25:34.003 controller-1 containerd[12214]: info time="2019-11-04T19:25:34.003035862Z" level=info msg="shim reaped" id=c79f4e9d5af893f7772ce4272e6e81457dc00d9ebd760332218a30c93d28c100 2019-11-04T19:25:34.013 controller-1 dockerd[12332]: info time="2019-11-04T19:25:34.013090849Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:25:34.456 controller-1 kubelet[88595]: info I1104 19:25:34.456505 88595 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ceph-pools-bin" (UniqueName: "kubernetes.io/configmap/7a9d8187-07a3-43af-9808-df079a26ee40-ceph-pools-bin") pod "7a9d8187-07a3-43af-9808-df079a26ee40" (UID: "7a9d8187-07a3-43af-9808-df079a26ee40") 2019-11-04T19:25:34.456 controller-1 kubelet[88595]: info I1104 19:25:34.456553 88595 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ceph-etc" (UniqueName: "kubernetes.io/configmap/7a9d8187-07a3-43af-9808-df079a26ee40-ceph-etc") pod "7a9d8187-07a3-43af-9808-df079a26ee40" (UID: "7a9d8187-07a3-43af-9808-df079a26ee40") 2019-11-04T19:25:34.456 controller-1 kubelet[88595]: info I1104 19:25:34.456583 88595 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ceph-pools-audit-token-bsfbw" (UniqueName: "kubernetes.io/secret/7a9d8187-07a3-43af-9808-df079a26ee40-ceph-pools-audit-token-bsfbw") pod "7a9d8187-07a3-43af-9808-df079a26ee40" (UID: "7a9d8187-07a3-43af-9808-df079a26ee40") 2019-11-04T19:25:34.456 controller-1 kubelet[88595]: info I1104 19:25:34.456612 88595 reconciler.go:181] operationExecutor.UnmountVolume started for volume "etcceph" (UniqueName: "kubernetes.io/empty-dir/7a9d8187-07a3-43af-9808-df079a26ee40-etcceph") pod "7a9d8187-07a3-43af-9808-df079a26ee40" (UID: "7a9d8187-07a3-43af-9808-df079a26ee40") 2019-11-04T19:25:34.456 controller-1 kubelet[88595]: info W1104 19:25:34.456710 88595 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/7a9d8187-07a3-43af-9808-df079a26ee40/volumes/kubernetes.io~empty-dir/etcceph: ClearQuota called, but quotas disabled 2019-11-04T19:25:34.456 controller-1 kubelet[88595]: info I1104 19:25:34.456796 88595 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a9d8187-07a3-43af-9808-df079a26ee40-etcceph" (OuterVolumeSpecName: "etcceph") pod "7a9d8187-07a3-43af-9808-df079a26ee40" (UID: "7a9d8187-07a3-43af-9808-df079a26ee40"). InnerVolumeSpecName "etcceph". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" 2019-11-04T19:25:34.472 controller-1 kubelet[88595]: info W1104 19:25:34.472734 88595 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/7a9d8187-07a3-43af-9808-df079a26ee40/volumes/kubernetes.io~configmap/ceph-pools-bin: ClearQuota called, but quotas disabled 2019-11-04T19:25:34.472 controller-1 kubelet[88595]: info I1104 19:25:34.472913 88595 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9d8187-07a3-43af-9808-df079a26ee40-ceph-pools-bin" (OuterVolumeSpecName: "ceph-pools-bin") pod "7a9d8187-07a3-43af-9808-df079a26ee40" (UID: "7a9d8187-07a3-43af-9808-df079a26ee40"). InnerVolumeSpecName "ceph-pools-bin". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T19:25:34.475 controller-1 kubelet[88595]: info W1104 19:25:34.475741 88595 empty_dir.go:421] Warning: Failed to clear quota on /var/lib/kubelet/pods/7a9d8187-07a3-43af-9808-df079a26ee40/volumes/kubernetes.io~configmap/ceph-etc: ClearQuota called, but quotas disabled 2019-11-04T19:25:34.475 controller-1 kubelet[88595]: info I1104 19:25:34.475937 88595 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a9d8187-07a3-43af-9808-df079a26ee40-ceph-etc" (OuterVolumeSpecName: "ceph-etc") pod "7a9d8187-07a3-43af-9808-df079a26ee40" (UID: "7a9d8187-07a3-43af-9808-df079a26ee40"). InnerVolumeSpecName "ceph-etc". PluginName "kubernetes.io/configmap", VolumeGidValue "" 2019-11-04T19:25:34.476 controller-1 kubelet[88595]: info I1104 19:25:34.476629 88595 operation_generator.go:831] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a9d8187-07a3-43af-9808-df079a26ee40-ceph-pools-audit-token-bsfbw" (OuterVolumeSpecName: "ceph-pools-audit-token-bsfbw") pod "7a9d8187-07a3-43af-9808-df079a26ee40" (UID: "7a9d8187-07a3-43af-9808-df079a26ee40"). InnerVolumeSpecName "ceph-pools-audit-token-bsfbw". PluginName "kubernetes.io/secret", VolumeGidValue "" 2019-11-04T19:25:34.508 controller-1 kubelet[88595]: info 2019-11-04 19:25:34.508 [INFO][210137] plugin.go 442: Extracted identifiers ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Node="controller-1" Orchestrator="k8s" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" 2019-11-04T19:25:34.514 controller-1 kubelet[88595]: info 2019-11-04 19:25:34.514 [WARNING][210137] workloadendpoint.go 74: Operation Delete is not supported on WorkloadEndpoint type 2019-11-04T19:25:34.514 controller-1 kubelet[88595]: info 2019-11-04 19:25:34.514 [INFO][210137] k8s.go 470: Endpoint deletion will be handled by Kubernetes deletion of the Pod. 
ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0", GenerateName:"ceph-pools-audit-1572895500-", Namespace:"kube-system", SelfLink:"", UID:"7a9d8187-07a3-43af-9808-df079a26ee40", ResourceVersion:"8169143", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492302, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/serviceaccount":"ceph-pools-audit", "app":"ceph-pools-audit", "controller-uid":"6a10a226-0765-490a-b7a9-6d82dac1aedd", "job-name":"ceph-pools-audit-1572895500", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"ceph-pools-audit-1572895500-85zdb", Endpoint:"eth0", IPNetworks:[]string(nil), IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"calia2d3537ae67", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:25:34.514 controller-1 kubelet[88595]: info 2019-11-04 19:25:34.514 [INFO][210137] k8s.go 477: Releasing IP address(es) ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" 2019-11-04T19:25:34.514 controller-1 kubelet[88595]: info 2019-11-04 19:25:34.514 [INFO][210137] utils.go 171: Calico CNI releasing IP address ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" 2019-11-04T19:25:34.533 controller-1 kubelet[88595]: info 2019-11-04 19:25:34.533 [INFO][210155] ipam_plugin.go 299: Releasing address using handleID ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" HandleID="chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Workload="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" 2019-11-04T19:25:34.533 controller-1 kubelet[88595]: info 2019-11-04 19:25:34.533 [INFO][210155] ipam.go 1145: Releasing all IPs with handle 'chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a' 2019-11-04T19:25:34.555 controller-1 kubelet[88595]: info 2019-11-04 19:25:34.555 [INFO][210155] ipam_plugin.go 308: Released address using handleID ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" HandleID="chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Workload="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" 2019-11-04T19:25:34.555 controller-1 kubelet[88595]: info 2019-11-04 19:25:34.555 [INFO][210155] ipam_plugin.go 317: Releasing address using workloadID ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" HandleID="chain.c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" Workload="controller--1-k8s-ceph--pools--audit--1572895500--85zdb-eth0" 2019-11-04T19:25:34.555 controller-1 kubelet[88595]: info 2019-11-04 19:25:34.555 [INFO][210155] ipam.go 1145: Releasing all IPs with handle 'kube-system.ceph-pools-audit-1572895500-85zdb' 2019-11-04T19:25:34.557 controller-1 kubelet[88595]: info I1104 19:25:34.556979 88595 
reconciler.go:301] Volume detached for volume "ceph-etc" (UniqueName: "kubernetes.io/configmap/7a9d8187-07a3-43af-9808-df079a26ee40-ceph-etc") on node "controller-1" DevicePath "" 2019-11-04T19:25:34.557 controller-1 kubelet[88595]: info I1104 19:25:34.556998 88595 reconciler.go:301] Volume detached for volume "ceph-pools-audit-token-bsfbw" (UniqueName: "kubernetes.io/secret/7a9d8187-07a3-43af-9808-df079a26ee40-ceph-pools-audit-token-bsfbw") on node "controller-1" DevicePath "" 2019-11-04T19:25:34.557 controller-1 kubelet[88595]: info I1104 19:25:34.557006 88595 reconciler.go:301] Volume detached for volume "etcceph" (UniqueName: "kubernetes.io/empty-dir/7a9d8187-07a3-43af-9808-df079a26ee40-etcceph") on node "controller-1" DevicePath "" 2019-11-04T19:25:34.557 controller-1 kubelet[88595]: info I1104 19:25:34.557015 88595 reconciler.go:301] Volume detached for volume "ceph-pools-bin" (UniqueName: "kubernetes.io/configmap/7a9d8187-07a3-43af-9808-df079a26ee40-ceph-pools-bin") on node "controller-1" DevicePath "" 2019-11-04T19:25:34.557 controller-1 kubelet[88595]: info 2019-11-04 19:25:34.557 [INFO][210137] k8s.go 481: Cleaning up netns ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" 2019-11-04T19:25:34.558 controller-1 kubelet[88595]: info 2019-11-04 19:25:34.557 [INFO][210137] network_linux.go 450: Calico CNI deleting device in netns /proc/204791/ns/net ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" 2019-11-04T19:25:34.000 controller-1 lldpd[12281]: warning removal request for address of fe80::ecee:eeff:feee:eeee%28, but no knowledge of it 2019-11-04T19:25:34.630 controller-1 kubelet[88595]: info 2019-11-04 19:25:34.630 [INFO][210137] network_linux.go 467: Calico CNI deleted device in netns /proc/204791/ns/net ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" 2019-11-04T19:25:34.630 controller-1 kubelet[88595]: info 2019-11-04 19:25:34.630 [INFO][210137] k8s.go 493: Teardown processing complete. 
ContainerID="c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" 2019-11-04T19:25:34.766 controller-1 containerd[12214]: info time="2019-11-04T19:25:34.766474064Z" level=info msg="shim reaped" id=c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a 2019-11-04T19:25:34.776 controller-1 dockerd[12332]: info time="2019-11-04T19:25:34.776134504Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" 2019-11-04T19:25:35.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.9% (avg per cpu); cpus: 36, Platform: 5.5% (Base: 4.4, k8s-system: 1.1), k8s-addon: 1.1 2019-11-04T19:25:35.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125913.7 MiB, Platform: 8971.9 MiB (Base: 8309.6, k8s-system: 662.3), k8s-addon: 7401.0 2019-11-04T19:25:35.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16451.4 MiB, cgroup-rss: 16377.0 MiB, Avail: 109462.2 MiB, Total: 125913.7 MiB 2019-11-04T19:25:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.32%, Anon: 5270.5 MiB, Avail: 58064.6 MiB, Total: 63335.1 MiB 2019-11-04T19:25:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.63%, Anon: 11180.9 MiB, Avail: 52249.1 MiB, Total: 63430.1 MiB 2019-11-04T19:25:35.428 controller-1 kubelet[88595]: info W1104 19:25:35.428783 88595 pod_container_deletor.go:75] Container "c6ca56180b4ebc2f49e6632423a8f991b30a2a383261ffa1685973ae4a46945a" not found in pod's containers 2019-11-04T19:25:36.000 controller-1 ntpd[87625]: info Deleting interface #31 calia2d3537ae67, fe80::ecee:eeff:feee:eeee#123, interface stats: received=0, sent=0, dropped=0, active_time=24 secs 2019-11-04T19:25:39.000 controller-1 dnsmasq-dhcp[111976]: info DHCPRENEW(vlan108) 00:03:00:01:3c:fd:fe:9e:65:b8 2019-11-04T19:25:39.000 controller-1 dnsmasq-dhcp[111976]: info DHCPREPLY(vlan108) fd00:204::59b8:d0e5:39e:baa3 00:03:00:01:3c:fd:fe:9e:65:b8 compute-1 2019-11-04T19:25:40.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:25:40.185 211008 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:65:b8' with ip 'fd00:204::59b8:d0e5:39e:baa3' 2019-11-04T19:25:45.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 7.3% (avg per cpu); cpus: 36, Platform: 6.1% (Base: 5.2, k8s-system: 1.0), k8s-addon: 1.1 2019-11-04T19:25:45.291 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125919.4 MiB, Platform: 8971.6 MiB (Base: 8309.3, k8s-system: 662.3), k8s-addon: 7400.8 2019-11-04T19:25:45.291 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16447.6 MiB, cgroup-rss: 16376.5 MiB, Avail: 109471.8 MiB, Total: 125919.4 MiB 2019-11-04T19:25:45.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.32%, Anon: 5272.5 MiB, Avail: 58064.1 MiB, Total: 63336.6 MiB 2019-11-04T19:25:45.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.62%, Anon: 11175.1 MiB, Avail: 52259.0 MiB, Total: 63434.1 MiB 2019-11-04T19:25:55.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.1% (avg per cpu); cpus: 36, Platform: 3.9% (Base: 3.0, k8s-system: 0.9), k8s-addon: 1.2 2019-11-04T19:25:55.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125918.3 MiB, Platform: 8977.5 MiB (Base: 8315.1, k8s-system: 662.4), k8s-addon: 7401.4 2019-11-04T19:25:55.291 controller-1 collectd[12276]: 
info 4K memory usage: Anon: 13.1%, Anon: 16457.4 MiB, cgroup-rss: 16383.0 MiB, Avail: 109460.9 MiB, Total: 125918.3 MiB 2019-11-04T19:25:55.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.33%, Anon: 5276.6 MiB, Avail: 58059.4 MiB, Total: 63336.0 MiB 2019-11-04T19:25:55.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.63%, Anon: 11180.8 MiB, Avail: 52252.7 MiB, Total: 63433.4 MiB 2019-11-04T19:26:05.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.6% (avg per cpu); cpus: 36, Platform: 5.2% (Base: 4.2, k8s-system: 1.0), k8s-addon: 1.3 2019-11-04T19:26:05.289 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125917.6 MiB, Platform: 8976.2 MiB (Base: 8313.5, k8s-system: 662.7), k8s-addon: 7400.3 2019-11-04T19:26:05.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16454.7 MiB, cgroup-rss: 16380.5 MiB, Avail: 109462.9 MiB, Total: 125917.6 MiB 2019-11-04T19:26:05.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.33%, Anon: 5277.0 MiB, Avail: 58056.7 MiB, Total: 63333.7 MiB 2019-11-04T19:26:05.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.62%, Anon: 11177.7 MiB, Avail: 52257.4 MiB, Total: 63435.1 MiB 2019-11-04T19:26:15.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.5% (avg per cpu); cpus: 36, Platform: 4.3% (Base: 3.3, k8s-system: 1.0), k8s-addon: 1.1 2019-11-04T19:26:15.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125921.9 MiB, Platform: 8979.6 MiB (Base: 8316.7, k8s-system: 662.9), k8s-addon: 7400.5 2019-11-04T19:26:15.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16459.4 MiB, cgroup-rss: 16384.3 MiB, Avail: 109462.5 MiB, Total: 125921.9 MiB 2019-11-04T19:26:15.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.33%, Anon: 5275.2 MiB, Avail: 58063.4 MiB, Total: 63338.5 MiB 2019-11-04T19:26:15.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.63%, Anon: 11184.2 MiB, Avail: 52250.5 MiB, Total: 63434.7 MiB 2019-11-04T19:26:25.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.5% (avg per cpu); cpus: 36, Platform: 5.4% (Base: 4.5, k8s-system: 0.9), k8s-addon: 1.0 2019-11-04T19:26:25.289 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125921.2 MiB, Platform: 8975.7 MiB (Base: 8312.7, k8s-system: 662.9), k8s-addon: 7400.1 2019-11-04T19:26:25.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16451.1 MiB, cgroup-rss: 16380.0 MiB, Avail: 109470.1 MiB, Total: 125921.2 MiB 2019-11-04T19:26:25.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.32%, Anon: 5268.3 MiB, Avail: 58071.3 MiB, Total: 63339.6 MiB 2019-11-04T19:26:25.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.63%, Anon: 11182.8 MiB, Avail: 52250.0 MiB, Total: 63432.8 MiB 2019-11-04T19:26:35.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.0% (avg per cpu); cpus: 36, Platform: 4.8% (Base: 3.8, k8s-system: 1.0), k8s-addon: 1.0 2019-11-04T19:26:35.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125921.1 MiB, Platform: 8999.2 MiB (Base: 8336.2, k8s-system: 663.0), k8s-addon: 7400.7 2019-11-04T19:26:35.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16479.5 MiB, cgroup-rss: 16404.0 MiB, Avail: 
109441.6 MiB, Total: 125921.1 MiB 2019-11-04T19:26:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.34%, Anon: 5280.7 MiB, Avail: 58056.4 MiB, Total: 63337.1 MiB 2019-11-04T19:26:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.65%, Anon: 11198.8 MiB, Avail: 52236.4 MiB, Total: 63435.2 MiB 2019-11-04T19:26:40.000 controller-1 dnsmasq-dhcp[111976]: info DHCPRENEW(vlan108) 00:03:00:01:3c:fd:fe:a0:19:68 2019-11-04T19:26:40.000 controller-1 dnsmasq-dhcp[111976]: info DHCPREPLY(vlan108) fd00:204::f7e1:1a09:6ba7:92e2 00:03:00:01:3c:fd:fe:a0:19:68 compute-7 2019-11-04T19:26:40.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:26:40.600 220409 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:a0:19:68' with ip 'fd00:204::f7e1:1a09:6ba7:92e2' 2019-11-04T19:26:45.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.3% (avg per cpu); cpus: 36, Platform: 5.2% (Base: 4.2, k8s-system: 1.0), k8s-addon: 1.0 2019-11-04T19:26:45.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125923.8 MiB, Platform: 8995.1 MiB (Base: 8332.0, k8s-system: 663.1), k8s-addon: 7400.4 2019-11-04T19:26:45.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16469.6 MiB, cgroup-rss: 16399.6 MiB, Avail: 109454.3 MiB, Total: 125923.8 MiB 2019-11-04T19:26:45.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.32%, Anon: 5269.7 MiB, Avail: 58070.5 MiB, Total: 63340.2 MiB 2019-11-04T19:26:45.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.66%, Anon: 11199.8 MiB, Avail: 52235.0 MiB, Total: 63434.8 MiB 2019-11-04T19:26:55.285 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.4% (avg per cpu); cpus: 36, Platform: 4.0% (Base: 3.1, k8s-system: 0.9), k8s-addon: 1.3 2019-11-04T19:26:55.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125925.3 MiB, Platform: 8975.3 MiB (Base: 8312.2, k8s-system: 663.2), k8s-addon: 7400.9 2019-11-04T19:26:55.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16456.3 MiB, cgroup-rss: 16380.3 MiB, Avail: 109468.9 MiB, Total: 125925.3 MiB 2019-11-04T19:26:55.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.31%, Anon: 5262.5 MiB, Avail: 58077.1 MiB, Total: 63339.6 MiB 2019-11-04T19:26:55.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.65%, Anon: 11193.8 MiB, Avail: 52243.1 MiB, Total: 63436.9 MiB 2019-11-04T19:27:05.278 controller-1 collectd[12276]: info degrade notifier: {"service":"collectd_notifier","hostname":"controller-1","degrade":"clear","resource":""} 2019-11-04T19:27:05.285 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.6% (avg per cpu); cpus: 36, Platform: 4.4% (Base: 3.4, k8s-system: 1.0), k8s-addon: 1.1 2019-11-04T19:27:05.291 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125921.1 MiB, Platform: 8972.3 MiB (Base: 8309.0, k8s-system: 663.3), k8s-addon: 7400.6 2019-11-04T19:27:05.291 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16451.5 MiB, cgroup-rss: 16377.0 MiB, Avail: 109469.6 MiB, Total: 125921.1 MiB 2019-11-04T19:27:05.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.31%, Anon: 5261.9 MiB, Avail: 58076.6 MiB, Total: 63338.5 MiB 2019-11-04T19:27:05.292 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.64%, 
Anon: 11189.6 MiB, Avail: 52244.3 MiB, Total: 63433.8 MiB 2019-11-04T19:27:15.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.1% (avg per cpu); cpus: 36, Platform: 4.7% (Base: 3.7, k8s-system: 1.0), k8s-addon: 1.3 2019-11-04T19:27:15.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125920.4 MiB, Platform: 8973.5 MiB (Base: 8310.0, k8s-system: 663.4), k8s-addon: 7401.2 2019-11-04T19:27:15.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16453.8 MiB, cgroup-rss: 16378.8 MiB, Avail: 109466.5 MiB, Total: 125920.4 MiB 2019-11-04T19:27:15.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.30%, Anon: 5260.3 MiB, Avail: 58079.5 MiB, Total: 63339.8 MiB 2019-11-04T19:27:15.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.65%, Anon: 11193.6 MiB, Avail: 52240.5 MiB, Total: 63434.1 MiB 2019-11-04T19:27:18.356 controller-1 systemd[1]: info Starting Cleanup of Temporary Directories... 2019-11-04T19:27:18.378 controller-1 systemd[1]: info Started Cleanup of Temporary Directories. 2019-11-04T19:27:25.285 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.3% (avg per cpu); cpus: 36, Platform: 4.1% (Base: 3.3, k8s-system: 0.8), k8s-addon: 1.0 2019-11-04T19:27:25.291 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125922.8 MiB, Platform: 8964.7 MiB (Base: 8301.3, k8s-system: 663.5), k8s-addon: 7401.9 2019-11-04T19:27:25.291 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16443.2 MiB, cgroup-rss: 16370.8 MiB, Avail: 109479.5 MiB, Total: 125922.8 MiB 2019-11-04T19:27:25.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.31%, Anon: 5262.0 MiB, Avail: 58076.8 MiB, Total: 63338.8 MiB 2019-11-04T19:27:25.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.63%, Anon: 11181.2 MiB, Avail: 52253.9 MiB, Total: 63435.1 MiB 2019-11-04T19:27:33.000 controller-1 dnsmasq-dhcp[111976]: info DHCPRENEW(vlan108) 00:03:00:01:3c:fd:fe:a0:15:60 2019-11-04T19:27:33.000 controller-1 dnsmasq-dhcp[111976]: info DHCPREPLY(vlan108) fd00:204::d30:8281:294e:1413 00:03:00:01:3c:fd:fe:a0:15:60 compute-12 2019-11-04T19:27:34.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:27:34.279 228685 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:a0:15:60' with ip 'fd00:204::d30:8281:294e:1413' 2019-11-04T19:27:35.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.4% (avg per cpu); cpus: 36, Platform: 5.1% (Base: 4.0, k8s-system: 1.0), k8s-addon: 1.2 2019-11-04T19:27:35.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125919.9 MiB, Platform: 8982.2 MiB (Base: 8318.7, k8s-system: 663.5), k8s-addon: 7401.9 2019-11-04T19:27:35.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16462.8 MiB, cgroup-rss: 16388.2 MiB, Avail: 109457.1 MiB, Total: 125919.9 MiB 2019-11-04T19:27:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.31%, Anon: 5264.7 MiB, Avail: 58072.1 MiB, Total: 63336.8 MiB 2019-11-04T19:27:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.65%, Anon: 11198.1 MiB, Avail: 52236.3 MiB, Total: 63434.4 MiB 2019-11-04T19:27:36.422 controller-1 systemd[1]: info Created slice User Slice of root. 2019-11-04T19:27:36.430 controller-1 systemd[1]: info Started Session 2 of user root. 
2019-11-04T19:27:45.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.4% (avg per cpu); cpus: 36, Platform: 5.4% (Base: 4.6, k8s-system: 0.8), k8s-addon: 0.9 2019-11-04T19:27:45.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125918.1 MiB, Platform: 8969.6 MiB (Base: 8306.0, k8s-system: 663.6), k8s-addon: 7391.1 2019-11-04T19:27:45.291 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16436.5 MiB, cgroup-rss: 16364.8 MiB, Avail: 109481.5 MiB, Total: 125918.1 MiB 2019-11-04T19:27:45.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.30%, Anon: 5259.6 MiB, Avail: 58075.6 MiB, Total: 63335.2 MiB 2019-11-04T19:27:45.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.62%, Anon: 11176.9 MiB, Avail: 52257.3 MiB, Total: 63434.2 MiB 2019-11-04T19:27:55.290 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.7% (avg per cpu); cpus: 36, Platform: 4.4% (Base: 3.4, k8s-system: 1.0), k8s-addon: 1.2 2019-11-04T19:27:55.296 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125924.4 MiB, Platform: 8975.5 MiB (Base: 8312.1, k8s-system: 663.3), k8s-addon: 7390.8 2019-11-04T19:27:55.297 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16445.8 MiB, cgroup-rss: 16373.6 MiB, Avail: 109478.5 MiB, Total: 125924.4 MiB 2019-11-04T19:27:55.297 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.31%, Anon: 5262.2 MiB, Avail: 58077.4 MiB, Total: 63339.6 MiB 2019-11-04T19:27:55.297 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.63%, Anon: 11184.1 MiB, Avail: 52251.9 MiB, Total: 63436.0 MiB 2019-11-04T19:27:55.591 controller-1 systemd[1]: info Removed slice User Slice of root. 2019-11-04T19:28:05.286 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.8% (avg per cpu); cpus: 36, Platform: 4.5% (Base: 3.5, k8s-system: 1.0), k8s-addon: 1.2 2019-11-04T19:28:05.293 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125920.0 MiB, Platform: 8971.0 MiB (Base: 8307.9, k8s-system: 663.1), k8s-addon: 7387.0 2019-11-04T19:28:05.294 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16436.9 MiB, cgroup-rss: 16362.1 MiB, Avail: 109483.2 MiB, Total: 125920.0 MiB 2019-11-04T19:28:05.294 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.30%, Anon: 5258.8 MiB, Avail: 58076.9 MiB, Total: 63335.8 MiB 2019-11-04T19:28:05.294 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.62%, Anon: 11178.0 MiB, Avail: 52257.4 MiB, Total: 63435.5 MiB 2019-11-04T19:28:10.713 controller-1 systemd[1]: info Created slice User Slice of root. 2019-11-04T19:28:10.725 controller-1 systemd[1]: info Started Session 2 of user root. 2019-11-04T19:28:10.768 controller-1 systemd[1]: info Removed slice User Slice of root. 
2019-11-04T19:28:15.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.4% (avg per cpu); cpus: 36, Platform: 4.2% (Base: 3.3, k8s-system: 0.9), k8s-addon: 1.1 2019-11-04T19:28:15.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125923.4 MiB, Platform: 8975.9 MiB (Base: 8312.8, k8s-system: 663.1), k8s-addon: 7377.7 2019-11-04T19:28:15.291 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16433.8 MiB, cgroup-rss: 16357.7 MiB, Avail: 109489.6 MiB, Total: 125923.4 MiB 2019-11-04T19:28:15.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.29%, Anon: 5248.4 MiB, Avail: 58090.6 MiB, Total: 63338.9 MiB 2019-11-04T19:28:15.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.63%, Anon: 11185.4 MiB, Avail: 52250.2 MiB, Total: 63435.7 MiB 2019-11-04T19:28:17.000 controller-1 dnsmasq-dhcp[111976]: info DHCPRENEW(vlan108) 00:03:00:01:3c:fd:fe:9f:70:50 2019-11-04T19:28:17.000 controller-1 dnsmasq-dhcp[111976]: info DHCPREPLY(vlan108) fd00:204::37ad:cb:6285:8372 00:03:00:01:3c:fd:fe:9f:70:50 compute-3 2019-11-04T19:28:17.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:28:17.639 236095 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9f:70:50' with ip 'fd00:204::37ad:cb:6285:8372' 2019-11-04T19:28:19.398 controller-1 systemd[1]: info Created slice User Slice of root. 2019-11-04T19:28:19.410 controller-1 systemd[1]: info Started Session 2 of user root. 2019-11-04T19:28:19.437 controller-1 systemd[1]: info Removed slice User Slice of root. 2019-11-04T19:28:24.236 controller-1 systemd[1]: info Created slice User Slice of root. 2019-11-04T19:28:24.249 controller-1 systemd[1]: info Started Session 2 of user root. 2019-11-04T19:28:24.293 controller-1 systemd[1]: info Removed slice User Slice of root. 
2019-11-04T19:28:24.000 controller-1 dnsmasq-dhcp[111976]: info DHCPRENEW(vlan108) 00:03:00:01:3c:fd:fe:9e:65:28 2019-11-04T19:28:24.000 controller-1 dnsmasq-dhcp[111976]: info DHCPREPLY(vlan108) fd00:204::552b:2bbe:c8fa:502f 00:03:00:01:3c:fd:fe:9e:65:28 compute-2 2019-11-04T19:28:25.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:28:25.130 237034 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:65:28' with ip 'fd00:204::552b:2bbe:c8fa:502f' 2019-11-04T19:28:25.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.7% (avg per cpu); cpus: 36, Platform: 5.4% (Base: 4.4, k8s-system: 1.0), k8s-addon: 1.1 2019-11-04T19:28:25.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125920.0 MiB, Platform: 8971.1 MiB (Base: 8307.8, k8s-system: 663.3), k8s-addon: 7379.1 2019-11-04T19:28:25.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16426.6 MiB, cgroup-rss: 16354.3 MiB, Avail: 109493.4 MiB, Total: 125920.0 MiB 2019-11-04T19:28:25.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.28%, Anon: 5243.6 MiB, Avail: 58093.4 MiB, Total: 63337.0 MiB 2019-11-04T19:28:25.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.63%, Anon: 11183.0 MiB, Avail: 52251.2 MiB, Total: 63434.2 MiB 2019-11-04T19:28:35.283 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.9% (avg per cpu); cpus: 36, Platform: 4.8% (Base: 3.8, k8s-system: 1.0), k8s-addon: 1.1 2019-11-04T19:28:35.289 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125920.4 MiB, Platform: 8980.8 MiB (Base: 8317.4, k8s-system: 663.3), k8s-addon: 7374.6 2019-11-04T19:28:35.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16434.5 MiB, cgroup-rss: 16359.5 MiB, Avail: 109486.0 MiB, Total: 125920.4 MiB 2019-11-04T19:28:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.30%, Anon: 5255.1 MiB, Avail: 58082.3 MiB, Total: 63337.4 MiB 2019-11-04T19:28:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.62%, Anon: 11179.4 MiB, Avail: 52254.9 MiB, Total: 63434.3 MiB 2019-11-04T19:28:36.513 controller-1 systemd[1]: info Created slice User Slice of root. 2019-11-04T19:28:36.524 controller-1 systemd[1]: info Started Session 2 of user root. 2019-11-04T19:28:37.370 controller-1 systemd[1]: info Removed slice User Slice of root. 
2019-11-04T19:28:42.000 controller-1 dnsmasq-dhcp[111976]: info DHCPRENEW(vlan108) 00:03:00:01:3c:fd:fe:9e:67:70 2019-11-04T19:28:42.000 controller-1 dnsmasq-dhcp[111976]: info DHCPREPLY(vlan108) fd00:204::61d2:4f2f:eb61:78a1 00:03:00:01:3c:fd:fe:9e:67:70 compute-17 2019-11-04T19:28:42.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:28:42.804 239549 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:67:70' with ip 'fd00:204::61d2:4f2f:eb61:78a1' 2019-11-04T19:28:45.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.1% (avg per cpu); cpus: 36, Platform: 4.9% (Base: 4.1, k8s-system: 0.9), k8s-addon: 1.0 2019-11-04T19:28:45.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125920.3 MiB, Platform: 8976.9 MiB (Base: 8313.5, k8s-system: 663.4), k8s-addon: 7375.2 2019-11-04T19:28:45.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16427.8 MiB, cgroup-rss: 16356.2 MiB, Avail: 109492.5 MiB, Total: 125920.3 MiB 2019-11-04T19:28:45.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.28%, Anon: 5246.9 MiB, Avail: 58090.8 MiB, Total: 63337.7 MiB 2019-11-04T19:28:45.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.63%, Anon: 11180.9 MiB, Avail: 52253.2 MiB, Total: 63434.1 MiB 2019-11-04T19:28:47.287 controller-1 systemd[1]: info Created slice User Slice of root. 2019-11-04T19:28:47.298 controller-1 systemd[1]: info Started Session 2 of user root. 2019-11-04T19:28:47.341 controller-1 systemd[1]: info Removed slice User Slice of root. 2019-11-04T19:28:55.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.9% (avg per cpu); cpus: 36, Platform: 4.7% (Base: 3.7, k8s-system: 1.0), k8s-addon: 1.1 2019-11-04T19:28:55.291 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125922.4 MiB, Platform: 8990.7 MiB (Base: 8327.4, k8s-system: 663.4), k8s-addon: 7375.0 2019-11-04T19:28:55.291 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16444.1 MiB, cgroup-rss: 16369.8 MiB, Avail: 109478.3 MiB, Total: 125922.4 MiB 2019-11-04T19:28:55.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.29%, Anon: 5252.8 MiB, Avail: 58085.6 MiB, Total: 63338.4 MiB 2019-11-04T19:28:55.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.64%, Anon: 11191.3 MiB, Avail: 52243.9 MiB, Total: 63435.2 MiB 2019-11-04T19:29:04.000 controller-1 dnsmasq-dhcp[111976]: info DHCPRENEW(vlan108) 00:03:00:01:3c:fd:fe:9e:66:a8 2019-11-04T19:29:04.000 controller-1 dnsmasq-dhcp[111976]: info DHCPREPLY(vlan108) fd00:204::e6c5:664e:a972:2e57 00:03:00:01:3c:fd:fe:9e:66:a8 compute-19 2019-11-04T19:29:05.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:29:05.014 243659 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:66:a8' with ip 'fd00:204::e6c5:664e:a972:2e57' 2019-11-04T19:29:05.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.9% (avg per cpu); cpus: 36, Platform: 4.5% (Base: 3.6, k8s-system: 0.9), k8s-addon: 1.3 2019-11-04T19:29:05.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125920.1 MiB, Platform: 8977.9 MiB (Base: 8314.6, k8s-system: 663.4), k8s-addon: 7375.5 2019-11-04T19:29:05.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16432.6 MiB, cgroup-rss: 16357.5 MiB, Avail: 109487.6 MiB, Total: 125920.1 MiB 
2019-11-04T19:29:05.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.29%, Anon: 5252.5 MiB, Avail: 58083.6 MiB, Total: 63336.1 MiB 2019-11-04T19:29:05.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.62%, Anon: 11180.1 MiB, Avail: 52255.2 MiB, Total: 63435.3 MiB 2019-11-04T19:29:11.000 controller-1 dnsmasq-dhcp[111976]: info DHCPRENEW(vlan108) 00:03:00:01:3c:fd:fe:9e:67:20 2019-11-04T19:29:11.000 controller-1 dnsmasq-dhcp[111976]: info DHCPREPLY(vlan108) fd00:204::a4e4:77a2:377e:a63c 00:03:00:01:3c:fd:fe:9e:67:20 compute-18 2019-11-04T19:29:11.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:29:11.633 244423 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:67:20' with ip 'fd00:204::a4e4:77a2:377e:a63c' 2019-11-04T19:29:15.285 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.3% (avg per cpu); cpus: 36, Platform: 4.1% (Base: 3.2, k8s-system: 0.9), k8s-addon: 1.1 2019-11-04T19:29:15.293 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125920.0 MiB, Platform: 8980.1 MiB (Base: 8316.7, k8s-system: 663.4), k8s-addon: 7375.3 2019-11-04T19:29:15.294 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16434.9 MiB, cgroup-rss: 16359.5 MiB, Avail: 109485.1 MiB, Total: 125920.0 MiB 2019-11-04T19:29:15.294 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.29%, Anon: 5249.4 MiB, Avail: 58090.6 MiB, Total: 63340.0 MiB 2019-11-04T19:29:15.294 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.63%, Anon: 11185.5 MiB, Avail: 52245.7 MiB, Total: 63431.2 MiB 2019-11-04T19:29:17.000 controller-1 dnsmasq-dhcp[111976]: info DHCPRENEW(vlan108) 00:03:00:01:3c:fd:fe:a0:11:00 2019-11-04T19:29:17.000 controller-1 dnsmasq-dhcp[111976]: info DHCPREPLY(vlan108) fd00:204::2966:7701:a798:3e3a 00:03:00:01:3c:fd:fe:a0:11:00 compute-6 2019-11-04T19:29:17.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:29:17.987 245347 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:a0:11:00' with ip 'fd00:204::2966:7701:a798:3e3a' 2019-11-04T19:29:25.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.3% (avg per cpu); cpus: 36, Platform: 5.1% (Base: 4.1, k8s-system: 1.0), k8s-addon: 1.1 2019-11-04T19:29:25.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125919.3 MiB, Platform: 8974.4 MiB (Base: 8311.0, k8s-system: 663.4), k8s-addon: 7375.8 2019-11-04T19:29:25.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.0%, Anon: 16427.0 MiB, cgroup-rss: 16354.3 MiB, Avail: 109492.3 MiB, Total: 125919.3 MiB 2019-11-04T19:29:25.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.28%, Anon: 5244.2 MiB, Avail: 58092.4 MiB, Total: 63336.7 MiB 2019-11-04T19:29:25.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.63%, Anon: 11182.8 MiB, Avail: 52251.1 MiB, Total: 63433.9 MiB 2019-11-04T19:29:35.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.8% (avg per cpu); cpus: 36, Platform: 4.7% (Base: 3.8, k8s-system: 0.9), k8s-addon: 1.0 2019-11-04T19:29:35.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125918.6 MiB, Platform: 8981.5 MiB (Base: 8318.0, k8s-system: 663.6), k8s-addon: 7376.4 2019-11-04T19:29:35.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16435.6 MiB, cgroup-rss: 16361.9 MiB, Avail: 
109483.0 MiB, Total: 125918.6 MiB 2019-11-04T19:29:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.30%, Anon: 5255.6 MiB, Avail: 58079.1 MiB, Total: 63334.7 MiB 2019-11-04T19:29:35.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.62%, Anon: 11180.0 MiB, Avail: 52255.1 MiB, Total: 63435.1 MiB 2019-11-04T19:29:45.284 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.2% (avg per cpu); cpus: 36, Platform: 5.1% (Base: 4.2, k8s-system: 0.9), k8s-addon: 1.0 2019-11-04T19:29:45.290 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125917.7 MiB, Platform: 8983.3 MiB (Base: 8319.8, k8s-system: 663.6), k8s-addon: 7375.9 2019-11-04T19:29:45.290 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16434.6 MiB, cgroup-rss: 16363.4 MiB, Avail: 109483.1 MiB, Total: 125917.7 MiB 2019-11-04T19:29:45.290 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.29%, Anon: 5250.0 MiB, Avail: 58085.1 MiB, Total: 63335.0 MiB 2019-11-04T19:29:45.290 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.63%, Anon: 11184.6 MiB, Avail: 52249.2 MiB, Total: 63433.8 MiB 2019-11-04T19:29:54.541 controller-1 systemd[1]: info Created slice User Slice of root. 2019-11-04T19:29:54.557 controller-1 systemd[1]: info Started Session 2 of user root. 2019-11-04T19:29:55.285 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.7% (avg per cpu); cpus: 36, Platform: 4.1% (Base: 3.1, k8s-system: 1.0), k8s-addon: 1.5 2019-11-04T19:29:55.292 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125918.4 MiB, Platform: 8995.3 MiB (Base: 8331.8, k8s-system: 663.6), k8s-addon: 7376.4 2019-11-04T19:29:55.292 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16450.3 MiB, cgroup-rss: 16373.6 MiB, Avail: 109468.1 MiB, Total: 125918.4 MiB 2019-11-04T19:29:55.292 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.29%, Anon: 5252.6 MiB, Avail: 58084.2 MiB, Total: 63336.8 MiB 2019-11-04T19:29:55.292 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.65%, Anon: 11197.7 MiB, Avail: 52235.2 MiB, Total: 63432.9 MiB 2019-11-04T19:29:57.000 controller-1 dnsmasq-dhcp[111976]: info DHCPRENEW(vlan108) 00:03:00:01:3c:fd:fe:9e:65:f0 2019-11-04T19:29:57.000 controller-1 dnsmasq-dhcp[111976]: info DHCPREPLY(vlan108) fd00:204::92d9:578f:f7c:e755 00:03:00:01:3c:fd:fe:9e:65:f0 compute-13 2019-11-04T19:29:57.000 controller-1 dnsmasq-script[111976]: debug sysinv 2019-11-04 19:29:57.943 251695 INFO sysinv.cmd.dnsmasq_lease_update [-] Called 'old' for mac '3c:fd:fe:9e:65:f0' with ip 'fd00:204::92d9:578f:f7c:e755' 2019-11-04T19:30:01.430 controller-1 systemd[1]: info Started Session 5 of user root. 
2019-11-04T19:30:03.426 controller-1 kubelet[88595]: info I1104 19:30:03.426729 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "etcceph" (UniqueName: "kubernetes.io/empty-dir/05ef54df-35e7-4218-8b7f-bd69bdbc6947-etcceph") pod "ceph-pools-audit-1572895800-74bml" (UID: "05ef54df-35e7-4218-8b7f-bd69bdbc6947") 2019-11-04T19:30:03.426 controller-1 kubelet[88595]: info I1104 19:30:03.426791 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ceph-etc" (UniqueName: "kubernetes.io/configmap/05ef54df-35e7-4218-8b7f-bd69bdbc6947-ceph-etc") pod "ceph-pools-audit-1572895800-74bml" (UID: "05ef54df-35e7-4218-8b7f-bd69bdbc6947") 2019-11-04T19:30:03.426 controller-1 kubelet[88595]: info I1104 19:30:03.426897 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ceph-pools-bin" (UniqueName: "kubernetes.io/configmap/05ef54df-35e7-4218-8b7f-bd69bdbc6947-ceph-pools-bin") pod "ceph-pools-audit-1572895800-74bml" (UID: "05ef54df-35e7-4218-8b7f-bd69bdbc6947") 2019-11-04T19:30:03.426 controller-1 kubelet[88595]: info I1104 19:30:03.426940 88595 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ceph-pools-audit-token-bsfbw" (UniqueName: "kubernetes.io/secret/05ef54df-35e7-4218-8b7f-bd69bdbc6947-ceph-pools-audit-token-bsfbw") pod "ceph-pools-audit-1572895800-74bml" (UID: "05ef54df-35e7-4218-8b7f-bd69bdbc6947") 2019-11-04T19:30:03.541 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/05ef54df-35e7-4218-8b7f-bd69bdbc6947/volumes/kubernetes.io~secret/ceph-pools-audit-token-bsfbw. 2019-11-04T19:30:03.687 controller-1 dockerd[12332]: info time="2019-11-04T19:30:03.687539262Z" level=info msg="No non-localhost DNS nameservers are left in resolv.conf. 
Using default external servers: [nameserver 8.8.8.8 nameserver 8.8.4.4]" 2019-11-04T19:30:03.693 controller-1 containerd[12214]: info time="2019-11-04T19:30:03.693197207Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00/shim.sock" debug=false pid=253038 2019-11-04T19:30:05.464 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 6.3% (avg per cpu); cpus: 36, Platform: 5.0% (Base: 4.2, k8s-system: 0.9), k8s-addon: 1.2 2019-11-04T19:30:05.551 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125910.3 MiB, Platform: 8989.9 MiB (Base: 8326.3, k8s-system: 663.6), k8s-addon: 7376.0 2019-11-04T19:30:05.551 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16444.6 MiB, cgroup-rss: 16370.0 MiB, Avail: 109465.8 MiB, Total: 125910.3 MiB 2019-11-04T19:30:05.551 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.29%, Anon: 5251.3 MiB, Avail: 58076.6 MiB, Total: 63327.9 MiB 2019-11-04T19:30:05.551 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.65%, Anon: 11193.3 MiB, Avail: 52240.5 MiB, Total: 63433.7 MiB 2019-11-04T19:30:09.610 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.610 [INFO][253883] plugin.go 75: Extracted identifiers EndpointIDs=&utils.WEPIdentifiers{Namespace:"kube-system", WEPName:"", WorkloadEndpointIdentifiers:names.WorkloadEndpointIdentifiers{Node:"controller-1", Orchestrator:"k8s", Endpoint:"eth0", Workload:"", Pod:"ceph-pools-audit-1572895800-74bml", ContainerID:"936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00"}} 2019-11-04T19:30:09.626 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.626 [INFO][253883] plugin.go 166: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {controller--1-k8s-ceph--pools--audit--1572895800--74bml-eth0 ceph-pools-audit-1572895800- kube-system 05ef54df-35e7-4218-8b7f-bd69bdbc6947 8171719 0 2019-11-04 19:30:03 +0000 UTC map[app:ceph-pools-audit controller-uid:19c79902-36c8-402a-adf6-965e2f19026c job-name:ceph-pools-audit-1572895800 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:ceph-pools-audit] map[] [] nil [] } {k8s controller-1 ceph-pools-audit-1572895800-74bml eth0 [] [] [kns.kube-system ksa.kube-system.ceph-pools-audit] cali2540c9589e3 []}} ContainerID="936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" Namespace="kube-system" Pod="ceph-pools-audit-1572895800-74bml" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895800--74bml-" 2019-11-04T19:30:09.626 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.626 [INFO][253883] k8s.go 60: Extracted identifiers for CmdAddK8s ContainerID="936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" Namespace="kube-system" Pod="ceph-pools-audit-1572895800-74bml" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895800--74bml-eth0" 2019-11-04T19:30:09.628 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.628 [INFO][253883] k8s.go 781: namespace info &Namespace{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:kube-system,GenerateName:,Namespace:,SelfLink:/api/v1/namespaces/kube-system,UID:5d016a6c-19e8-4b97-88a9-b6113a3cb736,ResourceVersion:5,Generation:0,CreationTimestamp:2019-10-25 15:09:05 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NamespaceSpec{Finalizers:[kubernetes],},Status:NamespaceStatus{Phase:Active,},} 2019-11-04T19:30:09.629 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.629 [INFO][253883] k8s.go 790: pod info &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:ceph-pools-audit-1572895800-74bml,GenerateName:ceph-pools-audit-1572895800-,Namespace:kube-system,SelfLink:/api/v1/namespaces/kube-system/pods/ceph-pools-audit-1572895800-74bml,UID:05ef54df-35e7-4218-8b7f-bd69bdbc6947,ResourceVersion:8171719,Generation:0,CreationTimestamp:2019-11-04 19:30:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{app: ceph-pools-audit,controller-uid: 19c79902-36c8-402a-adf6-965e2f19026c,job-name: ceph-pools-audit-1572895800,},Annotations:map[string]string{},OwnerReferences:[{batch/v1 Job ceph-pools-audit-1572895800 19c79902-36c8-402a-adf6-965e2f19026c 0xc00056bc4b 0xc00056bc4c}],Finalizers:[],ClusterName:,Initializers:nil,},Spec:PodSpec{Volumes:[{ceph-pools-bin {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:ceph-pools-bin,},Items:[],DefaultMode:*365,Optional:nil,} nil nil nil nil nil nil nil nil}} {etcceph {nil &EmptyDirVolumeSource{Medium:,SizeLimit:,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}} {ceph-etc {nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil &ConfigMapVolumeSource{LocalObjectReference:LocalObjectReference{Name:ceph-etc,},Items:[],DefaultMode:*292,Optional:nil,} nil nil nil nil nil nil nil nil}} {ceph-pools-audit-token-bsfbw {nil nil nil nil nil &SecretVolumeSource{SecretName:ceph-pools-audit-token-bsfbw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{ceph-pools-audit-ceph-store registry.local:9001/docker.io/starlingx/ceph-config-helper:v1.15.0 [/tmp/ceph-pools-audit.sh] [] [] [] [{RBD_POOL_REPLICATION 2 nil} {RBD_POOL_MIN_REPLICATION 1 nil} {RBD_POOL_CRUSH_RULE_NAME storage_tier_ruleset nil}] {map[] map[]} [{ceph-pools-bin true /tmp/ceph-pools-audit.sh ceph-pools-audit.sh } {etcceph false /etc/ceph } {ceph-etc true /etc/ceph/ceph.conf ceph.conf } {ceph-pools-audit-token-bsfbw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:OnFailure,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{node-role.kubernetes.io/master: ,},ServiceAccountName:ceph-pools-audit,DeprecatedServiceAccount:ceph-pools-audit,NodeName:controller-1,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],},ImagePullSecrets:[{default-registry-key}],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00056be70} {node.kubernetes.io/unreachable Exists NoExecute 
0xc00056be90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:30:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:30:03 +0000 UTC ContainersNotReady containers with unready status: [ceph-pools-audit-ceph-store]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:30:03 +0000 UTC ContainersNotReady containers with unready status: [ceph-pools-audit-ceph-store]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-04 19:30:03 +0000 UTC }],Message:,Reason:,HostIP:fd00:204::4,PodIP:,StartTime:2019-11-04 19:30:03 +0000 UTC,ContainerStatuses:[{ceph-pools-audit-ceph-store {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 registry.local:9001/docker.io/starlingx/ceph-config-helper:v1.15.0 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} 2019-11-04T19:30:09.648 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.647 [INFO][253910] ipam_plugin.go 208: Calico CNI IPAM request count IPv4=0 IPv6=1 ContainerID="936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" HandleID="chain.936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" Workload="controller--1-k8s-ceph--pools--audit--1572895800--74bml-eth0" 2019-11-04T19:30:09.656 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.656 [INFO][253910] ipam_plugin.go 220: Calico CNI IPAM handle=chain.936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00 ContainerID="936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" HandleID="chain.936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" Workload="controller--1-k8s-ceph--pools--audit--1572895800--74bml-eth0" 2019-11-04T19:30:09.656 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.656 [INFO][253910] ipam_plugin.go 230: Auto assigning IP ContainerID="936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" HandleID="chain.936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" Workload="controller--1-k8s-ceph--pools--audit--1572895800--74bml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:0, Num6:1, HandleID:(*string)(0xc0002d8560), Attrs:map[string]string{"node":"controller-1", "pod":"ceph-pools-audit-1572895800-74bml", "namespace":"kube-system"}, Hostname:"controller-1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0} 2019-11-04T19:30:09.656 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.656 [INFO][253910] ipam.go 83: Auto-assign 0 ipv4, 1 ipv6 addrs for host 'controller-1' 2019-11-04T19:30:09.660 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.660 [INFO][253910] ipam.go 309: Looking up existing affinities for host handle="chain.936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" host="controller-1" 2019-11-04T19:30:09.664 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.664 [INFO][253910] ipam.go 373: Trying affinity for fd00:206::a4ce:fec1:5423:e300/122 handle="chain.936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" host="controller-1" 2019-11-04T19:30:09.665 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.665 [INFO][253910] ipam.go 131: Attempting to load block cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:30:09.668 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.667 [INFO][253910] ipam.go 208: Affinity is confirmed and block has been loaded 
cidr=fd00:206::a4ce:fec1:5423:e300/122 host="controller-1" 2019-11-04T19:30:09.668 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.668 [INFO][253910] ipam.go 789: Attempting to assign 1 addresses from block block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" host="controller-1" 2019-11-04T19:30:09.669 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.669 [INFO][253910] ipam.go 1244: Creating new handle: chain.936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00 2019-11-04T19:30:09.671 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.671 [INFO][253910] ipam.go 812: Writing block in order to claim IPs block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" host="controller-1" 2019-11-04T19:30:09.674 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.673 [INFO][253910] ipam.go 825: Successfully claimed IPs: [fd00:206::a4ce:fec1:5423:e301/122] block=fd00:206::a4ce:fec1:5423:e300/122 handle="chain.936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" host="controller-1" 2019-11-04T19:30:09.674 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.674 [INFO][253910] ipam.go 405: Block 'fd00:206::a4ce:fec1:5423:e300/122' provided addresses: [fd00:206::a4ce:fec1:5423:e301/122] handle="chain.936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" host="controller-1" 2019-11-04T19:30:09.675 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.675 [INFO][253910] ipam.go 561: Auto-assigned 1 out of 1 IPv6s: [fd00:206::a4ce:fec1:5423:e301/122] handle="chain.936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" host="controller-1" 2019-11-04T19:30:09.675 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.675 [INFO][253910] ipam_plugin.go 232: Calico CNI IPAM assigned addresses IPv4=[] IPv6=[fd00:206::a4ce:fec1:5423:e301/122] ContainerID="936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" HandleID="chain.936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" Workload="controller--1-k8s-ceph--pools--audit--1572895800--74bml-eth0" 2019-11-04T19:30:09.675 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.675 [INFO][253910] ipam_plugin.go 258: IPAM Result ContainerID="936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" HandleID="chain.936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" Workload="controller--1-k8s-ceph--pools--audit--1572895800--74bml-eth0" result.IPs=[]*current.IPConfig{(*current.IPConfig)(0xc0004282a0)} 2019-11-04T19:30:09.676 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.676 [INFO][253883] k8s.go 361: Populated endpoint ContainerID="936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" Namespace="kube-system" Pod="ceph-pools-audit-1572895800-74bml" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895800--74bml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572895800--74bml-eth0", GenerateName:"ceph-pools-audit-1572895800-", Namespace:"kube-system", SelfLink:"", UID:"05ef54df-35e7-4218-8b7f-bd69bdbc6947", ResourceVersion:"8171719", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492603, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ceph-pools-audit", "app":"ceph-pools-audit", "controller-uid":"19c79902-36c8-402a-adf6-965e2f19026c", "job-name":"ceph-pools-audit-1572895800"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", ContainerID:"", Pod:"ceph-pools-audit-1572895800-74bml", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e301/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"cali2540c9589e3", MAC:"", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:30:09.676 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.676 [INFO][253883] k8s.go 362: Calico CNI using IPs: [fd00:206::a4ce:fec1:5423:e301/128] ContainerID="936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" Namespace="kube-system" Pod="ceph-pools-audit-1572895800-74bml" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895800--74bml-eth0" 2019-11-04T19:30:09.676 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.676 [INFO][253883] network_linux.go 76: Setting the host side veth name to cali2540c9589e3 ContainerID="936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" Namespace="kube-system" Pod="ceph-pools-audit-1572895800-74bml" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895800--74bml-eth0" 2019-11-04T19:30:09.679 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.679 [INFO][253883] network_linux.go 411: Disabling IPv6 forwarding ContainerID="936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" Namespace="kube-system" Pod="ceph-pools-audit-1572895800-74bml" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895800--74bml-eth0" 2019-11-04T19:30:09.719 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.719 [INFO][253883] k8s.go 388: Added Mac, interface name, and active container ID to endpoint ContainerID="936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" Namespace="kube-system" Pod="ceph-pools-audit-1572895800-74bml" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895800--74bml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"controller--1-k8s-ceph--pools--audit--1572895800--74bml-eth0", GenerateName:"ceph-pools-audit-1572895800-", Namespace:"kube-system", SelfLink:"", UID:"05ef54df-35e7-4218-8b7f-bd69bdbc6947", ResourceVersion:"8171719", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63708492603, loc:(*time.Location)(0x232eae0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"ceph-pools-audit", "controller-uid":"19c79902-36c8-402a-adf6-965e2f19026c", "job-name":"ceph-pools-audit-1572895800", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"ceph-pools-audit"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"controller-1", 
ContainerID:"936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00", Pod:"ceph-pools-audit-1572895800-74bml", Endpoint:"eth0", IPNetworks:[]string{"fd00:206::a4ce:fec1:5423:e301/128"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.ceph-pools-audit"}, InterfaceName:"cali2540c9589e3", MAC:"36:1a:f9:3b:ca:83", Ports:[]v3.EndpointPort(nil)}} 2019-11-04T19:30:09.721 controller-1 kubelet[88595]: info 2019-11-04 19:30:09.721 [INFO][253883] k8s.go 420: Wrote updated endpoint to datastore ContainerID="936b41d11a41e1bfb94f960389a1cc79de7b4ae4bb2b2c8c5e064f66e9034b00" Namespace="kube-system" Pod="ceph-pools-audit-1572895800-74bml" WorkloadEndpoint="controller--1-k8s-ceph--pools--audit--1572895800--74bml-eth0" 2019-11-04T19:30:09.771 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/05ef54df-35e7-4218-8b7f-bd69bdbc6947/volume-subpaths/ceph-pools-bin/ceph-pools-audit-ceph-store/0. 2019-11-04T19:30:09.838 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/05ef54df-35e7-4218-8b7f-bd69bdbc6947/volume-subpaths/ceph-pools-bin/ceph-pools-audit-ceph-store/0. 2019-11-04T19:30:09.882 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/05ef54df-35e7-4218-8b7f-bd69bdbc6947/volume-subpaths/ceph-etc/ceph-pools-audit-ceph-store/2. 2019-11-04T19:30:09.907 controller-1 systemd[1]: info Started Kubernetes transient mount for /var/lib/kubelet/pods/05ef54df-35e7-4218-8b7f-bd69bdbc6947/volume-subpaths/ceph-etc/ceph-pools-audit-ceph-store/2. 2019-11-04T19:30:09.953 controller-1 containerd[12214]: info time="2019-11-04T19:30:09.953763049Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7e168add0150be3ed79aacc4dd6ec8b6544b482230b6ef90074655bff81a16f1/shim.sock" debug=false pid=253977 2019-11-04T19:30:13.034 controller-1 systemd[1]: info Removed slice User Slice of root. 2019-11-04T19:30:13.000 controller-1 ntpd[87625]: info Listen normally on 32 cali2540c9589e3 fe80::ecee:eeff:feee:eeee UDP 123 2019-11-04T19:30:13.000 controller-1 ntpd[87625]: debug new interface(s) found: waking up resolver 2019-11-04T19:30:15.285 controller-1 collectd[12276]: info platform cpu usage plugin Usage: 5.9% (avg per cpu); cpus: 36, Platform: 4.6% (Base: 3.5, k8s-system: 1.1), k8s-addon: 1.2 2019-11-04T19:30:15.291 controller-1 collectd[12276]: info platform memory usage: Usage: 7.1%; Reserved: 125908.6 MiB, Platform: 8998.4 MiB (Base: 8323.1, k8s-system: 675.3), k8s-addon: 7370.2 2019-11-04T19:30:15.291 controller-1 collectd[12276]: info 4K memory usage: Anon: 13.1%, Anon: 16448.5 MiB, cgroup-rss: 16372.7 MiB, Avail: 109460.0 MiB, Total: 125908.6 MiB 2019-11-04T19:30:15.291 controller-1 collectd[12276]: info 4K numa memory usage: node0, Anon: 8.29%, Anon: 5250.0 MiB, Avail: 58076.7 MiB, Total: 63326.7 MiB 2019-11-04T19:30:15.291 controller-1 collectd[12276]: info 4K numa memory usage: node1, Anon: 17.65%, Anon: 11198.6 MiB, Avail: 52234.6 MiB, Total: 63433.1 MiB