Public node is not added to k8s cluster, kubelet error

Bug #1501495 reported by Gangadhar Sunkara
This bug affects 1 person
Affects: murano-apps
Status: Fix Committed
Importance: High
Assigned to: Unassigned
Milestone: newton-1

Bug Description

We have deployed a Murano K8s cluster, but when we tried to add a public node (newkube) to the cluster, it failed.

Env:
Murano: Kilo
K8S: v0.15.0
OS: ubuntu 14.04

Logs on the newkube:
etcd appears to be running fine:
#/opt/bin/etcd --name newkube --initial-cluster-state existing --initial-cluster newkube=http://172.16.2.70:7001,kube-1=http://10.0.3.3:7001,kube-2=http://10.0.3.4:7001,gateway-1=http://10.0.3.5:7001 --data-dir /var/lib/etcd --snapshot-count 1000 --listen-peer-urls http://172.16.2.70:7001,http://127.0.0.1:7001 --listen-client-urls http://172.16.2.70:4001,http://127.0.0.1:4001 --initial-advertise-peer-urls http://172.16.2.70:7001 --advertise-client-urls http://172.16.2.70:4001,http://127.0.0.1:4001
...
2015/09/30 12:44:18 raft: 80b6a31bdcfc7ee [logterm: 0, index: 3107] rejected msgApp [logterm: 2, index: 3107] from 9f421141ec11f26
2015/09/30 12:44:18 raft.node: 80b6a31bdcfc7ee elected leader 9f421141ec11f26 at term 2
2015/09/30 12:44:18 raft: 80b6a31bdcfc7ee [commit: 0, lastindex: 0, lastterm: 0] starts to restore snapshot [index: 3003, term: 2]
2015/09/30 12:44:18 raftlog: log [committed=1, applied=0, unstable.offset=0, len(unstable.Entries)=0] starts to restore snapshot [index: 3003, term: 2]
2015/09/30 12:44:18 raft: 80b6a31bdcfc7ee restored progress of 9f421141ec11f26 [next = 3004, match = 0, wait = 0]
2015/09/30 12:44:18 raft: 80b6a31bdcfc7ee restored progress of 23e1b45fe7e6d2b2 [next = 3004, match = 0, wait = 0]
2015/09/30 12:44:18 raft: 80b6a31bdcfc7ee restored progress of 9ae7fe16648ec0bd [next = 3004, match = 0, wait = 0]
2015/09/30 12:44:18 raft: 80b6a31bdcfc7ee [commit: 0] restored snapshot [index: 3003, term: 2]
2015/09/30 12:44:18 etcdserver: saved incoming snapshot at index 3003
2015/09/30 12:44:18 rafthttp: starting client stream to 9f421141ec11f26 at term 2
2015/09/30 12:44:18 etcdserver: recovered from incoming snapshot at index 3003
2015/09/30 12:44:19 raft: 80b6a31bdcfc7ee [commit: 3003] ignored snapshot [index: 3003, term: 2]
2015/09/30 12:44:20 etcdserver: added local member 80b6a31bdcfc7ee [http://172.16.2.70:7001] to cluster 8cd31bc587bd058e
2015/09/30 12:44:20 etcdserver: published {Name:newkube ClientURLs:[http://127.0.0.1:4001 http://172.16.2.70:4001]} to cluster 8cd31bc587bd058e
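
Not in the original report, but as a sanity check the new member can be confirmed via the cluster membership API; the port below assumes the default client URL from the command above, and the endpoints/flags are the etcd 2.x ones, which may differ slightly from the exact build shipped with this Murano app:

# list cluster members via the v2 members API (run on newkube or any existing member)
curl http://127.0.0.1:4001/v2/members
# roughly equivalent with etcdctl, if the binary is available on the node
etcdctl --peers http://127.0.0.1:4001 member list

newkube (80b6a31bdcfc7ee) should appear alongside kube-1, kube-2 and gateway-1; the log above suggests it does, since the local member was added and published to cluster 8cd31bc587bd058e.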

kube-proxy is running fine:
# /opt/bin/kube-proxy --master=http://10.0.3.3:8080
I0930 13:14:39.314741 12753 proxier.go:336] Setting Proxy IP to 10.20.0.77
I0930 13:14:39.316154 12753 proxier.go:341] Initializing iptables
I0930 13:14:39.345300 12753 server.go:98] Using API calls to get config http://10.0.3.3:8080
I0930 13:14:39.361592 12753 proxier.go:595] Opened iptables from-containers portal for service "default/kubernetes:" on TCP 11.1.0.2:443
I0930 13:14:39.367831 12753 proxier.go:606] Opened iptables from-host portal for service "default/kubernetes:" on TCP 11.1.0.2:443
I0930 13:14:39.374220 12753 proxier.go:595] Opened iptables from-containers portal for service "default/kubernetes-ro:" on TCP 11.1.0.1:80
I0930 13:14:39.382113 12753 proxier.go:606] Opened iptables from-host portal for service "default/kubernetes-ro:" on TCP 11.1.0.1:80
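
As an aside (not part of the report), the portal rules kube-proxy claims to have opened can be verified directly in the NAT table; the chain names vary between Kubernetes versions, so filtering on the portal IPs logged above is the safer check:

# the portal rules for the kubernetes and kubernetes-ro services should show up here
iptables -t nat -L -n | grep -E '11\.1\.0\.[12]'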

Running kubelet, however, produces errors:
# /opt/bin/kubelet --address=172.16.2.70 --port=10250 --hostname_override=172.16.2.70 --api_servers=10.0.3.3:8080
...
I0930 12:58:14.899334 1801 plugins.go:56] Registering credential provider: .dockercfg
I0930 12:58:14.957340 1801 status_manager.go:56] Starting to sync pod status with apiserver
I0930 12:58:14.957475 1801 kubelet.go:1606] Starting kubelet main sync loop.
I0930 12:58:14.958306 1801 kubelet.go:560] Starting node status updates
E0930 12:58:15.101561 1801 kubelet.go:1735] error updating node status, will retry: error getting node "172.16.2.70": minion "172.16.2.70" not found
E0930 12:58:15.127706 1801 kubelet.go:1735] error updating node status, will retry: error getting node "172.16.2.70": minion "172.16.2.70" not found
E0930 12:58:15.142806 1801 kubelet.go:1735] error updating node status, will retry: error getting node "172.16.2.70": minion "172.16.2.70" not found
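
The error means the apiserver has no node (minion) object registered for 172.16.2.70, so the kubelet cannot update a status for it. A hypothetical way to confirm this against the apiserver used above (not from the report; the resource was still called "minions" in the older API versions used by K8s v0.15.0) would be:

# list the nodes the apiserver knows about; newkube/172.16.2.70 is presumably missing
kubectl -s http://10.0.3.3:8080 get nodes
# or query the pre-1.0 REST API directly
curl http://10.0.3.3:8080/api/v1beta1/minions

If the node is indeed missing, the object has to be created on the apiserver side, or node auto-registration enabled (which later Kubernetes releases do by default), which is consistent with the advice below to move to a newer K8s version.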

When we contacted Google to help resolve the kubelet start issue, they asked us to update K8s to a newer version.

Tags: k8s
description: updated
Changed in murano-apps:
importance: Undecided → High
milestone: none → mitaka-1
Revision history for this message
Serg Melikyan (smelikyan) wrote :
Changed in murano-apps:
status: New → Incomplete
Changed in murano-apps:
milestone: mitaka-1 → mitaka-2
Changed in murano-apps:
milestone: mitaka-2 → mitaka-3
Changed in murano-apps:
milestone: mitaka-3 → mitaka-rc1
Changed in murano-apps:
milestone: mitaka-rc1 → newton-1
omar (oshykhkerimov)
Changed in murano-apps:
assignee: nobody → omar (oshykhkerimov)
omar (oshykhkerimov)
Changed in murano-apps:
assignee: omar (oshykhkerimov) → nobody
tags: added: k8s
Revision history for this message
Sergey Kraynev (skraynev) wrote :

Closing the bug after 1 year without a response. It looks like the issue has gone away. Please re-open the bug if you encounter it again.

Changed in murano-apps:
status: Incomplete → Fix Committed