Juju -> LXD Cluster - Waiting for kubelet to start
Affects | Status | Importance | Assigned to | Milestone
---|---|---|---|---
AWS Integrator Charm | Fix Released | Undecided | Unassigned |
Bug Description
Using the instructions found here: https:/
I have built a five-machine LXD cluster using MAAS. I created a passthrough bridge, br0, on each node and am using the static IPs assigned by MAAS in place of the primary interface. After that I manually created the LXD cluster and joined the subsequent nodes.
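For context, the manual cluster formation was roughly the standard interactive flow; a sketch (these are not the actual node names or addresses):

```shell
# Sketch of the manual LXD cluster formation step (details are assumptions).
# On the first node: initialise LXD, answer "yes" to clustering, and bind
# it to the node's static br0 address.
lxd init

# On each of the remaining four nodes: run the same command, choose to join
# the existing cluster, and supply the first node's address and trust password.
lxd init

# Confirm all five members show as ONLINE:
lxc cluster list
```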
I then added the LXD cluster as a cloud in Juju, provisioned credentials, and then bootstrapped Juju against the LXD cluster. This all seems to work well.
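The Juju side of that was roughly the following (the cloud and credential names here are placeholders, not the ones I actually used):

```shell
# Sketch of registering the LXD cluster with Juju (names are placeholders).
juju add-cloud lxd-cluster        # interactive: choose "lxd", point at the cluster endpoint
juju add-credential lxd-cluster   # interactive: supply the cluster's trust credentials
juju bootstrap lxd-cluster        # stand up a controller on the cluster
juju clouds                       # verify the cloud is registered
```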
I then use the lxd-profile.yaml and the instructions listed above to modify the LXD profile from my MAAS/Juju machine. I deploy the Kubernetes cluster against the LXD cluster with juju deploy cs:bundle/
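I have not pasted the lxd-profile.yaml contents here; the profile commonly recommended at the time for running Kubernetes charms inside LXD containers looked like the following (reproduced as an assumption, not a copy of my file):

```shell
# Assumed profile contents; "default" is a placeholder for whichever
# profile the instructions have you edit.
lxc profile edit default <<'EOF'
config:
  boot.autostart: "true"
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
  raw.lxc: |
    lxc.apparmor.profile=unconfined
    lxc.mount.auto=proc:rw sys:rw
  security.nesting: "true"
  security.privileged: "true"
EOF
```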
I then apply the proxy config listed in the instructions to modify the worker network profile with juju config -m "$JUJU_
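In the CDK-on-LXD instructions of this era, that proxy config sets kube-proxy to userspace mode; a sketch, with an assumed variable name since the command above is cut off:

```shell
# $JUJU_MODEL is a hypothetical variable standing in for the truncated one
# above; the config key follows the published CDK-on-LXD instructions.
juju config -m "$JUJU_MODEL" kubernetes-worker \
    proxy-extra-args="proxy-mode=userspace"
```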
The cluster comes up and gets most of the way there, but then gets stuck here:
Every 2.0s: juju status --color lv-maas-01: Wed Jun 26 21:47:03 2019
Model Controller Cloud/Region Version SLA Timestamp
kubernetes lxd-cluster-default lxd-cluster/default 2.6.4 unsupported 21:47:03Z
App Version Status Scale Charm Store Rev OS Notes
easyrsa 3.0.1 active 1 easyrsa jujucharms 222 ubuntu
etcd 3.2.10 active 3 etcd jujucharms 397 ubuntu
flannel 0.10.0 active 5 flannel jujucharms 386 ubuntu
kubeapi-
kubernetes-master 1.13.7 waiting 2 kubernetes-master jujucharms 604 ubuntu
kubernetes-worker 1.13.7 waiting 3 kubernetes-worker jujucharms 472 ubuntu exposed
Unit Workload Agent Machine Public address Ports Message
easyrsa/0* active idle 0 <myip> Certificate Authority connected.
etcd/0* active idle 1 <myip> 2379/tcp Healthy with 3 known peers
etcd/1 active idle 2 <myip> 2379/tcp Healthy with 3 known peers
etcd/2 active idle 3 <myip> 2379/tcp Healthy with 3 known peers
kubeapi-
kubernetes-
flannel/2 active idle <myip> Flannel subnet 10.1.69.1/24
kubernetes-master/1 waiting idle 6 <myip> 6443/tcp Waiting for 7 kube-system pods to start
flannel/3 active idle <myip> Flannel subnet 10.1.7.1/24
kubernetes-worker/0 waiting idle 7 <myip> 80/tcp,443/tcp Waiting for kubelet to start.
flannel/1 active idle <myip> Flannel subnet 10.1.90.1/24
kubernetes-
flannel/0* active idle <myip> Flannel subnet 10.1.77.1/24
kubernetes-worker/2 waiting idle 9 <myip> 80/tcp,443/tcp Waiting for kubelet to start.
flannel/4 active idle <myip> Flannel subnet 10.1.97.1/24
Machine State DNS Inst id Series AZ Message
0 started <myip> juju-c4ad65-0 bionic Running
1 started <myip> juju-c4ad65-1 bionic Running
2 started <myip> juju-c4ad65-2 bionic Running
3 started <myip> juju-c4ad65-3 bionic Running
4 started <myip> juju-c4ad65-4 bionic Running
5 started <myip> juju-c4ad65-5 bionic Running
6 started <myip> juju-c4ad65-6 bionic Running
7 started <myip> juju-c4ad65-7 bionic Running
8 started <myip> juju-c4ad65-8 bionic Running
9 started <myip> juju-c4ad65-9 bionic Running
Please let me know what I should check to verify the config. Thank you.
Thanks for the bug report. This certainly should work. It would be great to get a cdk-field-agent run for this bug to help out. Failing that, could we get the output of `kubectl describe no` and `kubectl describe po -A`, run from the master unit?
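A sketch of those checks, plus a direct look at kubelet on one of the stuck workers (the service name assumes the snap-based CDK of this era):

```shell
# Gather node and pod state from the master unit:
juju ssh kubernetes-master/0 -- kubectl describe no
juju ssh kubernetes-master/0 -- kubectl describe po --all-namespaces

# On a worker stuck at "Waiting for kubelet to start", check the snap service:
juju ssh kubernetes-worker/0 -- systemctl status snap.kubelet.daemon
juju ssh kubernetes-worker/0 -- sudo journalctl -u snap.kubelet.daemon --no-pager
```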