2019-06-26 21:56:14
Joel Johnston
Description:
Using the instructions found here: https://github.com/charmed-kubernetes/bundle/wiki/Deploying-on-LXD
I have built a five-machine LXD cluster on top of MAAS. I've created a passthrough bridge, br0, on each node, moving the static IP assigned by MAAS off the primary interface and onto the bridge. After that I manually created the LXD cluster and joined the remaining nodes.
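For reference, the bridge on each node is defined with netplan, roughly like this (a minimal sketch; the NIC name eno1 and the addresses here are placeholders, not my exact values):

    # /etc/netplan/50-br0.yaml -- sketch only; substitute your NIC name
    # and the static IP that MAAS assigned to the node.
    network:
      version: 2
      ethernets:
        eno1:                            # physical NIC (assumed name)
          dhcp4: false
      bridges:
        br0:
          interfaces: [eno1]
          addresses: [10.192.14.201/24]  # node's MAAS-assigned static IP
          gateway4: 10.192.14.1          # assumed gateway
          nameservers:
            addresses: [10.192.14.1]

This is applied with netplan apply on each node before creating the LXD cluster.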
I then added the LXD cluster as a cloud in Juju, provisioned credentials, and then bootstrapped Juju against the LXD cluster. This all seems to work well.
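The sequence was roughly the following (a sketch; lxd-cluster is my cloud name and the endpoint shown in the comment is an example, not my real one):

    # Register the remote LXD cluster as a cloud, answering the
    # interactive prompts with type "lxd", name "lxd-cluster", and the
    # cluster's endpoint, e.g. https://10.192.14.201:8443.
    juju add-cloud
    juju add-credential lxd-cluster
    juju bootstrap lxd-cluster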
I then use the lxd-profile.yaml and the instructions listed above to modify the LXD profile on my MAAS/Juju machine, and deploy the Kubernetes cluster against the LXD cluster with juju deploy cs:bundle/canonical-kubernetes-592.
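For context, the profile from that wiki page grants the containers the privileges and kernel modules Kubernetes needs, roughly like this (reproduced from memory, so treat the exact keys as a sketch; juju-kubernetes is the profile Juju created for my kubernetes model):

    # lxd-profile.yaml -- privileged, nested containers with the kernel
    # modules Kubernetes needs (sketch of the wiki's profile).
    config:
      boot.autostart: "true"
      linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
      raw.lxc: |
        lxc.apparmor.profile=unconfined
        lxc.mount.auto=proc:rw sys:rw
      security.nesting: "true"
      security.privileged: "true"
    description: ""
    devices: {}

    # Applied with:
    lxc profile edit juju-kubernetes < lxd-profile.yaml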
I then apply the proxy config listed in the instructions, which switches kube-proxy on the workers to userspace mode, with juju config -m "$JUJU_CONTROLLER:$JUJU_MODEL" kubernetes-worker proxy-extra-args="proxy-mode=userspace"
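My understanding from the wiki (not something I've verified myself) is that this is needed because kube-proxy's default iptables mode doesn't work fully inside LXD containers. To confirm the option took effect, I read it back:

    # Verify the charm option was applied (controller/model names are mine).
    juju config -m "$JUJU_CONTROLLER:$JUJU_MODEL" kubernetes-worker proxy-extra-args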
The cluster comes up and gets most of the way, but then sticks here:
Every 2.0s: juju status --color                            lv-maas-01: Wed Jun 26 21:47:03 2019

Model       Controller           Cloud/Region         Version  SLA          Timestamp
kubernetes  lxd-cluster-default  lxd-cluster/default  2.6.4    unsupported  21:47:03Z

App                    Version  Status   Scale  Charm                  Store       Rev  OS      Notes
easyrsa                3.0.1    active       1  easyrsa                jujucharms  222  ubuntu
etcd                   3.2.10   active       3  etcd                   jujucharms  397  ubuntu
flannel                0.10.0   active       5  flannel                jujucharms  386  ubuntu
kubeapi-load-balancer  1.14.0   active       1  kubeapi-load-balancer  jujucharms  583  ubuntu  exposed
kubernetes-master      1.13.7   waiting      2  kubernetes-master      jujucharms  604  ubuntu
kubernetes-worker      1.13.7   waiting      3  kubernetes-worker      jujucharms  472  ubuntu  exposed

Unit                      Workload  Agent  Machine  Public address  Ports           Message
easyrsa/0*                active    idle   0        10.192.14.121                   Certificate Authority connected.
etcd/0*                   active    idle   1        10.192.14.122   2379/tcp        Healthy with 3 known peers
etcd/1                    active    idle   2        10.192.14.114   2379/tcp        Healthy with 3 known peers
etcd/2                    active    idle   3        10.192.14.125   2379/tcp        Healthy with 3 known peers
kubeapi-load-balancer/0*  active    idle   4        10.192.14.112   443/tcp         Loadbalancer ready.
kubernetes-master/0*      waiting   idle   5        10.192.14.123   6443/tcp        Waiting for 7 kube-system pods to start
  flannel/2               active    idle            10.192.14.123                   Flannel subnet 10.1.69.1/24
kubernetes-master/1       waiting   idle   6        10.192.14.103   6443/tcp        Waiting for 7 kube-system pods to start
  flannel/3               active    idle            10.192.14.103                   Flannel subnet 10.1.7.1/24
kubernetes-worker/0       waiting   idle   7        10.192.14.126   80/tcp,443/tcp  Waiting for kubelet to start.
  flannel/1               active    idle            10.192.14.126                   Flannel subnet 10.1.90.1/24
kubernetes-worker/1*      waiting   idle   8        10.192.14.106   80/tcp,443/tcp  Waiting for kubelet to start.
  flannel/0*              active    idle            10.192.14.106                   Flannel subnet 10.1.77.1/24
kubernetes-worker/2       waiting   idle   9        10.192.14.96    80/tcp,443/tcp  Waiting for kubelet to start.
  flannel/4               active    idle            10.192.14.96                    Flannel subnet 10.1.97.1/24

Machine  State    DNS            Inst id        Series  AZ  Message
0        started  10.192.14.121  juju-c4ad65-0  bionic      Running
1        started  10.192.14.122  juju-c4ad65-1  bionic      Running
2        started  10.192.14.114  juju-c4ad65-2  bionic      Running
3        started  10.192.14.125  juju-c4ad65-3  bionic      Running
4        started  10.192.14.112  juju-c4ad65-4  bionic      Running
5        started  10.192.14.123  juju-c4ad65-5  bionic      Running
6        started  10.192.14.103  juju-c4ad65-6  bionic      Running
7        started  10.192.14.126  juju-c4ad65-7  bionic      Running
8        started  10.192.14.106  juju-c4ad65-8  bionic      Running
9        started  10.192.14.96   juju-c4ad65-9  bionic      Running
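To dig into the "Waiting for 7 kube-system pods to start" and "Waiting for kubelet to start" messages, I've been poking at the units like this (a sketch; the kubeconfig path and snap service name are my best guess for these charm revisions):

    # List the kube-system pods the masters are waiting on.
    juju ssh kubernetes-master/0 -- kubectl --kubeconfig /home/ubuntu/config get pods -n kube-system
    # Check the kubelet snap service on a stuck worker.
    juju ssh kubernetes-worker/0 -- sudo systemctl status snap.kubelet.daemon
    # Replay the charm logs for one worker.
    juju debug-log --replay --include kubernetes-worker/0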
Please let me know what I should check to verify the config. Thank you.