local provider uses the wrong interface

Bug #1491592 reported by Matt Bruzek
This bug affects 3 people
Affects          Status        Importance  Assigned to  Milestone
juju-core        Fix Released  High        Unassigned
juju-core 1.24   Fix Released  High        Unassigned

Bug Description

A Juju user has reported a problem with the local kvm provider.

The error message appeared in the unit log:

2015-09-02 19:01:02 ERROR juju.worker runner.go:223 exited "uniter": ModeInstalling cs:~kubernetes/trusty/kubernetes-6: preparing operation "install cs:~kubernetes/trusty/kubernetes-6": failed to download charm "cs:~kubernetes/trusty/kubernetes-6" from ["https://172.17.42.1:17070/environment/cff5caa7-675a-4f0a-87cb-dbe698ec275a/charms?file=%2A&url=cs%3A~kubernetes%2Ftrusty%2Fkubernetes-6"]: cannot download

This particular charm adds a docker0 bridge, and it appears that the API address is using the address from that bridge. There is a related bug here: https://bugs.launchpad.net/juju-core/+bug/1416928. The fix for that bug appears to be scoped to the LXC provider only.

The customer reported this problem using juju-deployer when he was deploying a bundle. There was no reboot of any of his systems.

We need an additional fix for the local kvm provider. Please let me know if you need any more information.
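
For context, the failure mode described above can be sketched in a few lines: if address selection naively takes the first non-loopback interface, a docker0 bridge created by the charm can shadow eth0. This is a hypothetical illustration of the pitfall, not Juju's actual selection code:

```python
import ipaddress

def pick_api_address(interfaces):
    """Naively return the first non-loopback IPv4 address.

    Interface ordering decides the result, so a docker0 bridge
    created after deployment can win over eth0.
    """
    for name, addr in interfaces:
        if not ipaddress.ip_address(addr).is_loopback:
            return addr
    return None

# Addresses taken from this report; docker0 sorts before eth0 here.
interfaces = [
    ("docker0", "172.17.42.1"),
    ("eth0", "192.168.122.53"),
    ("lo", "127.0.0.1"),
]
print(pick_api_address(interfaces))  # the docker bridge address wins
```

With an address chosen that way in the agent's API address list, the charm download URL ends up pointing at 172.17.42.1, as in the error above.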

Revision history for this message
Matt Bruzek (mbruzek) wrote :

Additional log files from the customer that show the error "failed to download charm".

Marco Ceppi (marcoceppi)
tags: added: charmers
Revision history for this message
Charles Butler (lazypower) wrote :

@Gennadiy - I've riffed with dimiter about this issue briefly this morning, and we've come up with a possible work-around.

Can you try setting:

ignore-machine-addresses: true

in environments.yaml for your KVM local provider and try a rebootstrap/deploy?
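
For reference, that stanza would sit alongside the other local-provider keys in environments.yaml, roughly like this (a sketch assembled from the configs quoted later in this report):

```yaml
  local:
    type: local
    container: kvm
    default-series: trusty
    ignore-machine-addresses: true
```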

Revision history for this message
Gennadiy Dubina (hamsterksu) wrote :

@lazypower

I tried this: ignore-machine-addresses: true.

It didn't help me.

Revision history for this message
Charles Butler (lazypower) wrote :

That's unfortunate; thanks for giving it a go.

I'll huddle up with some core devs on Tuesday (Monday is a holiday in the US) and see if we can get additional eyes on this. Keep us apprised of any changes in the situation.

Curtis Hovey (sinzui)
tags: added: kvm
Changed in juju-core:
status: New → Triaged
importance: Undecided → High
milestone: none → 1.25-beta1
Changed in juju-core:
milestone: 1.25-beta1 → 1.25-beta2
Changed in juju-core:
assignee: nobody → Andrew McDermott (frobware)
Changed in juju-core:
status: Triaged → In Progress
Revision history for this message
Andrew McDermott (frobware) wrote :

Is it possible to get access to the bundle so that I can reproduce the problem locally?

Revision history for this message
Gennadiy Dubina (hamsterksu) wrote :

standard set of services for kubernetes cluster
demo:
  series: trusty
  services:
    kubernetes-master:
      charm: local:trusty/kubernetes-master
# charm: cs:~kubernetes/trusty/kubernetes-master-7
      expose: true
      options:
        version: "v1.0.3"
      constraints: "mem=2G"
    docker:
# charm: cs:~kubernetes/trusty/docker-0
      charm: local:trusty/docker-charm
      num_units: 1
      options:
        latest: true
        version: "1.6.2"
        aufs: true
    flannel-docker:
# charm: cs:~kubernetes/trusty/flannel-docker-2
      charm: local:trusty/flannel-docker-charm
    kubernetes:
      charm: cs:~kubernetes/trusty/kubernetes-6
    etcd:
      charm: cs:trusty/etcd-0
  relations:
# --- Kubernetes ---
  - - flannel-docker:network
    - docker:network
  - - flannel-docker:docker-host
    - docker:juju-info
  - - flannel-docker:db
    - etcd:client
  - - kubernetes:docker-host
    - docker:juju-info
  - - etcd:client
    - kubernetes:etcd
  - - etcd:client
    - kubernetes-master:etcd
  - - kubernetes-master:minions-api
    - kubernetes:api
# --- ---

You can find our local charms in the repo: https://github.com/taddemo2015/vas-charms-dev
There is also our demo bundle: https://github.com/taddemo2015/vas-charms-dev/blob/master/bundle-demo-kube.yaml
but it contains other services, like restcomm, and it requires a lot of resources.

Revision history for this message
Andrew McDermott (frobware) wrote :

@Gennadiy - I am unable to reproduce this.

I have been using the attached bundle.yaml via:

 $ juju-deployer -B -c bundle.yaml demo

but I do not see your "cannot download" error.

Note: I cloned your repo, and your 'demo' yaml definition refers to some local: charms that don't exist in the repo. For those I used the charm store. I did have to make a change to kubernetes-master, as the setup failed to install docker -- you can see errors in the logs for that.

$ git diff
diff --git a/charms/trusty/kubernetes-master/hooks/install.py b/charms/trusty/kubernetes-master/hooks/install.py
index 801fe8a..34157d1 100755
--- a/charms/trusty/kubernetes-master/hooks/install.py
+++ b/charms/trusty/kubernetes-master/hooks/install.py
@@ -68,7 +68,7 @@ def install_packages():
     """
     hookenv.log('Installing Debian packages')
     # Create the list of packages to install.
-    apt_packages = ['build-essential', 'git', 'make', 'nginx', 'python-pip']
+    apt_packages = ['docker.io', 'build-essential', 'git', 'make', 'nginx', 'python-pip']
     fetch.apt_install(fetch.filter_installed_packages(apt_packages))

Do you have a set of reproducible juju commands that I can use -- assuming I haven't already done something different from you?

And, when you say "local KVM provider" do you mean the local provider (i.e., LXC)?

Revision history for this message
Gennadiy Dubina (hamsterksu) wrote :

1. I have to update our git repo (to use git submodules) because some folders have a different remote repo.
But anyway, you can check them out manually:
local/flannel-docker-charm - https://github.com/chuckbutler/flannel-docker-charm
local/docker-charm - https://github.com/chuckbutler/docker-charm

2. The kubernetes charm from the central store doesn't work right now.
3. KVM containers are similar to LXC - https://jujucharms.com/docs/devel/config-KVM

Revision history for this message
Andrew McDermott (frobware) wrote :

@Gennadiy - thanks for the hints for enabling KVM containers; trying that now.

I was reading some of the docs and wondered if you had also done the following:

"You can go further and use the KVM guest as a hosting system for LXC containers. This is achieved in the manner in which Juju commands are invoked; no extra Juju configuration is required. What is required, however, is the creation of a network bridge on the KVM guest (LXC host) in order for the containers to have access to the external network (that of the LXC host, or KVM guest)."

Revision history for this message
Andrew McDermott (frobware) wrote :

@Gennadiy - I cannot reproduce your error.

I have forked your repo and added some changes:

 - submodules for docker and flannel-docker, and
 - `apt-get install docker.io' for kubernetes, because otherwise that charm fails.

Then:

 $ git clone https://github.com/frobware/vas-charms-dev.git -b lp1491592 --recursive

 $ cd vas-charms-dev/charms

 $ git submodule status
    5ddbb4ef6c1c9a797d11b5242c21d79268be1842 trusty/docker-charm (v0.1.6.2)
    538296ddc7ecbe4033de2041d3ac14217b274b59 trusty/flannel-docker-charm (v0.0.6)

 $ juju-deployer -v -B -c ../bundle-lp1491592.yaml

using a local/KVM environment:

  local:
     type: local
     container: kvm

then I see the following output, which completes without error. The subsequent output from `juju status' indicates no errors either.

 - What am I doing different?
 - Or what is different in your environment?
 - Could you try my steps to see if my repo works for you?

2015-09-22 17:18:07 Using runtime GoEnvironment on local
2015-09-22 17:18:07 Using deployment demo
2015-09-22 17:18:07 Starting deployment of demo
2015-09-22 17:18:07 Getting charms...
2015-09-22 17:18:07 Cache dir /home/aim/.juju/.deployer-store-cache/cs_~kubernetes_trusty_kubernetes-6
2015-09-22 17:18:07 Service: kubernetes-master has neither charm url or branch specified
2015-09-22 17:18:07 Service: flannel-docker-charm has neither charm url or branch specified
2015-09-22 17:18:07 Service: docker-charm has neither charm url or branch specified
2015-09-22 17:18:07 Cache dir /home/aim/.juju/.deployer-store-cache/cs_trusty_etcd-0
2015-09-22 17:18:07 Resolving configuration
2015-09-22 17:18:07 Service: kubernetes-master has neither charm url or branch specified
2015-09-22 17:18:07 Service: docker-charm has neither charm url or branch specified
2015-09-22 17:18:07 bootstrapping, this might take a while...
2015-09-22 17:18:42 Bootstrap complete
2015-09-22 17:18:42 Connecting to environment...
2015-09-22 17:18:42 Connected to environment
2015-09-22 17:18:42 Deploying services...
2015-09-22 17:18:42 <deployer.env.go.GoEnvironment object at 0x7f2d44ed1590>
2015-09-22 17:18:42 Service: docker-charm has neither charm url or branch specified
2015-09-22 17:18:42 Deploying service docker using local:trusty/docker
2015-09-22 17:18:47 Deploying service etcd using cs:trusty/etcd-0
2015-09-22 17:18:53 Service: flannel-docker-charm has neither charm url or branch specified
2015-09-22 17:18:53 Deploying service flannel-docker using local:trusty/flannel-docker
2015-09-22 17:18:57 Deploying service kubernetes using cs:~kubernetes/trusty/kubernetes-6
2015-09-22 17:18:59 Service: kubernetes-master has neither charm url or branch specified
2015-09-22 17:18:59 Deploying service kubernetes-master using local:trusty/kubernetes-master
2015-09-22 17:19:10 Adding units...
2015-09-22 17:19:10 Service 'docker' does not need any more units added.
2015-09-22 17:19:10 Service 'etcd' does not need any more units added.
2015-09-22 17:20:14 Service: flannel-docker-charm has neither charm url or branch specified
2015-09-22 17:20:14 Config specifies num units for subordinate: flannel-docker
2015-09-22 17:21:15 Config specifies num units for subordinate: kube...


Revision history for this message
Gennadiy Dubina (hamsterksu) wrote :

I will try it now.

Which version of juju do you use?

My juju version: 1.24.5-trusty-amd64

Revision history for this message
Gennadiy Dubina (hamsterksu) wrote :

The error is still here:

units:
      docker/0:
        workload-status:
          current: unknown
          since: 22 Sep 2015 23:47:27+03:00
        agent-status:
          current: idle
          since: 22 Sep 2015 23:47:30+03:00
          version: 1.24.5.1
        agent-state: started
        agent-version: 1.24.5.1
        machine: "1"
        public-address: 192.168.122.250
        subordinates:
          flannel-docker/0:
            workload-status:
              current: unknown
              message: Waiting for agent initialization to finish
              since: 23 Sep 2015 00:00:17+03:00
            agent-status:
              current: failed
              message: install local:trusty/flannel-docker-0
              since: 23 Sep 2015 00:02:42+03:00
              version: 1.24.5.1
            agent-state: started
            agent-version: 1.24.5.1
            public-address: 192.168.122.250
          kubernetes/0:
            workload-status:
              current: unknown
              message: Waiting for agent initialization to finish
              since: 23 Sep 2015 00:00:20+03:00
            agent-status:
              current: failed
              message: install cs:~kubernetes/trusty/kubernetes-6
              since: 23 Sep 2015 00:02:45+03:00
              version: 1.24.5.1
            agent-state: started
            agent-version: 1.24.5.1
            public-address: 192.168.122.250

Revision history for this message
Andrew McDermott (frobware) wrote :

Is this using my repo and commands? I was using 1.24.5.1 as well.

Revision history for this message
Gennadiy Dubina (hamsterksu) wrote :

Yes, I used your steps from the previous post. I will ask my friend to install it on another PC and return with the result tomorrow.

Revision history for this message
Gennadiy Dubina (hamsterksu) wrote :

@frobware
Did you configure the "KVM Guest Network Bridge" on your environment?

Revision history for this message
Andrew McDermott (frobware) wrote : Re: [Bug 1491592] Re: local provider uses the wrong interface

I did. The "aim-" machines are those from the juju-deployer:

$ virsh list --all
 Id Name State
----------------------------------------------------
 15 0-maas-controller0 running
 84 maas-node10 running
 87 maas-node1 running
 88 maas-node2 running
 101 aim-local-machine-1 running
 102 aim-local-machine-2 running
 103 aim-local-machine-3 running
 - jenkins shut off
 - maas-node11 shut off
 - maas-node12 shut off
 - maas-node13 shut off
 - maas-node14 shut off
 - maas-node15 shut off
 - maas-node16 shut off
 - maas-node3 shut off
 - maas-node4 shut off
 - maas-node5 shut off


Revision history for this message
Andrew McDermott (frobware) wrote :

Correction, I did not configure the bridge. Sorry for the confusion. I was
asking in an earlier comment whether you had done this but didn't see a
reply.


Revision history for this message
Gennadiy Dubina (hamsterksu) wrote :

No, I didn't configure any bridges manually.

Revision history for this message
Andrew McDermott (frobware) wrote :

So we have the same configuration? Agreed?


Revision history for this message
Gennadiy Dubina (hamsterksu) wrote :

I think so, yes.

Revision history for this message
Gennadiy Dubina (hamsterksu) wrote :

The possible reason for the issue is the order of network interfaces/bridges on the "docker" service machine.
Could you share the list of existing network interfaces from this virtual machine?

Revision history for this message
Gennadiy Dubina (hamsterksu) wrote :

Because the juju agent uses the wrong IP address (from the docker subnetwork) to download the charm.
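
The bad address is easy to classify: 172.17.42.1 sits in Docker's default docker0 subnet (172.17.0.0/16), while libvirt's default network is 192.168.122.0/24. A filtering pass along these lines (a hypothetical sketch, not the actual Juju fix) would drop it:

```python
import ipaddress

# Docker's default docker0 bridge subnet (an assumption based on
# Docker's stock configuration at the time of this report).
DOCKER_DEFAULT_SUBNET = ipaddress.ip_network("172.17.0.0/16")

def drop_docker_addresses(addresses):
    """Filter out addresses that fall inside the docker0 default subnet."""
    return [a for a in addresses
            if ipaddress.ip_address(a) not in DOCKER_DEFAULT_SUBNET]

print(drop_docker_addresses(["172.17.42.1", "192.168.122.53"]))
# only the libvirt-network address remains
```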

Revision history for this message
Andrew McDermott (frobware) wrote :

From:

 $ juju ssh docker/0
 $ ifconfig -a

ubuntu@aim-local-machine-1:~$ ifconfig
docker0 Link encap:Ethernet HWaddr 02:42:78:87:90:db
          inet addr:10.10.55.1 Bcast:0.0.0.0 Mask:255.255.255.0
          UP BROADCAST MULTICAST MTU:1500 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

eth0 Link encap:Ethernet HWaddr 52:54:00:8a:48:a7
          inet addr:192.168.122.74 Bcast:192.168.122.255 Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe8a:48a7/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:241106 errors:0 dropped:3 overruns:0 frame:0
          TX packets:171263 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:241534763 (241.5 MB) TX bytes:21004803 (21.0 MB)

flannel0 Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
          inet addr:10.10.55.0 P-t-P:10.10.55.0 Mask:255.255.0.0
          UP POINTOPOINT RUNNING NOARP MULTICAST MTU:1472 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

lo Link encap:Local Loopback
          inet addr:127.0.0.1 Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING MTU:65536 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

Revision history for this message
Gennadiy Dubina (hamsterksu) wrote :

You can find my dump below. I see only one difference: the network mask.

docker0 Link encap:Ethernet HWaddr 02:42:cf:f5:ec:ea
          inet addr:172.17.42.1 Bcast:0.0.0.0 Mask:255.255.0.0
          UP BROADCAST MULTICAST MTU:1500 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

eth0 Link encap:Ethernet HWaddr 52:54:00:ba:2b:46
          inet addr:192.168.122.53 Bcast:192.168.122.255 Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:feba:2b46/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:74264 errors:0 dropped:4 overruns:0 frame:0
          TX packets:56751 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:161688709 (161.6 MB) TX bytes:6428390 (6.4 MB)

lo Link encap:Local Loopback
          inet addr:127.0.0.1 Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING MTU:65536 Metric:1
          RX packets:486 errors:0 dropped:0 overruns:0 frame:0
          TX packets:486 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:24300 (24.3 KB) TX bytes:24300 (24.3 KB)

According to the log file, the agent uses the wrong IP address when docker is installed:

ubuntu@android-kernel-kvm-machine-1:~$ sudo less /var/log/juju/unit-flannel-docker-0.log
2015-09-23 09:36:05 INFO juju.cmd supercommand.go:37 running jujud [1.24.5.1-trusty-amd64 gc]
2015-09-23 09:36:05 DEBUG juju.agent agent.go:432 read agent config, format "1.18"
2015-09-23 09:36:05 INFO juju.jujud unit.go:135 unit agent unit-flannel-docker-0 start (1.24.5.1-trusty-amd64 [gc])
2015-09-23 09:36:05 INFO juju.network network.go:194 setting prefer-ipv6 to false
2015-09-23 09:36:05 INFO juju.worker runner.go:269 start "api"
2015-09-23 09:36:05 INFO juju.api apiclient.go:331 dialing "wss://192.168.122.1:17070/environment/01e99d65-b915-4a3b-8ceb-59
09ab261b5a/api"
2015-09-23 09:36:05 INFO juju.api apiclient.go:263 connection established to "wss://192.168.122.1:17070/environment/01e99d65-b915-4a3b-8ceb-5909ab261b5a/api"
2015-09-23 09:36:05 INFO juju.api apiclient.go:331 dialing "wss://192.168.122.1:17070/environment/01e99d65-b915-4a3b-8ceb-5909ab261b5a/api"
2015-09-23 09:36:05 INFO juju.api apiclient.go:263 connection established to "wss://192.168.122.1:17070/environment/01e99d65-b915-4a3b-8ceb-5909ab261b5a/api"
2015-09-23 09:36:06 INFO juju.api apiclient.go:331 dialing "wss://192.168.122.1:17070/environment/01e99d65-b915-4a3b-8ceb-5909ab261b5a/api"
2015-09-23 09:36:06 INFO juju.api apiclient.go:263 connection established to "wss://192.168.122.1:17070/environment/01e99d65-b915-4a3b-8ceb-5909ab261b5a/api"
2015-09-23 09:36:07 INFO juju.worker runner.go:269 start "proxyupdater"
2015-09-23 09:36:07 DEBUG juju.worker.proxyupdater proxyupdater.go:69 write system files: false
2015-09-23 09:36:07 DEBUG juju.worker runner.go:196 "proxyupdater" started
2015-09-23 09:36:07 INFO juju.worker runner.go:269 start "upgrader"
2015-09-23 09:36:07 INFO juju.worker runne...


Revision history for this message
Matt Bruzek (mbruzek) wrote :

@frobware

I believe I am seeing the same problem in a different way. I have the docker charm deployed via Juju on KVM. I shut down the KVM host, and when I restarted it I was unable to juju ssh to the docker host because the address is from the wrong interface. I am able to ssh to the other host.

$ juju status --format=tabular
[Services]
NAME STATUS EXPOSED CHARM
docker unknown false cs:trusty/docker-7
nagios-core maintenance false local:trusty/nagios-core-0

[Units]
ID WORKLOAD-STATE AGENT-STATE VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE
docker/0 unknown idle 1.25-alpha1.1 2 172.17.42.1
nagios-core/0 maintenance idle 1.25-alpha1.1 1 80/tcp 192.168.122.156 Downloading and configuring the Nagios software

[Machines]
ID STATE VERSION DNS INS-ID SERIES HARDWARE
0 started 1.25-alpha1.1 localhost localhost vivid
1 started 1.25-alpha1.1 192.168.122.156 mbruzek-kvm-machine-1 trusty arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
2 started 1.25-alpha1.1 172.17.42.1 mbruzek-kvm-machine-2 trusty arch=amd64 cpu-cores=1 mem=512M root-disk=8192M

Please notice that the docker system thinks its address is 172.17.42.1 and the machine is in an unknown state.

The steps to reproduce are as follows:
juju bootstrap -e kvm
juju deploy trusty/docker
[Verify you can ssh to docker]
[Restart the KVM host]
[Attempt to ssh to the system when it is back up]

I am using juju 1.25-alpha1-vivid-amd64. Please contact me on irc (mbruzek) if you have any further questions.

Revision history for this message
Matt Bruzek (mbruzek) wrote :

My .juju/environments.yaml file for the kvm provider:

  kvm:
    admin-secret: logmeinnow!
    default-series: trusty
    type: local
    container: kvm

I don't recall adding a bridge for this provider, but ifconfig reveals that I do have an lxcbr0:

lxcbr0 Link encap:Ethernet HWaddr aa:8b:a3:51:be:98
          inet addr:10.0.3.1 Bcast:0.0.0.0 Mask:255.255.255.0
          inet6 addr: fe80::a88b:a3ff:fe51:be98/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:67 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B) TX bytes:9438 (9.4 KB)

I do not remember creating this bridge, and see nothing in /etc/network/interfaces for this bridge.

Attaching the machine logs.

Revision history for this message
Gennadiy Dubina (hamsterksu) wrote :

Hi all,

Yesterday I installed a fresh Linux machine with juju and KVM,
and I can't reproduce the bug either :(. Everything works correctly.
I used the same commands to install the software.

Revision history for this message
Matt Bruzek (mbruzek) wrote :

I can still reproduce this error using the docker charm when a reboot is involved. Please let me know if you need any more information.

Revision history for this message
Andrew McDermott (frobware) wrote :

@mbruzek - It's not clear with the latest version of 1.25 whether this is still happening.

I have:

    kvm:
      admin-secret: logmeinnow!
      default-series: trusty
      type: local
      container: kvm

$ juju bootstrap -e kvm --upload-tools
Bootstrapping environment "kvm"
Starting new instance for initial state server
Building tools to upload (1.25-beta2.1-trusty-amd64)
[...]
Bootstrap complete

$ juju deploy trusty/docker

$ juju ssh 3 uname -a
Warning: Permanently added '192.168.122.212' (ECDSA) to the list of known hosts.
Linux aim-kvm-machine-3 3.13.0-65-generic #106-Ubuntu SMP Fri Oct 2 22:08:27 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
Connection to 192.168.122.212 closed.

$ juju status
environment: kvm
machines:
  "0":
    agent-state: started
    agent-version: 1.25-beta2.1
    dns-name: localhost
    instance-id: localhost
    series: trusty
    state-server-member-status: has-vote
  "3":
    agent-state: started
    agent-version: 1.25-beta2.1
    dns-name: 192.168.122.212
    instance-id: aim-kvm-machine-3
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
services:
  docker:
    charm: cs:trusty/docker-8
    exposed: false
    service-status:
      current: unknown
      since: 15 Oct 2015 14:56:40+01:00
    units:
      docker/0:
        workload-status:
          current: unknown
          since: 15 Oct 2015 14:56:40+01:00
        agent-status:
          current: idle
          since: 15 Oct 2015 15:06:43+01:00
          version: 1.25-beta2.1
        agent-state: started
        agent-version: 1.25-beta2.1
        machine: "3"
        public-address: 192.168.122.212

[Bounce the KVM machine hosting docker]

$ ssh ubuntu@192.168.122.212 sudo reboot

$ juju status

$ juju status
environment: kvm
machines:
  "0":
    agent-state: started
    agent-version: 1.25-beta2.1
    dns-name: localhost
    instance-id: localhost
    series: trusty
    state-server-member-status: has-vote
  "3":
    agent-state: started
    agent-version: 1.25-beta2.1
    dns-name: 192.168.122.212
    instance-id: aim-kvm-machine-3
    series: trusty
    hardware: arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
services:
  docker:
    charm: cs:trusty/docker-8
    exposed: false
    service-status:
      current: unknown
      since: 15 Oct 2015 14:56:40+01:00
    units:
      docker/0:
        workload-status:
          current: unknown
          since: 15 Oct 2015 14:56:40+01:00
        agent-status:
          current: idle
          since: 15 Oct 2015 15:14:54+01:00
          version: 1.25-beta2.1
        agent-state: started
        agent-version: 1.25-beta2.1
        machine: "3"
        public-address: 192.168.122.212

$ juju status --format=tabular
[Services]
NAME STATUS EXPOSED CHARM
docker unknown false cs:trusty/docker-8

[Units]
ID WORKLOAD-STATE AGENT-STATE VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE
docker/0 unknown idle 1.25-beta2.1 3 192.168.122.212

[Machines]
ID STATE VERSION DNS INS-ID SERIES HARDWARE
0 started 1.25-beta2.1 localhost ...


Revision history for this message
Matt Bruzek (mbruzek) wrote :

Andrew,

I was able to get my system working with KVM again, and I tested the binaries you shared with me. I believe the binaries you gave me resolved the problem I was seeing.

Thanks for working with me on this problem!

Revision history for this message
Andrew McDermott (frobware) wrote :

@mbruzek - this looks like it is also fixed in 1.24.7.

Please could you try out 1.24.7 - installation notes are here:

https://jujucharms.com/docs/devel/reference-releases

Current proposed version is 1.24.7.

Proposed releases may be promoted to stable releases after a period of evaluation. They contain bug fixes and recently stabilised features. They require evaluation from the community to verify that no regressions are present. A proposed version will not be promoted to stable if a regression is reported.

Ubuntu
sudo add-apt-repository ppa:juju/proposed
sudo apt-get update
sudo apt-get install juju-core

$ juju version
1.24.7-trusty-amd64

I tried reproducing the problem using the steps from #26, and after the reboot I now see the correct address for the container and can connect to and interact with it:

$ juju status --format=tabular
[Services]
NAME STATUS EXPOSED CHARM
docker unknown false cs:trusty/docker-8
ubuntu unknown false cs:trusty/ubuntu-4

[Units]
ID WORKLOAD-STATE AGENT-STATE VERSION MACHINE PORTS PUBLIC-ADDRESS MESSAGE
docker/0 unknown idle 1.24.7.1 1 192.168.122.149
ubuntu/0 unknown idle 1.24.7.1 2 192.168.122.145

[Machines]
ID STATE VERSION DNS INS-ID SERIES HARDWARE
0 started 1.24.7.1 localhost localhost trusty
1 started 1.24.7.1 192.168.122.149 aim-kvm-machine-1 trusty arch=amd64 cpu-cores=1 mem=512M root-disk=8192M
2 started 1.24.7.1 192.168.122.145 aim-kvm-machine-2 trusty arch=amd64 cpu-cores=1 mem=512M root-disk=8192M

$ juju ssh docker/0 ifconfig eth0
eth0 Link encap:Ethernet HWaddr 52:54:00:6f:7d:b8
          inet addr:192.168.122.149 Bcast:192.168.122.255 Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fe6f:7db8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
          RX packets:1414 errors:0 dropped:3 overruns:0 frame:0
          TX packets:1220 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:257838 (257.8 KB) TX bytes:142368 (142.3 KB)

Connection to 192.168.122.149 closed.

Revision history for this message
Matt Bruzek (mbruzek) wrote :

@frobware

I can also confirm that I do not see the problem with 1.24.7 of Juju. Thank you for your help on this issue.

Revision history for this message
Martin Packman (gz) wrote :

It appears this issue is much the same as bug 1435283 and was fixed as a side-effect of that change landing in 1.24.7 and the next 1.25 release.

Changed in juju-core:
assignee: Andrew McDermott (frobware) → nobody
milestone: 1.25-beta2 → none
status: In Progress → Fix Released