deploy --to lxc:$foo broken in 1.17.5

Bug #1294777 reported by Tycho Andersen
This bug affects 2 people
Affects: juju-core
Status: Triaged
Importance: Critical
Assigned to: Andrew Wilkins

Bug Description

CloudFive:~$ juju status
environment: maas
machines:
  "0":
    agent-state: started
    agent-version: 1.17.5
    dns-name: centos1.master
    instance-id: /MAAS/api/1.0/nodes/node-4a74679c-adf5-11e3-924d-00ee4c62088d/
    series: precise
    containers:
      0/lxc/0:
        agent-state-info: '(error: error executing "lxc-start": command get_init_pid
          failed to receive response)'
        instance-id: pending
        series: precise
      0/lxc/1:
        agent-state-info: '(error: error executing "lxc-start": command get_init_pid
          failed to receive response)'
        instance-id: pending
        series: precise
      0/lxc/2:
        agent-state-info: '(error: error executing "lxc-start": command get_init_pid
          failed to receive response)'
        instance-id: pending
        series: precise
      0/lxc/3:
        agent-state-info: '(error: error executing "lxc-start": command get_init_pid
          failed to receive response)'
        instance-id: pending
        series: precise
      0/lxc/4:
        agent-state-info: '(error: error executing "lxc-start": command get_init_pid
          failed to receive response)'
        instance-id: pending
        series: precise
      0/lxc/5:
        agent-state-info: '(error: error executing "lxc-start": command get_init_pid
          failed to receive response)'
        instance-id: pending
        series: precise
services:
  glance:
    charm: cs:precise/glance-29
    exposed: false
    relations:
      cluster:
      - glance
      identity-service:
      - keystone
      image-service:
      - nova-cloud-controller
      shared-db:
      - mysql
    units:
      glance/0:
        agent-state: pending
        machine: 0/lxc/2
  keystone:
    charm: cs:precise/keystone-31
    exposed: false
    relations:
      cluster:
      - keystone
      identity-service:
      - glance
      - nova-cloud-controller
      - openstack-dashboard
      shared-db:
      - mysql
    units:
      keystone/0:
        agent-state: pending
        machine: 0/lxc/3
  mysql:
    charm: cs:precise/mysql-38
    exposed: false
    relations:
      cluster:
      - mysql
      shared-db:
      - glance
      - keystone
      - nova-cloud-controller
    units:
      mysql/0:
        agent-state: pending
        machine: 0/lxc/4
  nova-cloud-controller:
    charm: cs:precise/nova-cloud-controller-31
    exposed: false
    relations:
      amqp:
      - rabbitmq-server
      cluster:
      - nova-cloud-controller
      identity-service:
      - keystone
      image-service:
      - glance
      shared-db:
      - mysql
    units:
      nova-cloud-controller/0:
        agent-state: pending
        machine: 0/lxc/0
  openstack-dashboard:
    charm: cs:precise/openstack-dashboard-15
    exposed: false
    relations:
      cluster:
      - openstack-dashboard
      identity-service:
      - keystone
    units:
      openstack-dashboard/0:
        agent-state: pending
        machine: 0/lxc/1
  rabbitmq-server:
    charm: cs:precise/rabbitmq-server-20
    exposed: false
    relations:
      amqp:
      - nova-cloud-controller
      cluster:
      - rabbitmq-server
    units:
      rabbitmq-server/0:
        agent-state: pending
        machine: 0/lxc/5
CloudFive:~$ juju ssh 0
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the RSA key sent by the remote host is
9b:ad:62:6c:1c:86:79:7a:f5:c7:3f:ee:c6:df:42:04.
Please contact your system administrator.
Add correct host key in /home/ubuntu/.ssh/known_hosts to get rid of this message.
Offending RSA key in /home/ubuntu/.ssh/known_hosts:1
  remove with: ssh-keygen -f "/home/ubuntu/.ssh/known_hosts" -R centos1.master
Keyboard-interactive authentication is disabled to avoid man-in-the-middle attacks.
Welcome to Ubuntu 12.04.4 LTS (GNU/Linux 3.2.0-54-generic x86_64)

 * Documentation: https://help.ubuntu.com/

  System information as of Wed Mar 19 17:47:36 UTC 2014

  System load: 0.02 Processes: 103
  Usage of /: 5.2% of 117.37GB Users logged in: 0
  Memory usage: 5% IP address for eth0: 10.0.100.114
  Swap usage: 0% IP address for lxcbr0: 10.0.3.1

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

  Use Juju to deploy your cloud instances and workloads:
    https://juju.ubuntu.com/#cloud-precise

*** System restart required ***
Last login: Wed Mar 19 17:42:40 2014 from 10-0-100-1.master
ubuntu@centos1:~$ cd /var/log/juju/
ubuntu@centos1:/var/log/juju$ ls
all-machines.log ca-cert.pem machine-0.log rsyslog-cert.pem rsyslog-key.pem
ubuntu@centos1:/var/log/juju$ cat all-machines.log
machine-0: 2014-03-19 17:38:01 INFO juju.cmd.jujud machine.go:121 machine agent machine-0 start (1.17.5-precise-amd64)
machine-0: 2014-03-19 17:38:01 DEBUG juju.agent agent.go:318 read agent config, format "1.18"
machine-0: 2014-03-19 17:38:01 INFO juju.cmd.jujud machine.go:149 Starting StateWorker for machine-0
machine-0: 2014-03-19 17:38:01 INFO juju runner.go:262 worker: start "state"
machine-0: 2014-03-19 17:38:01 INFO juju.state open.go:79 opening state; mongo addresses: ["localhost:37017"]; entity "machine-0"
machine-0: 2014-03-19 17:38:01 INFO juju runner.go:262 worker: start "api"
machine-0: 2014-03-19 17:38:01 INFO juju apiclient.go:113 state/api: dialing "wss://localhost:17070/"
machine-0: 2014-03-19 17:38:01 INFO juju runner.go:262 worker: start "termination"
machine-0: 2014-03-19 17:38:01 ERROR juju apiclient.go:118 state/api: websocket.Dial wss://localhost:17070/: dial tcp 127.0.0.1:17070: connection refused
machine-0: 2014-03-19 17:38:01 ERROR juju runner.go:220 worker: exited "api": websocket.Dial wss://localhost:17070/: dial tcp 127.0.0.1:17070: connection refused
machine-0: 2014-03-19 17:38:01 INFO juju runner.go:254 worker: restarting "api" in 3s
machine-0: 2014-03-19 17:38:01 INFO juju.state open.go:117 connection established
machine-0: 2014-03-19 17:38:01 INFO juju runner.go:262 worker: start "instancepoller"
machine-0: 2014-03-19 17:38:01 INFO juju runner.go:262 worker: start "apiserver"
machine-0: 2014-03-19 17:38:01 INFO juju.state.apiserver apiserver.go:43 listening on "[::]:17070"
machine-0: 2014-03-19 17:38:01 INFO juju runner.go:262 worker: start "cleaner"
machine-0: 2014-03-19 17:38:01 INFO juju runner.go:262 worker: start "resumer"
machine-0: 2014-03-19 17:38:01 INFO juju runner.go:262 worker: start "minunitsworker"
machine-0: 2014-03-19 17:38:04 INFO juju runner.go:262 worker: start "api"
machine-0: 2014-03-19 17:38:04 INFO juju apiclient.go:113 state/api: dialing "wss://localhost:17070/"
machine-0: 2014-03-19 17:38:04 INFO juju.state.apiserver apiserver.go:131 [1] API connection from 127.0.0.1:35233
machine-0: 2014-03-19 17:38:04 INFO juju apiclient.go:123 state/api: connection established
machine-0: 2014-03-19 17:38:04 DEBUG juju.state.apiserver apiserver.go:120 <- [1] <unknown> {"RequestId":1,"Type":"Admin","Request":"Login","Params":{"AuthTag":"machine-0","Password":"6BQu9WKCCBpjqphZhHli9GNr","Nonce":"user-admin:bootstrap"}}
machine-0: 2014-03-19 17:38:04 DEBUG juju.state.apiserver apiserver.go:127 -> [1] machine-0 11.109173ms {"RequestId":1,"Response":{}} Admin[""].Login
machine-0: 2014-03-19 17:38:04 DEBUG juju.state.apiserver apiserver.go:120 <- [1] machine-0 {"RequestId":2,"Type":"Agent","Request":"GetEntities","Params":{"Entities":[{"Tag":"machine-0"}]}}
machine-0: 2014-03-19 17:38:04 DEBUG juju.state.apiserver apiserver.go:127 -> [1] machine-0 906.623us {"RequestId":2,"Response":{"Entities":[{"Life":"alive","Jobs":["JobManageEnviron","JobHostUnits"],"ContainerType":"","Error":null}]}} Agent[""].GetEntities
machine-0: 2014-03-19 17:38:04 INFO juju runner.go:262 worker: start "upgrader"
machine-0: 2014-03-19 17:38:04 INFO juju runner.go:262 worker: start "upgrade-steps"
machine-0: 2014-03-19 17:38:04 INFO juju runner.go:262 worker: start "machiner"
machine-0: 2014-03-19 17:38:04 INFO juju.cmd.jujud machine.go:449 Upgrade to 1.17.5-precise-amd64 already completed.
machine-0: 2014-03-19 17:38:04 INFO juju.cmd.jujud machine.go:436 Upgrade to 1.17.5-precise-amd64 completed.
machine-0: 2014-03-19 17:38:04 INFO juju runner.go:262 worker: start "logger"
machine-0: 2014-03-19 17:38:04 DEBUG juju.worker.logger logger.go:35 initial log config: "<root>=DEBUG"
machine-0: 2014-03-19 17:38:04 INFO juju runner.go:262 worker: start "machineenvironmentworker"
machine-0: 2014-03-19 17:38:04 INFO juju runner.go:262 worker: start "rsyslog"
machine-0: 2014-03-19 17:38:04 DEBUG juju.worker.logger logger.go:60 logger setup
machine-0: 2014-03-19 17:38:04 DEBUG juju.worker.machineenvironment machineenvironmentworker.go:70 write system files: true
machine-0: 2014-03-19 17:38:04 INFO juju runner.go:262 worker: start "authenticationworker"
machine-0: 2014-03-19 17:38:04 DEBUG juju.state.apiserver apiserver.go:120 <- [1] machine-0 {"RequestId":4,"Type":"Upgrader","Request":"SetTools","Params":{"AgentTools":[{"Tag":"machine-0","Tools":{"Version":"1.17.5-precise-amd64"}}]}}
machine-0: 2014-03-19 17:38:04 DEBUG juju.state.apiserver apiserver.go:120 <- [1] machine-0 {"RequestId":5,"Type":"Machiner","Request":"Life","Params":{"Entities":[{"Tag":"machine-0"}]}}
machine-0: 2014-03-19 17:38:04 DEBUG juju.state.apiserver apiserver.go:120 <- [1] machine-0 {"RequestId":6,"Type":"Logger","Request":"LoggingConfig","Params":{"Entities":[{"Tag":"machine-0"}]}}
machine-0: 2014-03-19 17:38:04 DEBUG juju.state.apiserver apiserver.go:120 <- [1] machine-0 {"RequestId":7,"Type":"Environment","Request":"EnvironConfig","Params":{}}
machine-0: 2014-03-19 17:38:04 DEBUG juju.state.apiserver apiserver.go:120 <- [1] machine-0 {"RequestId":8,"Type":"KeyUpdater","Request":"AuthorisedKeys","Params":{"Entities":[{"Tag":"machine-0"}]}}
machine-0: 2014-03-19 17:38:04 DEBUG juju.container.kvm kvm.go:51 kvm-ok output:
machine-0: INFO: /dev/kvm does not exist
machine-0: HINT: sudo modprobe kvm_intel
machine-0: INFO: Your CPU supports KVM extensions
machine-0: KVM acceleration can be used
machine-0: 2014-03-19 17:38:04 DEBUG juju.state.apiserver apiserver.go:127 -> [1] machine-0 251.215887ms {"RequestId":5,"Response":{"Results":[{"Life":"alive","Error":null}]}} Machiner[""].Life
machine-0: 2014-03-19 17:38:04 DEBUG juju.state.apiserver apiserver.go:127 -> [1] machine-0 251.396858ms {"RequestId":6,"Response":{"Results":[{"Error":null,"Result":"\u003croot\u003e=WARNING;unit=DEBUG"}]}} Logger[""].LoggingConfig
machine-0: 2014-03-19 17:38:04 DEBUG juju.state.apiserver apiserver.go:120 <- [1] machine-0 {"RequestId":9,"Type":"Rsyslog","Request":"SetRsyslogCert","Params":{"CACert":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNYVENDQWNpZ0F3SUJBZ0lCQURBTEJna3Foa2lHOXcwQkFRVXdSVEVOTUFzR0ExVUVDaE1FYW5WcWRURTAKTURJR0ExVUVBd3dyYW5WcWRTMW5aVzVsY21GMFpXUWdRMEVnWm05eUlHVnVkbWx5YjI1dFpXNTBJQ0p5YzNsegpiRzluSWpBZUZ3MHhOREF6TVRreE56TXpNRFJhRncweU5EQXpNVGt4TnpNNE1EUmFNRVV4RFRBTEJnTlZCQW9UCkJHcDFhblV4TkRBeUJnTlZCQU1NSzJwMWFuVXRaMlZ1WlhKaGRHVmtJRU5CSUdadmNpQmxiblpwY205dWJXVnUKZENBaWNuTjVjMnh2WnlJd2daMHdDd1lKS29aSWh2Y05BUUVCQTRHTkFEQ0JpUUtCZ1FERUtVcy9tTjRrY21yVAovK1RNTTI5VS83dURSTlRHcEo1dzA5MmdmVXZ1S0s0ZFdiUGFZNXRZM1o1Nm1Yc0p1RmlqVERITlFYMW9yZHdJCnFZS2xCYktoWTRidHI1aTlEYUdDTktEbHlWTzRoNTRiMTRPamZseWkzVG5DSElQRGkvdnR2QW9nY0xkenhwMWIKK0hFelFlWFVCcEczRi9vbXEweHB0UlhqOVZYalBRSURBUUFCbzJNd1lUQU9CZ05WSFE4QkFmOEVCQU1DQUtRdwpEd1lEVlIwVEFRSC9CQVV3QXdFQi96QWRCZ05WSFE0RUZnUVVjTVNBby90eUZNOXZZalRXVFRnSmlseWF1VE13Ckh3WURWUjBqQkJnd0ZvQVVjTVNBby90eUZNOXZZalRXVFRnSmlseWF1VE13Q3dZSktvWklodmNOQVFFRkE0R0IKQUJ3blFJSU1RWHczdDZtT1lDOWtjR05aQjd0QjZsOGZ4TnRMc1QwWHJNcXZCekJoOGZ6S1JQOG5jVXk4YUNXQQpReDU2MytqaVhmVG1WUHV4ejBPTGN6M1U4RXpuV1pETUxqVnBEeW1mSnl1eElSRUM3QmxNamQ2akdrd0x1S1laCm1waExUczNwM0FhRDBNWHkxWU14ckI2QzVQWHJNUGswNWNlRWxzeHBCZFFNCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"}}
machine-0: 2014-03-19 17:38:04 DEBUG juju.state.apiserver apiserver.go:120 <- [1] machine-0 {"RequestId":10,"Type":"Provisioner","Request":"Life","Params":{"Entities":[{"Tag":"machine-0"}]}}
machine-0: 2014-03-19 17:38:04 INFO juju.worker.machiner machiner.go:88 setting addresses for machine-0 to ["10.0.100.114" "fe80::eea8:6bff:fef8:d870"]
machine-0: 2014-03-19 17:38:04 DEBUG juju.worker.logger logger.go:45 reconfiguring logging from "<root>=DEBUG" to "<root>=WARNING;unit=DEBUG"
machine-0: 2014-03-19 17:38:04 ERROR juju runner.go:220 worker: exited "environ-provisioner": no state server machines with addresses found
machine-0: 2014-03-19 17:43:10 ERROR juju.container.lxc lxc.go:211 container failed to start: error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:43:10 ERROR juju.provisioner.lxc lxc-broker.go:87 failed to start container: error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:43:10 ERROR juju.provisioner provisioner_task.go:407 cannot start instance for machine "0/lxc/0": error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:43:23 ERROR juju.container.lxc lxc.go:211 container failed to start: error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:43:23 ERROR juju.provisioner.lxc lxc-broker.go:87 failed to start container: error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:43:23 ERROR juju.provisioner provisioner_task.go:407 cannot start instance for machine "0/lxc/1": error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:43:36 ERROR juju.container.lxc lxc.go:211 container failed to start: error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:43:36 ERROR juju.provisioner.lxc lxc-broker.go:87 failed to start container: error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:43:36 ERROR juju.provisioner provisioner_task.go:407 cannot start instance for machine "0/lxc/2": error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:43:48 ERROR juju.container.lxc lxc.go:211 container failed to start: error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:43:48 ERROR juju.provisioner.lxc lxc-broker.go:87 failed to start container: error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:43:48 ERROR juju.provisioner provisioner_task.go:407 cannot start instance for machine "0/lxc/3": error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:44:01 ERROR juju.container.lxc lxc.go:211 container failed to start: error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:44:01 ERROR juju.provisioner.lxc lxc-broker.go:87 failed to start container: error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:44:01 ERROR juju.provisioner provisioner_task.go:407 cannot start instance for machine "0/lxc/4": error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:44:14 ERROR juju.container.lxc lxc.go:211 container failed to start: error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:44:14 ERROR juju.provisioner.lxc lxc-broker.go:87 failed to start container: error executing "lxc-start": command get_init_pid failed to receive response
machine-0: 2014-03-19 17:44:14 ERROR juju.provisioner provisioner_task.go:407 cannot start instance for machine "0/lxc/5": error executing "lxc-start": command get_init_pid failed to receive response
ubuntu@centos1:/var/log/juju$ dpkg -l | grep cgroup
ii cgroup-lite 1.1.5 Light-weight package to set up cgroups at system boot

Tycho Andersen (tycho-s)
tags: added: cloud-installer
Curtis Hovey (sinzui)
tags: added: deploy local-provider lxc
tags: added: regression
Changed in juju-core:
status: New → Triaged
importance: Undecided → High
milestone: none → 1.17.6
Revision history for this message
Curtis Hovey (sinzui) wrote :

This may relate to the change made to address bug 1271144. Tycho was testing the proposed fix when he encountered this bug.

Changed in juju-core:
importance: High → Critical
tags: added: landscape
Revision history for this message
Tim Penhey (thumper) wrote :

Can we find out what version of lxc was installed?

Revision history for this message
Tim Penhey (thumper) wrote :

Additional info from "sudo lxc-ls --fancy" would also help here.

Revision history for this message
Adam Collard (adam-collard) wrote :

This looks like a duplicate of LP: #1271144

We can see from the landscape-sysinfo output

" Memory usage: 5% IP address for eth0: 10.0.100.114
  Swap usage: 0% IP address for lxcbr0: 10.0.3.1
"

that you have eth0 and lxcbr0, but no br0. Without br0, lxc-start fails, which then gives that wonderful error message :)
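
For readers hitting the same symptom: on Ubuntu 12.04, the usual way to get a br0 for Juju's LXC containers was to bridge the physical interface in /etc/network/interfaces. The fragment below is a generic illustration only, not taken from this report's machine; interface names and the DHCP choice are assumptions and must match your own host.

```text
# /etc/network/interfaces (illustrative sketch, not from this bug report)
# Replace the plain eth0 stanza with a bridge that enslaves eth0.
auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp        # or static, matching your MAAS network
    bridge_ports eth0      # enslave the physical NIC to the bridge
    bridge_stp off
    bridge_fd 0
```

After applying a change like this (and restarting networking), the host address moves to br0 and containers can attach to it instead of the NAT-only lxcbr0.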

William Reade (fwereade)
Changed in juju-core:
assignee: nobody → Andrew Wilkins (axwalk)
Revision history for this message
Martin Packman (gz) wrote :

Tycho, we think this is fixed by the change that just landed in trunk; please reopen this bug if you find otherwise.
