container addressability Can't juju ssh to lxd-placed container

Bug #1577638 reported by Casey Marshall on 2016-05-03
This bug affects 9 people

Bug Description

If I deploy into a container of a machine, I can't `juju ssh` into it.

Steps to reproduce:

1. Deploy some stuff. In my case, I deployed ceph on openstack.
2. juju deploy ubuntu --series trusty --to lxc:3

Then I get:

ceph/1 unknown idle 2.0-beta7.1 1
ceph/2 unknown idle 2.0-beta7.1 2
ceph/3 unknown idle 2.0-beta7.1 3
demo-client/2 unknown idle 2.0-beta7.1 3/lxd/0

3. Can't ssh in to the container

$ juju ssh demo-client/2
ssh: connect to host port 22: Connection timed out

I *think* I remember being able to do this with lxc: placed units in juju 1.x.

See this to track the recent form of the regression

James Tunnicliffe (dooferlad) wrote :

Just deployed OpenStack and "juju ssh ceph-mon/0" worked fine. It was created on 1/lxc/0 (this is a xenial machine, so it is a lxd managed lxc).

Tested with master + some routing fixes. I don't think the routing fixes will have had any impact though because they only fixed bonded interfaces.

Changed in juju-core:
status: New → Incomplete
Casey Marshall (cmars) wrote :

I think I mis-typed the deploy command above, sorry, it must have been `--to lxd:3`. So maybe it's only affecting LXD, not LXC placement?

Should I avoid the use of `--to lxd:<machine number>`?

I'm trying a container deployed with `--to lxc:3`.

Cheryl Jennings (cherylj) wrote :

Casey - if you're not using the MAAS provider, you will need to specify --proxy when using juju ssh to get to the containers. There was a change in beta6 (maybe beta5?) where proxying through the state server is no longer the default.
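For example, against the deployment from the bug description (a sketch assuming a juju 2.0-beta client, where `juju ssh` accepts a `--proxy` flag):

```
# Workaround sketch: ask `juju ssh` to proxy the connection through the
# controller instead of dialing the container's address directly.
juju ssh --proxy demo-client/2
```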

Nate Finch (natefinch) wrote :

This used to Just Work. Now it doesn't. That's bad. Having to specify your own proxy manually is really ugly. The whole point of juju ssh is to jump through hoops for you.

I'm not an expert in SSH, but it seems like, if I can ssh to the host machine, we should be able to set up the proxy automatically from the host to the container on that host.
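A manual version of that jump can be sketched with OpenSSH's ProxyCommand (both addresses below are hypothetical: 10.0.0.5 stands in for the host machine's routable IP, and 10.171.223.10 for the container's bridge address, which is typically only reachable from the host):

```
# -W forwards stdin/stdout to the target host:port, which turns the
# routable host machine into a jump host for the local-only container.
ssh -o ProxyCommand="ssh -W %h:%p ubuntu@10.0.0.5" ubuntu@10.171.223.10
```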

At the very least, the error message is really bad:

$ juju ssh 1/lxc/0
ssh: connect to host port 22: No route to host

If we explicitly aren't allowing people to ssh to the container, say so. This just looks like a bug, like we're giving SSH the wrong IP to connect to.

James Tunnicliffe (dooferlad) wrote :

Agreed. Why did we change this?

Cheryl Jennings (cherylj) wrote :

See bug #1566237 for discussion on the change.

I think it's a fair point to proxy through the host for containers by default.

Changed in juju-core:
status: Incomplete → Triaged
importance: Undecided → High
tags: added: juju-release-support lxd network usability
tags: added: rc1
Curtis Hovey (sinzui) on 2016-05-23
tags: added: ci regression
tags: added: ssh
Curtis Hovey (sinzui) on 2016-05-23
Changed in juju-core:
importance: High → Critical
description: updated
Changed in juju-core:
milestone: none → 2.0-beta8
Curtis Hovey (sinzui) on 2016-05-24
Changed in juju-core:
milestone: 2.0-beta8 → 2.0.0
importance: Critical → High
Richard Harding (rharding) wrote :

The issue is that just because you can ssh to the container through the machine doesn't mean you should proxy through the controller. We need to limit which users can access the controller directly like this. I think the longer-term path is to find a way to enable proxying through the host machine in the LXD case.

I do think we can do a few things to help:

1) if the user ssh'ing is a controller admin, then proxy for them. It's ok, they could ssh there anyway

2) if the user is not and is a model user only, suggest they ssh directly to the host machine and then suggest ssh'ing to the container from there. I think we can provide these instructions without too much pain. This would only be required if the --proxy is indeed set to false for the controller.
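For a model user, that two-step workaround might look like the following (machine numbers from the report above; the container address is illustrative and would be read from `juju status`):

```
# Step 1: ssh to the host machine, which has a routable address.
juju ssh 3
# Step 2: from the host, hop to the container on the local bridge
# (10.171.223.10 is a placeholder for whatever `juju status` shows).
ssh ubuntu@10.171.223.10
```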

affects: juju-core → juju
Changed in juju:
milestone: 2.0.0 → none
milestone: none → 2.0.0
Anastasia (anastasia-macmood) wrote :

I think that this has been fixed - the last failure on master was seen on Apr 10 2016.

Changed in juju:
status: Triaged → Incomplete
milestone: 2.0.0 → none
Launchpad Janitor (janitor) wrote :

[Expired for juju because there has been no activity for 60 days.]

Changed in juju:
status: Incomplete → Expired
Stuart Bishop (stub) wrote :

I just deployed kubernetes-core to aws, which places the easyrsa unit into an lxd container:

easyrsa/0* active idle 0/lxd/0 Certificate Authority connected.

juju ssh fails for this unit:

$ juju ssh easyrsa/0
ERROR cannot connect to any address: []

juju ssh works fine for all the non-lxd contained units.

Changed in juju:
status: Expired → New
Stuart Bishop (stub) wrote :

(the above is Juju 2.0.1, after bootstrapping an aws model and running 'juju deploy kubernetes-core')

Stuart Bishop (stub) on 2016-11-15
tags: added: canonical-is
Changed in juju:
status: New → Triaged
milestone: none → 2.1.0
Reed O'Brien (reedobrien) wrote :

That should be

$ juju ssh 0/lxd/0

not

$ juju ssh easyrsa/0

easyrsa/0 is the name of the unit; 0/lxd/0 is the name of the machine. Please confirm you can ssh to the machine rather than the unit.


Changed in juju:
status: Triaged → Invalid
Stuart Bishop (stub) wrote :

From the docs:

"The machine is identified by the argument which is either a 'unit name' or a 'machine id'."

It is expected by users that the unit name works here, and it is trivial for Juju to map the unit name to the machine if that is what it needs internally.
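That mapping is indeed mechanical. A rough sketch of the lookup, done client-side against the `juju status --format=json` document (the `unit_to_machine` helper and the trimmed status JSON are illustrative; field names follow juju 2.x status output, and `jq` is assumed to be installed):

```shell
# Illustrative helper: pull a unit's machine id out of a status JSON
# document, the same resolution `juju ssh` does for unit names.
unit_to_machine() {
  # $1 = status JSON, $2 = unit name (e.g. easyrsa/0)
  echo "$1" | jq -r --arg unit "$2" \
    '.applications[].units[$unit].machine // empty'
}

# Trimmed example of a status document for the deployment above:
status='{"applications":{"easyrsa":{"units":{"easyrsa/0":{"machine":"0/lxd/0"}}}}}'
unit_to_machine "$status" "easyrsa/0"   # -> 0/lxd/0
```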

Changed in juju:
status: Invalid → New
Changed in juju:
status: New → Triaged
milestone: 2.1.0 → 2.2.0
Anastasia (anastasia-macmood) wrote :

Related to bug #1569106

Curtis Hovey (sinzui) on 2017-02-16
summary: - Can't juju ssh to lxd-placed container
+ container addressability Can't juju ssh to lxd-placed container
Curtis Hovey (sinzui) on 2017-03-24
Changed in juju:
milestone: 2.2-beta1 → 2.2-beta2
Curtis Hovey (sinzui) on 2017-03-30
Changed in juju:
milestone: 2.2-beta2 → 2.2-beta3
Kevin W Monroe (kwmonroe) wrote :

@reedobrien et al, deploying to AWS with --to lxd placement, I cannot ssh to either the unit name or the machine.

$ juju version


$ juju deploy ubuntu u1
$ juju deploy ubuntu uc1 --to lxd:0

$ juju status
Model Controller Cloud/Region Version
l1 aws-e aws/us-east-1 2.1.2

App Version Status Scale Charm Store Rev OS Notes
u1 16.04 active 1 ubuntu jujucharms 10 ubuntu
uc1 16.04 active 1 ubuntu jujucharms 10 ubuntu

Unit Workload Agent Machine Public address Ports Message
u1/0* active idle 0 ready
uc1/0* active idle 0/lxd/0 ready

Machine State DNS Inst id Series AZ
0 started i-0608bcd8aa65ec94b xenial us-east-1a
0/lxd/0 started juju-93ecfe-0-lxd-0 xenial

$ juju ssh 0/lxd/0
ERROR cannot connect to any address: []
$ juju ssh uc1/0
ERROR cannot connect to any address: []

John A Meinel (jameinel) wrote :

The machine is not externally routable, so you aren't able to ssh to it. (The address of the machine is explicitly on a local-only network that is only valid from the host machine.)

We could proxy/use the host machine as a jump host. I think ideally we would try to make it so that containers can get addresses from the provider in their host's network. This is quite provider-specific, and often requires setting flags that relax security settings (allowing traffic to the machine for MAC addresses that the provider didn't provision, etc.).
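The local-only addressing can be seen from the host machine itself; a rough check (`lxc` output format varies by version, and `lxdbr0` is the default LXD bridge name):

```
# From the host (e.g. after `juju ssh 0`): list the container and its
# address, then show the route. The container typically sits on the
# lxdbr0 bridge in a subnet only the host has a route to, which is why
# external ssh attempts get "No route to host" or an empty address list.
lxc list
ip route show dev lxdbr0
```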

Changed in juju:
milestone: 2.2-beta3 → 2.2-beta4
Changed in juju:
milestone: 2.2-beta4 → 2.2-rc1
Tim Penhey (thumper) on 2017-05-31
tags: added: container-addressability
Changed in juju:
milestone: 2.2-rc1 → none