juju status shows ip address from public-api space rather than internal-api space

Bug #1659102 reported by Narinder Gupta
This bug affects 2 people
Affects: Canonical Juju · Status: Triaged · Importance: Low · Assigned to: John A Meinel · Milestone: (none)

Bug Description

We are doing a Canonical OpenStack deployment in the ETSI Plugtest to test VNFs on top of our OpenStack.

We are currently facing issues with deploying OpenStack with spaces. One root cause is that juju status shows IP addresses for the nodes on a network other than the PXE boot network.

Revision history for this message
Narinder Gupta (narindergupta) wrote :

Here is one example where a juju machine shows a different subnet for its public address. I have two networks, 10.200.2.x and 10.200.5.x, and both are routable. The PXE boot network is 10.200.5.x, but the nodes' public address is on 10.200.2.x.

Unit Workload Agent Machine Public address Ports Message
aodh/0 waiting allocating 0/lxd/0 waiting for machine
ceilometer/0 waiting allocating 2/lxd/0 waiting for machine
ceph-mon/0 waiting allocating 2/lxd/1 waiting for machine
ceph-mon/1 waiting allocating 1/lxd/0 waiting for machine
ceph-mon/2 waiting allocating 0/lxd/1 waiting for machine
ceph-osd/0 waiting allocating 0 10.200.2.15 waiting for machine
ceph-osd/1 waiting allocating 1 10.200.2.13 waiting for machine
ceph-osd/2 waiting allocating 2 10.200.2.14 waiting for machine
ceph-radosgw/0 waiting allocating 0/lxd/2 waiting for machine
cinder/0 waiting allocating 1/lxd/1 waiting for machine
glance/0 waiting allocating 0/lxd/3 waiting for machine
heat/0 waiting allocating 0/lxd/4 waiting for machine
keystone/0 waiting allocating 2/lxd/2 waiting for machine
mongodb/0 waiting allocating 0/lxd/5 waiting for machine
mysql/0 waiting allocating 0/lxd/6 waiting for machine
neutron-api/0 waiting allocating 1/lxd/2 waiting for machine
neutron-gateway/0 waiting allocating 0 10.200.2.15 waiting for machine
nodes/0 waiting allocating 0 10.200.2.15 waiting for machine
nodes/1 waiting allocating 1 10.200.2.13 waiting for machine
nodes/2 waiting allocating 2 10.200.2.14 waiting for machine
nova-cloud-controller/0 waiting allocating 1/lxd/3 waiting for machine
nova-compute/0 waiting allocating 1 10.200.2.13 waiting for machine
nova-compute/1 waiting allocating 2 10.200.2.14 waiting for machine
openstack-dashboard/0 waiting allocating 1/lxd/4 waiting for machine
opnfv-promise/0 waiting allocating 0/lxd/7 waiting for machine
rabbitmq-server/0 waiting allocating 2/lxd/3 waiting for machine

Revision history for this message
John A Meinel (jameinel) wrote : Re: [Bug 1659102] Re: juju status shows ip address from public-api space rather than internal-api space

I would actually expect "juju status" to show a public address if possible,
rather than an internal address. That doesn't mean it is the address a
given unit should advertise to its peers (that would be "juju run
--unit ceph-mon/0 network-get --primary-address BINDING").
Did you actually deploy the services bound to spaces, or did you just
deploy them on a system that has more than one network interface and
hope it would always pick what is "right" for you?
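For reference, endpoint-to-space bindings are declared at deploy time. A minimal, hypothetical bundle fragment is sketched below; the space names internal-api and public-api come from the reporter's MAAS setup, while the charm revision and the choice of the "public" endpoint are illustrative assumptions, not taken from this deployment:

```yaml
# Hypothetical bundle fragment (illustration only).
# The unnamed binding "" sets the default space for all endpoints;
# named entries override it per endpoint.
applications:
  ceph-mon:
    charm: cs:ceph-mon
    num_units: 3
    bindings:
      "": internal-api      # default: all endpoints on internal-api
      public: public-api    # assumed endpoint name, bound to public-api
```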


Revision history for this message
Anastasia (anastasia-macmood) wrote :

We track Juju 2.x issues in the "juju" project. Re-targeting and marking as Incomplete, since we are awaiting Narinder Gupta's reply.

no longer affects: juju-core
Changed in juju:
status: New → Incomplete
Revision history for this message
Narinder Gupta (narindergupta) wrote :

John,
Could you please define what a public address is here? I have two networks, 10.200.2.x (eno1) and 10.200.5.x (eno2), and both are routable to the outside world through gateways 10.200.2.254 and 10.200.5.254. I can configure my servers to PXE boot on either network. The problem is inconsistency: my scripts are configured based on the juju status of a service. How can I predict that juju status will show the IP address from a specific subnet every time?

It looks like Juju picks a random routable subnet. In this example I configured two spaces in MAAS, internal-api and public-api. All services expose the OpenStack APIs on the correct network as expected, but my scripts based on juju status fail, because I want juju status to show only IP addresses from the 10.200.5.x subnet rather than 10.200.2.x.

Thanks and Regards,
Narinder Gupta

Changed in juju:
status: Incomplete → New
Changed in juju:
status: New → Triaged
importance: Undecided → High
assignee: nobody → John A Meinel (jameinel)
Revision history for this message
John A Meinel (jameinel) wrote :

Can you confirm that "juju show-machine 0" and "juju status --format=yaml" are showing both addresses as you would expect?

Generally "juju status" in tabular form is not intended to be scraped for scripting. I would recommend using "juju status --format=json" or "--format=yaml", which are both more amenable to automatic processing.

As for whether status reports 10.200.5.x or 10.200.2.x: generally, there isn't a reason why one set of addresses would be preferred for external access over the other. (There may be sites where the PXE network is the one with external routing, and others where it *isn't* externally routable.)

We should be able to give you all addresses (and we do, everywhere except in tabular output).

I'm curious why, when you say "both are routable", one of them is failing.
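To make the scripting suggestion concrete, here is a small sketch of selecting machine addresses from a specific subnet out of JSON status output. The SAMPLE_STATUS document and the "ip-addresses" key reflect the general shape of Juju 2.x machine status, but this is an illustration of the parsing approach, not a guaranteed schema:

```python
import ipaddress
import json

# Hypothetical, trimmed-down sample of `juju status --format=json` output;
# real output contains many more keys per machine.
SAMPLE_STATUS = json.dumps({
    "machines": {
        "0": {"ip-addresses": ["10.200.2.15", "10.200.5.15"]},
        "1": {"ip-addresses": ["10.200.2.13", "10.200.5.13"]},
    }
})

def addresses_in_subnet(status_json, cidr):
    """Return {machine-id: first address} for addresses inside `cidr`."""
    net = ipaddress.ip_network(cidr)
    status = json.loads(status_json)
    result = {}
    for machine_id, machine in status.get("machines", {}).items():
        for addr in machine.get("ip-addresses", []):
            if ipaddress.ip_address(addr) in net:
                result[machine_id] = addr
                break  # keep the first match for this machine
    return result

print(addresses_in_subnet(SAMPLE_STATUS, "10.200.5.0/24"))
# e.g. {'0': '10.200.5.15', '1': '10.200.5.13'}
```

Filtering on the subnet you care about this way avoids depending on which address Juju happens to surface as "public" in the tabular view.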

Revision history for this message
Canonical Juju QA Bot (juju-qa-bot) wrote :

This bug has not been updated in 2 years, so we're marking it Low importance. If you believe this is incorrect, please update the importance.

Changed in juju:
importance: High → Low
tags: added: expirebugs-bot