Interfaces used in default Grizzly install are not sensible for a 'real' installation

Bug #1192970 reported by Ian Wells
Affects: Cisco Openstack
Status: Triaged
Importance: Wishlist
Assigned to: Unassigned

Bug Description

The current setup configures two interfaces, private and external.

The private, internal interface inherits its IP address from the boot network and is used both for private communication (i.e. Openstack traffic between the services on the various machines) and for the API endpoints. Everything tends to listen on '*', but the endpoints registered in keystone all carry this address.
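For illustration, the registered endpoints end up looking roughly like the sketch below, so anyone using the API has to be able to reach the private address. This uses the keystone_endpoint type from puppet-keystone; the service, region and address values are made up, not what the installer actually writes.

keystone_endpoint { 'RegionOne/nova':
  ensure       => present,
  # All three URLs carry the private/internal address, so API users must be
  # able to reach the internal network. Region, service and address here are
  # illustrative assumptions.
  public_url   => 'http://10.0.0.10:8774/v2/%(tenant_id)s',
  admin_url    => 'http://10.0.0.10:8774/v2/%(tenant_id)s',
  internal_url => 'http://10.0.0.10:8774/v2/%(tenant_id)s',
}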

The external interface is an unaddressed interface used for Quantum routing, and is where traffic emerges from VMs.

Typically, a setup should have *three* interfaces, not two, on the control host, and only one on every compute host.

The control host requires:

- the internal interface for control traffic between services
- the external interface for VM traffic to the world
- an API interface, where users can contact the API server to make requests.

That API interface needs to be separate because, assuming your users are not actually Openstack administrators, you would normally not want them to be able to reach the internal network addresses of the machines.
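As a rough sketch, assuming the network_config type from a puppet-network style module (interface names, methods and addresses are illustrative assumptions, not what the installer generates), the control node would end up with something like:

# Internal interface: control traffic between services.
network_config { 'eth0':
  ensure    => present,
  family    => 'inet',
  method    => 'static',
  ipaddress => '10.0.0.10',
  netmask   => '255.255.255.0',
  onboot    => true,
}

# External interface: unaddressed, used for Quantum routing of VM traffic.
network_config { 'eth1':
  ensure => present,
  family => 'inet',
  method => 'manual',
  onboot => true,
}

# API interface: the address users contact to reach the API endpoints.
network_config { 'eth2':
  ensure    => present,
  family    => 'inet',
  method    => 'static',
  ipaddress => '192.0.2.10',
  netmask   => '255.255.255.0',
  onboot    => true,
}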

The compute hosts require only an internal interface. They make no use of the external interface, and it should not be created on them.

Proviso: if you are using provider networks, every machine also requires an interface dedicated to, and configured for, each provider network. This is in addition to the above.

Analysis:

An API IP address should be supplied to the control node, and it may be different from the private IP address.

The control node must have a new interface configured with that API address.

The implication is that interfaces other than the internal one (and the provider ones) can, or should, be configured by puppet-network after the reboot rather than by the interfaces template, since every network in that template currently appears on every machine. This post-reboot configuration will only be needed on the control node.
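Something along these lines, guarded so it only applies to the control node. The $::role fact, the interface name and the $api_address parameter are assumptions, not something the current manifests provide.

if $::role == 'control' {
  # Configure the extra API interface with puppet-network after the reboot,
  # rather than via the interfaces template that lands on every machine.
  network_config { 'eth2':
    ensure    => present,
    family    => 'inet',
    method    => 'static',
    ipaddress => $api_address,      # e.g. 192.0.2.10 (assumed parameter)
    netmask   => '255.255.255.0',
    onboot    => true,
  }
}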

Finally, the default route typically belongs on the API interface, not the internal interface. This means installs are typically 'no-net': during the install phase no default route is available, because you are on the private side with no external connectivity. The default route should then be added post-reboot on a case-by-case basis (i.e. only on the control server). It is important that the no-net install does not install a default route, even a bogus one, because puppet-network is not capable of fixing default routes after a reboot.
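A possible post-reboot shape for that, assuming the network_route type from a puppet-network style module, with the same assumed $::role fact and an illustrative gateway on the API network:

if $::role == 'control' {
  # Add the default route only on the control node, post-reboot, pointing out
  # of the API interface. No default route (not even a placeholder) is written
  # during the no-net install, since puppet-network cannot correct an existing
  # default route later.
  network_route { 'default':
    ensure    => present,
    network   => 'default',
    netmask   => '0.0.0.0',
    gateway   => '192.0.2.1',       # assumed gateway on the API network
    interface => 'eth2',
  }
}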

Changed in openstack-cisco:
status: New → Triaged
importance: Undecided → Wishlist