I've used MAAS version 2.0.0 (rc4+bzr5187), deployed from ppa:maas/next inside a xenial LXD container with 2 NICs connected to libvirt bridges - one external with NAT and DHCP enabled, the other internal (for MAAS) without NAT and DHCP.
MAAS UI screenshots and network configuration:
- rack controller (interfaces and served VLANs): http://pasteboard.co/9n5Ymdrc4.png
- networks (fabrics, VLANs, subnets, spaces): http://pasteboard.co/9n6nU3ubI.png
- /etc/network/interfaces contents: http://paste.ubuntu.com/23061857/
There are 6 dual-NIC KVM nodes, but we'll be using only 4 of them - 1 for Neutron/Ceph, and 3 for Nova/Ceph. All nodes also have 2GB RAM and 21.5GB disk space. UI screenshots:
- all nodes summary: http://pasteboard.co/9n69tmwSU.png
- maas-20-node-0 (the network node): http://pasteboard.co/W6RPapEr.png
- maas-20-node-5 (the compute and bootstrap node; the other compute nodes are configured the same way, except for the different IPs): http://pasteboard.co/9n6SD8H2b.png
MAAS also has 3 zones - default (empty), zone1 (nodes 0, 1, and 2), zone2 (nodes 3, 4, 5).
All of the subnets, except 10.10.20.0/24 (external) and 10.99.20.0/24 (compute-external), have DHCP enabled from a dynamic range 10.X.20.10-10.X.20.99 (X being the VLAN ID, or 20 for the PXE subnet), and have a static range 10.X.20.100-10.X.20.200. You can ignore the 'demo-*' subnets, spaces, and VLANs (they're not related to the OpenStack deployment I'm describing here).
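To make the per-VLAN addressing convention concrete, here is a quick shell sketch that expands the pattern (the VLAN IDs in the loop are placeholders, not the actual IDs from this setup):

```shell
# Print the DHCP dynamic and static ranges implied by the 10.X.20.0/24
# convention above; the VLAN IDs (20, 30, 40) are placeholders only.
for X in 20 30 40; do
    echo "10.${X}.20.0/24: dynamic 10.${X}.20.10-10.${X}.20.99, static 10.${X}.20.100-10.${X}.20.200"
done
```

Each subnet gets its own /24, with the low part of the host range reserved for DHCP leases (commissioning/enlistment) and the upper part for static assignments handed out to deployed nodes.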
To simplify the deployment steps, I'll be using the following short bash script, deploy-4-nodes-vmaas-20.sh: http://paste.ubuntu.com/23061880/
A slightly modified openstack-base bundle (original: https://jujucharms.com/openstack-base/) is deployed by the script (bundle-3-nodes.yaml: http://paste.ubuntu.com/23061882/) with some minimal config (openstack-base-config.yaml: http://paste.ubuntu.com/23061885/).
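For a sense of what such a minimal config looks like, here is an illustrative sketch only - not the actual contents of openstack-base-config.yaml (the option names exist in the respective charms, but the values below are made up for this example):

```yaml
# Illustrative sketch, not the real openstack-base-config.yaml.
neutron-gateway:
  ext-port: eth1          # NIC carrying external (floating IP) traffic
ceph-osd:
  osd-devices: /dev/vdb   # disk(s) handed to Ceph on each node
nova-compute:
  virt-type: kvm          # full KVM rather than qemu emulation
```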
It takes about 40-60 minutes to get OpenStack up and running on the KVMs with Juju 2.0-beta15.
Script output: http://paste.ubuntu.com/23061249/
Juju status dumps:
- at the beginning: http://paste.ubuntu.com/23061029/
- midway, after all machines have started: http://paste.ubuntu.com/23061034/
- at the end, once everything settles: http://paste.ubuntu.com/23061344/
As you can see, the private/public addresses shown in status might look odd, but that's not a problem for the charms, as they use 'network-get <binding> --primary-address' internally (falling back to 'unit-get private-address|public-address' only when needed). The bundle contains a "bindings" section for each application, mapping its endpoints to spaces.
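For illustration, an application's "bindings" section in a juju 2.0-era bundle looks roughly like this (the application, charm, and space names here are placeholders, not the ones from the actual bundle):

```yaml
# Illustrative excerpt only - space names are made up for this example.
services:
  keystone:
    charm: cs:xenial/keystone
    num_units: 1
    bindings:
      "": internal-space        # default space for any unlisted endpoint
      public: public-space      # the "public" endpoint gets its own space
      shared-db: internal-space
```

With bindings in place, 'network-get public --primary-address' inside a hook returns an address from the subnet backing public-space, regardless of what unit-get would report.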
@john, please have a look at the steps and setup. How different is it from what you're trying to set up? With properly configured networks and nodes, it should be easy to replicate and modify as needed.