Comment 2 for bug 1743106

Dmitrii Shcherbakov (dmitriis) wrote :

John,

These are the requirements that field team members and I have encountered:

1. AZ-specific charm configuration (it doesn't have to be a single AZ; it may be a group of AZs sharing a common configuration). Same charm/application - different config;
2. Large deployments may dedicate AZs to particular purposes. Say, your MAAS controls the whole DC, but you deploy OpenStack in AZ0, AZ1, AZ2 and Kubernetes in AZ3, AZ4, AZ5;
3. Large MAAS installations have "implicit tenants" (similar to projects in OpenStack) - such a tenant may create its own zone and start working on the machines that belong to it, even though those machines are not used for deployments just yet.

2 - as you mentioned, zones as a model-config would be interesting to explore, provided that the setting can later be changed to add new zones to a model or remove existing ones from it.
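As a rough sketch only (the "zones" key below is hypothetical - no such model-config option exists today), the workflow could look like:

    juju model-config zones="az0,az1,az2"        # hypothetical key: restrict this model to three zones
    juju model-config zones="az0,az1,az2,az6"    # hypothetical key: later add a new zone to the model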

3 - this is something I would discuss with the MAAS team as well, because it seems we need a Keystone-like story with (identity, project, role) assignments to do RBAC and resource separation for a given user or user group. That would solve the resource isolation problem without adding unnecessary functionality to Juju.

~~

1 - I have a practical example for this related to Neutron and Open vSwitch, and I think I need to give the full context because it may not be apparent from the RFE description.

In this particular case, different VLANs are allocated for use by the SDN on physical switches (which MAAS is not aware of). Machines in each AZ are attached to a different switch fabric, so even if the VLAN numbers match, those VLANs are not identical because they reside on different L2 networks.

So per-AZ charm configuration is needed because of the (potentially different) per-AZ switch fabric configuration - MAAS gives us no visibility into that, since it does not control the switches and cannot provide the metadata we need.

https://paste.ubuntu.com/26512589/

To interpret why the bundle is structured this way:

https://docs.openstack.org/ocata/networking-guide/deploy-ovs-provider.html (how this setup works in OpenStack - this is the variant without neutron-gateway: VM instances are connected via an OVS bridge on a compute node directly to a physical network, without being VXLAN-tunnelled to a gateway first).

Routed provider networks (VRFs) in OpenStack to model multi-segment provider networks:
https://docs.openstack.org/neutron/pike/admin/config-routed-networks.html#example-configuration
"Network or compute nodes
Configure the layer-2 agent on each node to map one or more segments to the ***appropriate physical network*** bridge or interface and restart the agent."

https://jujucharms.com/neutron-openvswitch/#charm-config-vlan-ranges
vlan-ranges charm config
(string) Space-delimited list of <physical_network>:<vlan_min>:<vlan_max> or <physical_network> specifying physical_network names usable for VLAN provider and tenant networks, as well as ranges of VLAN tags on each available for allocation to tenant networks.
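For illustration only (physnet-az2, br-data and the VLAN range are made-up names, not taken from the actual deployment), the per-AZ values follow the format above, together with the companion bridge-mappings option used in the fragment below:

    vlan-ranges: "physnet-az2:2000:2999"      # VLANs 2000-2999 usable on physical network physnet-az2
    bridge-mappings: "physnet-az2:br-data"    # maps that physical network to a local OVS bridge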

* Per-AZ config includes a physical_network name and a vlan range;

    options:
      # *bridge-mappings-az2 / *vlan-ranges-az2 are YAML aliases referencing
      # per-AZ values defined once as anchors elsewhere in the bundle
      bridge-mappings: *bridge-mappings-az2
      vlan-ranges: *vlan-ranges-az2

* charm-neutron-openvswitch needs to be configured per-AZ, but it is a subordinate of charm-nova-compute => per-AZ nova-compute applications are needed as well;

* AZ tags are used as a stand-in for the (unimplemented) AZ constraints on the per-AZ applications;

* to start deploying in a new AZ you have to add another pair of nova-compute/neutron-openvswitch applications, while the other applications are independent of that constraint (see the bundle sketch below).
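To make the layout concrete, here is a minimal, abridged sketch of such a bundle. Application names, tags, physnet/bridge names and VLAN ranges are illustrative and not taken from the pastebin; the real bundle also uses YAML anchors/aliases to avoid repeating per-AZ values, as in the fragment above, while this sketch inlines them for brevity:

    applications:
      nova-compute-az2:
        charm: cs:nova-compute
        num_units: 2
        constraints: tags=az2                  # MAAS tag standing in for an AZ constraint
      neutron-openvswitch-az2:
        charm: cs:neutron-openvswitch          # subordinate: runs on the nova-compute-az2 machines
        options:
          bridge-mappings: "physnet-az2:br-data"
          vlan-ranges: "physnet-az2:2000:2999"
      nova-compute-az3:
        charm: cs:nova-compute
        num_units: 2
        constraints: tags=az3
      neutron-openvswitch-az3:
        charm: cs:neutron-openvswitch
        options:
          bridge-mappings: "physnet-az3:br-data"
          vlan-ranges: "physnet-az3:3000:3999"
    relations:
      # usual subordinate relation between nova-compute and neutron-openvswitch
      - [ "nova-compute-az2:neutron-plugin", "neutron-openvswitch-az2:neutron-plugin" ]
      - [ "nova-compute-az3:neutron-plugin", "neutron-openvswitch-az3:neutron-plugin" ]

Everything AZ-specific has to be duplicated per application pair; that duplication is exactly what per-AZ charm configuration would remove.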